2306.09575
Quantum metric nonlinear Hall effect in a topological antiferromagnetic heterostructure
Quantum geometry - the geometry of electron Bloch wavefunctions - is central to modern condensed matter physics. Because of its quantum nature, quantum geometry has two parts: the real part, the quantum metric, and the imaginary part, the Berry curvature. The studies of Berry curvature have led to countless breakthroughs, ranging from the quantum Hall effect in 2DEGs to the anomalous Hall effect (AHE) in ferromagnets. However, in contrast to Berry curvature, the quantum metric has rarely been explored. Here, we report a new nonlinear Hall effect induced by the quantum metric, realized by interfacing even-layered MnBi2Te4 (a PT-symmetric antiferromagnet (AFM)) with black phosphorus. This novel nonlinear Hall effect switches direction upon reversing the AFM spins and exhibits distinct scaling that suggests a non-dissipative nature. Just as the AHE brought Berry curvature into the spotlight, our results open the door to discovering quantum metric responses. Moreover, we demonstrate that the AFM can harvest wireless electromagnetic energy via the new nonlinear Hall effect, thereby enabling intriguing applications that bridge nonlinear electronics with AFM spintronics.
Anyuan Gao, Yu-Fei Liu, Jian-Xiang Qiu, Barun Ghosh, Thaís V. Trevisan, Yugo Onishi, Chaowei Hu, Tiema Qian, Hung-Ju Tien, Shao-Wen Chen, Mengqi Huang, Damien Bérubé, Houchen Li, Christian Tzschaschel, Thao Dinh, Zhe Sun, Sheng-Chin Ho, Shang-Wei Lien, Bahadur Singh, Kenji Watanabe, Takashi Taniguchi, David C. Bell, Hsin Lin, Tay-Rong Chang, Chunhui Rita Du, Arun Bansil, Liang Fu, Ni Ni, Peter P. Orth, Qiong Ma, Su-Yang Xu
2023-06-16T01:40:15Z
http://arxiv.org/abs/2306.09575v2
# Quantum metric nonlinear Hall effect in a topological antiferromagnetic heterostructure ###### Abstract **Quantum geometry - the geometry of electron Bloch wavefunctions - is central to modern condensed matter physics. Because of its quantum nature, quantum geometry has two parts: the real part, the quantum metric, and the imaginary part, the Berry curvature. The studies of Berry curvature have led to countless breakthroughs, ranging from the quantum Hall effect in 2DEGs to the anomalous Hall effect (AHE) in ferromagnets. However, in contrast to Berry curvature, the quantum metric has rarely been explored. Here, we report a new nonlinear Hall effect induced by the quantum metric, realized by interfacing even-layered MnBi\({}_{2}\)Te\({}_{4}\) (a \(\mathcal{PT}\)-symmetric antiferromagnet (AFM)) with black phosphorus.
This novel nonlinear Hall effect switches direction upon reversing the AFM spins and exhibits distinct scaling that suggests a non-dissipative nature. Just as the AHE brought Berry curvature into the spotlight, our results open the door to discovering quantum metric responses. Moreover, we demonstrate that the AFM can harvest wireless electromagnetic energy via the new nonlinear Hall effect, thereby enabling intriguing applications that bridge nonlinear electronics with AFM spintronics.** **Introduction** Nonlinearities are crucial in many branches of physics, ranging from atomic physics to condensed matter and complex dynamical systems. Nonlinear electrical transport is the foundation of applications such as rectification and wave mixing. Classically, the most well-known nonlinear device is a PN diode (Fig. 1A). Noncentrosymmetric polar materials (Fig. 1B) are similar to PN diodes as they both possess an electric dipole. They have recently been discovered to show intrinsic nonlinear electrical transport, which not only suggests novel nonlinear applications but also provides a powerful probe of the quantum geometry of the conduction electrons [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. Broadly, the nonlinear transport in both diodes (Fig. 1A) and noncentrosymmetric conductors (Fig. 1B) arises from an inversion-asymmetric charge distribution (e.g. an electric dipole). Since the electron has another fundamental degree of freedom, spin, an interesting question is whether spin can also lead to an electrical nonlinearity even in a centrosymmetric lattice. One ideal platform is provided by the \(\mathcal{PT}\)-symmetric AFMs [20], where only the spins feature a noncentrosymmetric distribution (Fig. 1C). Important clues can be drawn from previous optical experiments, where optical second-harmonic generation (SHG) has been observed in \(\mathcal{PT}\)-symmetric AFMs including Cr\({}_{2}\)O\({}_{3}\) [21] and CrI\({}_{3}\) [22]. Nevertheless, nonlinear transport is distinct because it directly probes the Fermi surface electrons and in many cases their geometrical properties [1; 2; 3; 4; 5]. As such, it enables a probe of the quantum geometry [1; 2; 3; 4; 5] of the topological bands at the Fermi level of novel conductors. The quantum geometry has two parts, \(T=g-\frac{i}{2}\Omega\) [1; 2] (\(T\) is the quantum geometrical tensor). The imaginary part is the well-known Berry curvature \(\Omega_{\alpha\beta}=-2\text{Im}\sum_{m\neq n}[\langle u_{n}|i\partial_{k_{\alpha}}u_{m}\rangle\langle u_{m}|i\partial_{k_{\beta}}u_{n}\rangle]\), which describes the curvature of the wavefunction in Hilbert space (\(n,m\) are band indices and \(\alpha,\beta\) are spatial directions). Berry curvature has been identified as the source of many novel electronic and optical responses. By contrast, the real part is the quantum metric, \(g_{\alpha\beta}=\text{Re}\sum_{m\neq n}[\langle u_{n}|i\partial_{k_{\alpha}}u_{m}\rangle\langle u_{m}|i\partial_{k_{\beta}}u_{n}\rangle]\), which measures the distance between neighboring Bloch wavefunctions in Hilbert space (i.e., the distance when Bloch wavefunctions are mapped onto a Bloch sphere, see SM. IV.1). Although equally important, the quantum metric is much less explored. There have been a few examples related to the quantum metric, including predictions for the electric and orbital magnetic susceptibilities [23], the observation of a third-order Hall effect [16] and the quantum metric in atomic physics [24].
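To make these definitions concrete, the following is a minimal numerical sketch of the quantum geometric tensor for a generic two-band massive Dirac Hamiltonian; the model and its parameters are purely illustrative (they are not the BP/MnBi\({}_{2}\)Te\({}_{4}\) model discussed later), and the sum-over-states form used here is equivalent to the definitions of \(g_{\alpha\beta}\) and \(\Omega_{\alpha\beta}\) above.

```python
import numpy as np

# Toy two-band massive Dirac model (illustrative only): H(k) = v (kx sx + ky sy) + m sz
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(kx, ky, v=1.0, m=0.1):
    return v * (kx * sx + ky * sy) + m * sz

def quantum_geometric_tensor(kx, ky, band=0, dk=1e-4):
    """T_ab = sum_{m != n} <u_n|dH/dk_a|u_m><u_m|dH/dk_b|u_n> / (e_n - e_m)^2.
    The real part of T is the quantum metric g_ab; -2 Im(T) is the Berry curvature."""
    e, u = np.linalg.eigh(hamiltonian(kx, ky))
    dHx = (hamiltonian(kx + dk, ky) - hamiltonian(kx - dk, ky)) / (2 * dk)
    dHy = (hamiltonian(kx, ky + dk) - hamiltonian(kx, ky - dk)) / (2 * dk)
    n = band
    T = np.zeros((2, 2), dtype=complex)
    for m in range(len(e)):
        if m == n:
            continue
        v_nm = [u[:, n].conj() @ dHx @ u[:, m], u[:, n].conj() @ dHy @ u[:, m]]
        for a in range(2):
            for b in range(2):
                T[a, b] += v_nm[a] * np.conj(v_nm[b]) / (e[n] - e[m]) ** 2
    return T.real, -2 * T.imag  # (quantum metric g, Berry curvature Omega)

g, omega = quantum_geometric_tensor(0.05, 0.02, band=0)
print("quantum metric g =\n", g)
print("Berry curvature Omega_xy =", omega[0, 1])
```

For the lower band of such a gapped Dirac model, both the quantum metric and the Berry curvature are largest near the gap edge, the regime exploited in the experiments described below.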
However, examples have remained limited and how quantum metric regulates the electronic motion remains largely unknown. Recently, theory has started to predict a wide range of exotic quantum metric responses [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. One particularly intriguing platform is the \(\mathcal{PT}\)-symmetric AFM [25; 26; 27; 28; 29], because \(\mathcal{PT}\) forces the Berry curvature to vanish identically, hence isolating novel phenomena related to quantum metric. Here, we focus on the recent proposal of a nondissipative, intrinsic second-order Hall effect induced by the quantum metric dipole [25; 26; 29]. We design and fabricate a feasible material platform and demonstrate the first realization. To conceptualize this new nonlinear Hall effect, we draw comparison with the well-known AHE in ferromagnetic metals [39], where Berry curvature leads to the anomalous velocity and therefore the AHE, \(v_{\text{anomalous}}\propto\int_{\mathbf{k}}\mathbf{E}_{\parallel}\times \mathbf{\Omega}\), (\(\mathbf{E}_{\parallel}\) is the in-plane source-drain electric field). By contrast, in a \(\mathcal{PT}\)-symmetric AFM, Berry curvature is zero due to \(\mathcal{PT}\). However, a nonzero quantum metric \(g\) in the two-band limit can induce an anomalous velocity to the second-order of \(\mathbf{E}_{\parallel}\), \(v_{\text{anomalous}}\propto\int_{\mathbf{k}}\mathbf{E}_{\parallel}\times[ \nabla_{\mathbf{k}}\times(g\mathbf{E}_{\parallel})]\), as proposed in [25]. This leads to the intrinsic second-order Hall effect. From the expression above, one can show that this effect is nonzero only when the system breaks both \(\mathcal{P}\) and \(\mathcal{T}\). Therefore, we need \(\mathcal{PT}\)-symmetric AFM conductors with a large quantum metric on the Fermi surface. We have carefully considered possible materials, and identified 2D even-layered MnBi\({}_{2}\)Te\({}_{4}\)[18; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50] as an ideal platform. Even-layered MnBi\({}_{2}\)Te\({}_{4}\) is a \(\mathcal{PT}\)-symmetric AFM. Moreover, its topological bands support gate-tunable transport and a giant quantum metric. However, its lattice has \(\mathcal{C}_{3z}\) rotational symmetry (Figs. 1D,E), which forces the effect to vanish [26]. To break \(C_{3z}\), we interface it with black phosphorus (BP) [51]. **Demonstration of rotational symmetry breaking** We start by showing that interfacing MnBi\({}_{2}\)Te\({}_{4}\) with BP indeed breaks its \(\mathcal{C}_{3z}\) rotational symmetry. To this end, we study the directional dependence of the resistance [9; 52] of MnBi\({}_{2}\)Te\({}_{4}\) without and with BP. We fabricated a 6-septuple-layer (6SL) MnBi\({}_{2}\)Te\({}_{4}\) device with radially distributed electrical contacts (Device-BM1). As shown by the blue curve in Fig. 1G, the four-probe resistance (\(T=1.8\) K) is found to be fully isotropic, consistent with the presence of the \(\mathcal{C}_{3z}\) symmetry. We then stacked a BP layer (\(\sim 10\) nm) onto this MnBi\({}_{2}\)Te\({}_{4}\) sample and performed the measurements again. As shown by the red curve in Fig. 1G, the resistance develops a clear anisotropy with a \(180^{\circ}\) periodicity, providing a clear signature of the breaking of \(\mathcal{C}_{3z}\) symmetry (In SM. I.3, we present additional experiments to show that the transport signal is dominated by the MnBi\({}_{2}\)Te\({}_{4}\) layer of the heterostructure). The transverse resistance and two-probe resistance also show the breaking of \(\mathcal{C}_{3z}\) (fig. S6). 
We further substantiate the breaking of \(\mathcal{C}_{3z}\) symmetry by an independent method, the optical second harmonic generation (SHG) at room temperature. As shown in Fig. 1H, our SHG data also shows the clear breaking of \(\mathcal{C}_{3z}\) symmetry (see detailed discussions in SM. I.5 and fig. S7). Our demonstration of \(\mathcal{C}_{3z}\) breaking establishes the BP/MnBi\({}_{2}\)Te\({}_{4}\) heterostructure as an ideal platform to search for this effect. **Observation of the nonlinear Hall effect** In order to measure the linear and nonlinear electrical transport, we pass a current at frequency \(\omega\) (\(I^{\omega}\)) and use the lock-in technique to detect linear voltage \(V^{\omega}\) and nonlinear voltage \(V^{2\omega}\). We describe the nonlinear voltage as \(V^{2\omega}_{ijk}\), where \(i\) is the direction of the nonlinear voltage \(V^{2\omega}\) and \(j,k\) are the directions of the injected current \(I^{\omega}\). All measurements are performed at \(B=0\). Figure 1I shows the nonlinear Hall voltage \(V^{2\omega}_{yxx}\) of the Device-BM1 before and after interfaced with BP. Remarkably, a prominent nonlinear Hall signal only emerges after BP is introduced. This is in sharp contrast to the linear voltage (inset of Fig. 1I), which becomes even slightly smaller upon the introduction of BP. Such observation agrees well with the theoretical expectation of the intrinsic nonlinear Hall effect induced by a quantum metric dipole. To exclude that the effect is caused by a Berry curvature dipole [7; 9; 10; 12], which leads to a second-order Hall effect in nonmagnetic, noncentrosymmetric conductors, we study the relationship between the second-order nonlinear Hall effect and the AFM order in MnBi\({}_{2}\)Te\({}_{4}\). **The AFM spin-induced nonlinearity** Overall, we have fabricated 26 BP/MnBi\({}_{2}\)Te\({}_{4}\) heterostructure devices. In all of the 26 devices, we have observed the nonlinear Hall effect with consistent behaviors as a function of AFM order, spatial direction, scattering time, vertical electric field and doping (see fig. S15 and table S1 for a summary of all 26 devices). Here, we focus on the Device-BMB1 (Fig. 2A), which has 2L BP on both sides of 6SL MnBi\({}_{2}\)Te\({}_{4}\). Moreover, we have made sure that the crystalline \(a\) axes of the BPs and the MnBi\({}_{2}\)Te\({}_{4}\) are all aligned (Fig. 2A). Such a carefully controlled configuration is important to preserve MnBi\({}_{2}\)Te\({}_{4}\)'s \(\mathcal{PT}\) symmetry, which enforces the Berry curvature and Berry curvature dipole to vanish. Figure 2B shows the basic nonlinear transport responses. A large transverse nonlinear response \(V_{yxx}^{2\omega}\) is found, showing the nonlinear Hall effect in Device-BMB1. We have also measured the longitudinal nonlinear response \(V_{xxx}^{2\omega}\), which shows no observable signal. Therefore, our data reveals an interesting "Hall dominance" in the nonlinear transport. We now focus on exploring how the nonlinear Hall signal depends on opposite AFM states. In ferromagnets, the opposite FM states can be controlled by sweeping \(B\) field. In \(\mathcal{PT}\)-symmetric AFMs including Cr\({}_{2}\)O\({}_{3}\), even-layered CrI\({}_{3}\) and even-layered MnBi\({}_{2}\)Te\({}_{4}\)[46; 53; 54], previous works have shown that the opposite AFM states can be controlled by sweeping vertical \(B_{z}\) field under a fixed vertical \(E_{z}\) field. 
Hence, we follow the procedures established by previous works [46]: under a fixed \(E_{z}\) (\(E_{z}=-0.17\) V/nm), we sweep \(B_{z}\) from \(-8\) T to \(0\) T or from \(+8\) T to \(0\) T to prepare the two AFM states (Fig. 2, C and D). We first study the AFM-I. The linear voltage \(V_{xx}^{\omega}\) (Fig. 2E) exhibits a typical Ohm's law behavior. The nonlinear voltage \(V_{yxx}^{2\omega}\) (Fig. 2G) is prominent and its sign is positive. We then prepare AFM-II. The linear voltage \(V_{xx}^{\omega}\) (Fig. 2F) remains unchanged. In sharp contrast, the nonlinear voltage \(V_{yxx}^{2\omega}\) (Fig. 2H) flips sign. For both AFM-I and II, when we measure \(V_{yxx}^{2\omega}\) while warming up, we find that the nonlinear Hall effect is only present in the AFM phase but is absent in the nonmagnetic phase (Fig. 2, I and J). Therefore, we demonstrate that our nonlinear Hall effect arises from a spin-induced nonlinearity in the Fermi surface electrons. We now perform further systematic studies. Because the nonlinear Hall current flips sign upon reversing the AFM order, all the nonlinear Hall data (apart from Fig. 2) are obtained by taking the difference between the two AFM domains. First, the intrinsic nonlinear Hall effect is expected to be dissipationless. Interestingly, this represents the first known dissipationless nonlinear transport effect. Here, "dissipationless" means that the intrinsic nonlinear Hall conductivity is independent of the scattering time \(\tau\) [25; 26; 27], just as the intrinsic AHE in ferromagnetic metals was referred to as a dissipationless effect [39] when the anomalous Hall conductivity is independent of \(\tau\). In both cases, there is still dissipation through the linear Drude conductivity \(\sigma_{xx}\). They are thus different from the QAHE, which has no dissipation channel at all. The nonlinear Hall conductivity can be directly extracted from our data by \(\sigma_{yxx}^{2\omega}=J_{yxx}^{2\omega}/(E_{x}^{\omega})^{2}=\frac{V_{yxx}^{2\omega}}{I_{x}^{2}R_{xx}^{3}}\frac{l^{3}}{w^{2}d}\), where \(l,w,d\) are the length, width and thickness of the sample. Previous experiments have studied the scattering time \(\tau\) dependence of various Hall effects [9; 12; 17; 39] by investigating the scaling between the corresponding Hall conductivity and the Drude conductivity. Therefore, following the established method, we study the scaling between \(\sigma_{yxx}^{2\omega}\) and \(\sigma_{xx}\). Our data (Fig. 3A) show that \(\sigma_{yxx}^{2\omega}\) is independent of \(\sigma_{xx}\), consistent with being non-dissipative. Second, the intrinsic nonlinear Hall effect does not require a noncentrosymmetric lattice or any explicit breaking of \(\mathcal{PT}\) symmetry. To test this, we explicitly break \(\mathcal{PT}\) by applying a vertical \(E_{z}\) field via dual gating. As shown in Fig. 3D, the nonlinear Hall signal is already prominent even at \(E_{z}=0\), confirming that it does not require any \(\mathcal{PT}\) breaking. Moreover, the nonlinear Hall signal is symmetric for \(\pm E_{z}\), also consistent with the expectation (see SM. IV.2). Third, the nonlinear Hall effect is expected to be sensitive to the direction of the incident current \(I^{\omega}\). In Fig. 3B, we measure the nonlinear Hall conductivity as a function of the direction of \(I^{\omega}\). Indeed, we find that the signal is most prominent when \(I^{\omega}\) is along a particular in-plane direction.
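As an illustration of how the extraction formula above is applied, the short helper below converts a measured second-harmonic voltage into \(\sigma_{yxx}^{2\omega}\); the numerical values are hypothetical placeholders rather than data from the devices discussed here.

```python
def nonlinear_hall_conductivity(V2w, I, Rxx, length, width, thickness):
    """sigma_yxx^{2w} = V_yxx^{2w} * l^3 / (I_x^2 * R_xx^3 * w^2 * d), all in SI units."""
    return V2w * length**3 / (I**2 * Rxx**3 * width**2 * thickness)

# Hypothetical example numbers (illustration only, not measured values):
sigma_2w = nonlinear_hall_conductivity(
    V2w=2e-6,        # second-harmonic Hall voltage V^{2w}_{yxx}: 2 uV
    I=1e-6,          # excitation current I^{w}_{x}: 1 uA
    Rxx=5e3,         # four-probe longitudinal resistance: 5 kOhm
    length=10e-6,    # channel length l: 10 um
    width=5e-6,      # channel width w: 5 um
    thickness=8e-9,  # flake thickness d: ~8 nm
)
print(f"sigma_yxx^2w = {sigma_2w:.3e} A V^-2")
```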
By measuring this angular dependence, we experimentally mapped out the direction of the relevant geometrical dipole (in our case it is the quantum metric dipole, as we demonstrate next). **Demonstrating the quantum metric mechanism by excluding competing mechanisms** We aimed to eliminate the Berry curvature dipole by aligning the crystalline \(a\) axes of the BPs and the MnBi\({}_{2}\)Te\({}_{4}\) to preserve \(\mathcal{PT}\) symmetry (Fig. 2A). Nevertheless, let us assume that the alignment is imperfect, so that a Berry curvature dipole is allowed. We now show that the observed relationship between the nonlinear Hall signal and the AFM order can discern between the Berry curvature dipole \(D_{\text{Berry}}\) and the quantum metric dipole \(D_{\text{Metric}}\) [26]. \(D_{\text{Berry}}\) can be understood as a distribution of the Berry curvature around the Fermi surface such that it is larger on one side of the Fermi surface than on the opposite side. A similar picture holds for \(D_{\text{Metric}}\) (Fig. 3). As we observe that the nonlinear Hall signal changes sign upon the reversal of the AFM order, the dipole that causes our observed nonlinear Hall signal must also flip. Let us assume that the AFM-I has \(D_{\text{Berry}}>0\) and \(D_{\text{Metric}}>0\), which is visualized in a tilted gapped Dirac band structure in Figs. 3E and G. We now flip the AFM order to the AFM-II by performing time reversal \(\mathcal{T}\). Under \(\mathcal{T}\), the bands are flipped between \(\pm\mathbf{k}\) (Figs. 3F-H), the Berry curvature flips sign (\(\Omega(k)\xrightarrow{\mathcal{T}}-\Omega(-k)\)), but the quantum metric keeps the same sign (\(g(k)\xrightarrow{\mathcal{T}}g(-k)\)). Hence, from Figs. 3F-H, one can see that \(D_{\text{Berry}}(\text{AFM-II})=D_{\text{Berry}}(\text{AFM-I})\), but \(D_{\text{Metric}}(\text{AFM-II})=-D_{\text{Metric}}(\text{AFM-I})\). Therefore, our observation that the nonlinear Hall signal flips sign upon reversing the AFM order excludes the Berry curvature dipole mechanism. Within the nonlinear effects that flip sign upon reversing the AFM order, there is another possibility, the second-order Drude effect [8; 15; 20; 26]. This effect can be ruled out based on our scaling data in Fig. 3A, because it is expected to be proportional to \(\tau^{2}\) [26]. Moreover, the nonlinear Hall effect (NHE) is antisymmetric (upon exchanging the first two indices), \(\sigma^{\rm NHE}_{\alpha\beta\gamma}=-\sigma^{\rm NHE}_{\beta\alpha\gamma}\), but the second-order Drude effect (SODE) is symmetric, \(\sigma^{\rm SODE}_{\alpha\beta\gamma}=\sigma^{\rm SODE}_{\beta\alpha\gamma}\) [26]. Using a novel electrical sum-frequency generation method (SM. II.2), we showed that our signal is indeed antisymmetric, i.e., \(\sigma^{2\omega}_{yxx}=-\sigma^{2\omega}_{xyx}\), which demonstrates that the SODE is insignificant in our signal (SM. II.2). Finally, we also carefully addressed other competing origins such as thermal effects and accidental diode junctions (SM. II.3). By excluding competing mechanisms, we establish the quantum metric dipole as the underlying mechanism. **Energy-resolved probe of quantum metric in \(\mathcal{PT}\)-symmetric AFM** We also study the evolution of the nonlinear conductivity \(\sigma^{2\omega}_{yxx}\) with the charge density \(n\). As shown in Fig. 4A, the nonlinear Hall signal is zero inside the charge neutrality gap. This is consistent with the expectation that the nonlinear Hall effect is a Fermi surface property. As we tune the Fermi energy away from charge neutrality, the nonlinear Hall signal emerges.
Importantly, the conductivities in the electron and hole regimes have the same sign. As we go deeper into the electron-doped regime, the signal reverses sign again. We now provide an intuitive physical picture to understand the large quantum metric dipole and its Fermi level dependence. MnBi\({}_{2}\)Te\({}_{4}\) features Dirac surface states, which are gapped due to the AFM, leading to a large quantum metric near the gap edge. Moreover, because the AFM order breaks both \(\mathcal{T}\) and \(\mathcal{P}\), the Dirac bands are asymmetric about \(\mathbf{k}=0\), as shown in Fig. 3G. Hence, at a fixed energy, positive and negative momenta have a different quantum metric, leading to a nonzero quantum metric dipole. Intuitively, we can understand the sign of the nonlinear Hall signal by asking which momentum side has the larger quantum metric. We see from Fig. 3G that both the upper and lower parts of the Dirac cone have \(g(+k_{\rm F})>g(-k_{\rm F})\), suggesting that the nonlinear Hall signals should show the same sign in the electron and hole regimes, consistent with our data (Fig. 4A). The additional sign change in the electron-doped regime is beyond this simple picture. To achieve a more comprehensive understanding, we built an effective model of the BP/6SL MnBi\({}_{2}\)Te\({}_{4}\)/BP heterostructure. Due to the incommensurability of the BP and MnBi\({}_{2}\)Te\({}_{4}\) lattices, we need to derive the coupling between the Bloch states of the two materials in the real-space continuum (i.e. within the extended Brillouin zone BZ). The low-energy bands are located at the BZ center \(\Gamma\), so only Bloch bands with the same momentum hybridize. The coupling amplitude depends only on the characteristic decay length of the atomic orbitals, as any discrete lattice structure is averaged out [51]. The Hamiltonian reads \(\hat{h}(k_{x},k_{y})=\left(\begin{array}{ccc}\hat{h}_{\rm MBT}&\hat{U}_{t}& \hat{U}_{b}\\ \hat{U}_{t}^{\dagger}&\hat{h}_{\rm BP,t}&0\\ \hat{U}_{b}^{\dagger}&0&\hat{h}_{\rm BP,b}\end{array}\right)\), where \(\hat{h}_{\rm MBT}\) and \(\hat{h}_{\rm BP,t(b)}\) are the Hamiltonians for 6SL MnBi\({}_{2}\)Te\({}_{4}\) and the top (bottom) BP, respectively. \(\hat{U}_{t}\) and \(\hat{U}_{b}\) denote the nearest-neighbor couplings between MnBi\({}_{2}\)Te\({}_{4}\) and BP, which are crucial for breaking \(C_{3z}\). We first turn off the coupling between MnBi\({}_{2}\)Te\({}_{4}\) and BP (\(\hat{U}_{t}=\hat{U}_{b}=0\)). The Fermi surface shown in Fig. 4C (\(-50\) meV) is \(C_{3z}\) symmetric and there is already a large quantum metric (\(g_{xx}\) and \(g_{yx}\)) around it. According to Ref. [26], the \(D_{\rm Metric}\) responsible for the nonlinear Hall effect is given by \(D_{\rm Metric}=\int_{\bf k}(v_{y}g_{xx}-v_{x}g_{yx})\delta(\varepsilon-\varepsilon_{\rm F})\) (\(v\) is the Fermi velocity). We plot the integral kernel (\(v_{y}g_{xx}-v_{x}g_{yx}\)) as color in Fig. 4D. Positive and negative contributions around the contour exactly cancel because of \(C_{3z}\) symmetry, so the integral goes to zero (the left panel in Fig. 4D). We then turn on the MnBi\({}_{2}\)Te\({}_{4}\)-BP couplings, which break \(C_{3z}\). For this \(C_{3z}\)-broken contour, we observe unequal contributions from the two colors, leading to a nonzero \(D_{\rm Metric}\) (the right panel in Fig. 4D). Figure 4E shows the band structure of the BP/6SL MnBi\({}_{2}\)Te\({}_{4}\)/BP heterostructure, based on which we can compute the intrinsic nonlinear Hall conductivity \(\sigma_{yxx}^{2\omega}\) as a function of chemical potential.
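As a toy illustration of such a calculation, the sketch below evaluates the kernel \(v_{y}g_{xx}-v_{x}g_{yx}\) band by band and integrates it over a broadened Fermi surface to obtain the intrinsic nonlinear Hall conductivity (in arbitrary units) as a function of chemical potential. It uses a tilted, gapped two-band Dirac model rather than the full heterostructure Hamiltonian, and all parameters are illustrative; for a two-band model the additional inter-band corrections discussed below vanish, so the quantum metric dipole term is the entire answer.

```python
import numpy as np

# Tilted, gapped Dirac toy model (illustrative parameters only):
# H(k) = t*kx*I + v*(kx*sx + ky*sy) + m*sz; the tilt makes the Fermi contour
# asymmetric in kx, so the quantum metric dipole can be nonzero.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def H(kx, ky, v=1.0, m=0.1, t=0.3):
    return t * kx * I2 + v * (kx * sx + ky * sy) + m * sz

def bands_velocity_metric(kx, ky, dk=1e-4):
    """Band energies e_n, band velocities v^n and band-resolved quantum metric g^n_ab."""
    e, u = np.linalg.eigh(H(kx, ky))
    dHx = (H(kx + dk, ky) - H(kx - dk, ky)) / (2 * dk)
    dHy = (H(kx, ky + dk) - H(kx, ky - dk)) / (2 * dk)
    nb = len(e)
    vel = np.zeros((nb, 2))
    g = np.zeros((nb, 2, 2))
    for n in range(nb):
        vel[n, 0] = np.real(u[:, n].conj() @ dHx @ u[:, n])
        vel[n, 1] = np.real(u[:, n].conj() @ dHy @ u[:, n])
        for mm in range(nb):
            if mm == n:
                continue
            w = [u[:, n].conj() @ dHx @ u[:, mm], u[:, n].conj() @ dHy @ u[:, mm]]
            for a in range(2):
                for b in range(2):
                    g[n, a, b] += np.real(w[a] * np.conj(w[b])) / (e[n] - e[mm]) ** 2
    return e, vel, g

def sigma_yxx(mu, kmax=1.0, nk=101, eta=0.02):
    """Fermi-surface integral of (v_y g_xx - v_x g_yx)/(e_n - e_nbar) with a broadened delta."""
    ks = np.linspace(-kmax, kmax, nk)
    dk2 = (ks[1] - ks[0]) ** 2
    total = 0.0
    for kx in ks:
        for ky in ks:
            e, vel, g = bands_velocity_metric(kx, ky)
            for n in range(len(e)):
                nbar = 1 - n  # two-band model: nbar is the other band
                delta = np.exp(-((e[n] - mu) / eta) ** 2) / (eta * np.sqrt(np.pi))
                total += (vel[n, 1] * g[n, 0, 0] - vel[n, 0] * g[n, 1, 0]) \
                         / (e[n] - e[nbar]) * delta * dk2
    return -2.0 * total  # prefactor -2e^3 (and hbar factors) absorbed into the units

for mu in (-0.3, -0.15, 0.0, 0.15, 0.3):
    print(f"mu = {mu:+.2f} -> sigma_yxx^2w (arb. units) = {sigma_yxx(mu):+.4f}")
```

Inside the gap (\(|\mu|\lesssim m\)) the broadened delta suppresses the integral, while just outside the gap the kernel is largest, mirroring the Fermi-surface character of the measured signal.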
In particular, near the charge neutrality gap, we found that \(\sigma_{yxx}^{2\omega}\) indeed mainly comes from the quantum metric of the Dirac surface states, consistent with the intuitive picture above. The sign inversion in the electron-doped regime mainly comes from the quantum metric of the avoided crossing inside the conduction bands. Note that, due to the multiband nature of our model, \(\sigma_{yxx}^{2\omega}\) was calculated from the general expression \(\sigma_{yxx}^{2\omega}=-2e^{3}\sum_{n,m}^{\varepsilon_{n}\neq\varepsilon_{m}}{\rm Re}\int_{\bf k}\left(\frac{v_{y}^{n}\langle u_{n}|i\partial_{k_{x}}u_{m}\rangle\langle u_{m}|i\partial_{k_{x}}u_{n}\rangle}{\varepsilon_{n}-\varepsilon_{m}}-\frac{v_{x}^{n}\langle u_{n}|i\partial_{k_{y}}u_{m}\rangle\langle u_{m}|i\partial_{k_{x}}u_{n}\rangle}{\varepsilon_{n}-\varepsilon_{m}}\right)\delta(\varepsilon_{n}-\varepsilon_{\rm F})\) [26]. This general expression can be decomposed into the quantum metric dipole \(D_{\rm Metric}\) contribution plus additional inter-band contributions (AIC), \[\sigma_{yxx}^{2\omega}=-2e^{3}\sum_{n}\int_{\bf k}\frac{v_{y}^{n}g_{xx}^{n}-v_{x}^{n}g_{yx}^{n}}{\varepsilon_{n}-\varepsilon_{\bar{n}}}\delta(\varepsilon_{n}-\varepsilon_{\rm F})+{\rm AIC}, \tag{1}\] where the first term is the quantum metric dipole contribution, and the second term is \({\rm AIC}=-2e^{3}\sum_{n,m}^{\varepsilon_{m}\neq\varepsilon_{n},\varepsilon_{\bar{n}}}{\rm Re}\int_{\bf k}\left(\frac{v_{y}^{n}\langle u_{n}|i\partial_{k_{x}}u_{m}\rangle\langle u_{m}|i\partial_{k_{x}}u_{n}\rangle}{\varepsilon_{n}-\varepsilon_{m}}-\frac{v_{x}^{n}\langle u_{n}|i\partial_{k_{y}}u_{m}\rangle\langle u_{m}|i\partial_{k_{x}}u_{n}\rangle}{\varepsilon_{n}-\varepsilon_{m}}\right)\frac{\varepsilon_{m}-\varepsilon_{\bar{n}}}{\varepsilon_{n}-\varepsilon_{\bar{n}}}\delta(\varepsilon_{n}-\varepsilon_{\rm F})\) (\(\bar{n}\) is the band whose energy is closest to that of band \(n\)). In our BP/6SL MnBi\({}_{2}\)Te\({}_{4}\)/BP system, we found that the quantum metric dipole contribution strongly dominates, whereas the AIC is small (see details in SM. IV.3). By comparing the calculated and measured \(\sigma_{yxx}^{2\omega}\) (Fig. 4, A and B), we find good agreement. Therefore, our nonlinear Hall measurement is a powerful, energy-resolved probe of the quantum metric. **AFM spin-based wireless rectification and outlook** The second-order nonlinear effect enables not only frequency doubling (\(\omega\to 2\omega\)) but also rectification (\(\omega\to{\rm DC}\)). Rectification is crucial for harvesting electromagnetic radiation energy [12; 15] because we can convert the electromagnetic radiation into DC electricity. We use the intrinsic AFM nonlinear Hall effect to demonstrate wireless rectification with zero external bias (battery-free) and without a magnetic field. We inject microwave radiation and measure the DC signal. As shown in Fig. 4F, we observe a clear rectified DC voltage in response to the microwave radiation, which shows a broadband response, including the WiFi frequencies (2.4 and 5 GHz) and even higher frequencies (see fig. S21). In summary, we have presented the first experimental realization of the intrinsic second-order Hall effect. This effect realizes an electrical nonlinearity induced by the AFM spins and provides a rare example of a quantum metric response. Both aspects are of fundamental interest.
Just as the AHE about a decade ago inspired the discovery of a variety of Berry curvature responses, we hope that our work opens the door to the experimental search for quantum metric responses. As highlighted by recent theoretical studies, the influence of the quantum metric is expected to span many different areas, ranging from nonlinear responses in \(\mathcal{PT}\)-symmetric AFMs to flat-band conductivity, superconductivity and charge orders in moiré systems, fractional Chern insulators, and the \(\mathbf{k}\)-space dual of gravity [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. Another interesting future direction is to explore the nonlinear responses in canted AFM materials, where a nonzero Berry curvature of higher order in the magnetization has recently been observed [47; 50; 55]. In terms of materials, vdW interface engineering has been widely applied to engineer band structures, such as the band alignment in semiconductors. We show that, beyond "band structure engineering", vdW interfaces can be used to engineer the properties of the wavefunction, i.e., "quantum geometry engineering" [51]. We demonstrate that the topological Dirac surface state at the interface of a TI can, upon proper engineering, be the source of a wide range of novel topological and geometrical phenomena beyond the Berry curvature. In terms of spin-induced electrical nonlinearity, our observation opens the possibility of using AFM spins to harvest electromagnetic energy and to realize self-powered AFM spintronic devices.
2304.07321
Constraints on the proton fraction of cosmic rays at the highest energies and the consequences for cosmogenic neutrinos and photons
Over the last decade, observations have shown that the mean mass of ultra-high-energy cosmic rays (UHECRs) increases progressively toward the highest energies. However, the precise composition is still unknown, and several theoretical studies hint at the existence of a subdominant proton component up to the highest energies. Motivated by the exciting prospect of performing charged-particle astronomy with ultra-high-energy (UHE) protons we quantify the level of UHE-proton flux that is compatible with present multimessenger observations and the associated fluxes of neutral messengers produced in the interactions of the protons. We study this scenario with numerical simulations of two independent populations of extragalactic sources and perform a fit to the combined UHECR energy spectrum and composition observables, constrained by diffuse gamma-ray and neutrino observations. We find that up to of order $10\%$ of the cosmic rays at the highest energies can be UHE protons, although the result depends critically on the selected hadronic interaction model for the air showers. Depending on the maximum proton energy ($E_\text{max}^\text{p}$) and the redshift evolution of sources, the associated flux of cosmogenic neutrinos and UHE gamma rays can significantly exceed the multimessenger signal of the mixed-mass cosmic rays. Moreover, if $E_\text{max}^\text{p}$ is above the GZK limit, we predict a large flux of UHE neutrinos above EeV energies that is absent in alternate scenarios for the origin of UHECRs. We present the implications and opportunities afforded by these UHE proton, neutrino and photon fluxes for future multimessenger observations.
Domenik Ehlert, Arjen van Vliet, Foteini Oikonomou, Walter Winter
2023-04-14T18:00:08Z
http://arxiv.org/abs/2304.07321v2
Constraints on the proton fraction of cosmic rays at the highest energies and the consequences for cosmogenic neutrinos and photons ###### Abstract Over the last decade, observations have shown that the mean mass of ultra-high-energy cosmic rays (UHECRs) increases progressively toward the highest energies. However, the precise composition is still unknown, and several theoretical studies hint at the existence of a subdominant proton component up to the highest energies. Motivated by the exciting prospect of performing charged-particle astronomy with ultra-high-energy (UHE) protons we quantify the level of UHE-proton flux that is compatible with present multimessenger observations and the associated fluxes of neutral messengers produced in the interactions of the protons. We study this scenario with numerical simulations of two independent populations of extragalactic sources and perform a fit to the combined UHECR energy spectrum and composition observables, constrained by diffuse gamma-ray and neutrino observations. We find that up to of order 10% of the cosmic rays at the highest energies can be UHE protons, although the result depends critically on the selected hadronic interaction model for the air showers. Depending on the maximum proton energy (\(E_{\rm max}^{\rm p}\)) and the redshift evolution of sources, the associated flux of cosmogenic neutrinos and UHE gamma rays can significantly exceed the multimessenger signal of the mixed-mass cosmic rays. Moreover, if \(E_{\rm max}^{\rm p}\) is above the GZK limit, we predict a large flux of UHE neutrinos above EeV energies that is absent in alternate scenarios for the origin of UHECRs. We present the implications and opportunities afforded by these UHE proton, neutrino and photon fluxes for future multimessenger observations. keywords: astroparticle physics -- cosmic rays -- neutrinos -- gamma-rays -- methods:numerical ## 1 Introduction Ultra-high-energy cosmic rays (UHECRs), charged particles of astrophysical origin with energy above \(\sim 10^{18}\) eV, are the most energetic cosmic messengers and are, as such, probes of the most extreme astrophysical environments. Because of extragalactic and Galactic magnetic fields, their sources remain elusive, even after years of high-precision observation by the latest generation of UHECR detectors, in particular the Pierre Auger Observatory (Auger) and the Telescope Array (TA). Observations suggest that the composition of UHECRs is surprisingly pure, with each accelerated nuclear species only dominant in a very narrow band of the UHECR spectrum, and the entire spectrum is produced through a carefully-tuned combination of the individual peaks (e.g. Unger et al., 2015; Aab et al., 2017; Alves Batista et al., 2019; Heinze et al., 2019). The combination of a smooth increase of average mass and pure composition at all energies implies that the population variance of sources must be remarkably low (Ehlert et al., 2022; see also Heinze et al., 2020). Under these circumstances, the observed flux-cutoff at \(E_{\rm CR}\gtrsim 50\) EeV is generally predicted to be an effect of the maximum particle energy reachable at the cosmic accelerators. Within this "Peters cycle" (Peters, 1961; Gaisser et al., 2016) model of cosmic-ray acceleration with rigidity-dependent maximum energy, no light cosmic rays (CRs) are expected at the highest energies. 
Nevertheless, the existence of protons or light nuclei at the highest energies, where there are no measurements of composition-sensitive observables with the fluorescence detectors of Auger and TA, cannot be ruled out at present. A very interesting possibility would be the existence of an additional proton-dominated component at the highest energies. Such a flux cannot be easily explained by reprocessing of accelerated UHECRs within the source, as proposed for extragalactic protons below the ankle (see e.g. Unger et al., 2015), but must originate from a secondary population of independent sources that exclusively accelerate protons to ultra-high energies, or in which heavier nuclei are efficiently disintegrated before escaping the source region. Motivations for an additional source population come from the expected differences between possible UHECR accelerators, e.g. active galactic nuclei (Rodrigues et al., 2021) or gamma-ray bursts (Waxman, 1995). Such a proton flux does not necessarily need to be produced by astrophysical processes, but could also originate from the decay of heavy dark matter (e.g. Ishiwata et al., 2020; Das et al., 2023). Circumstantial evidence for an additional proton component is provided by an apparent flattening of the increase in observed UHECR mass at \(E_{\rm CR}\gtrsim 30\) EeV, as reported in an analysis of Auger surface detector data (Aab et al., 2017; Todero Peixoto, 2019). This feature could indicate a flux of UHE protons with a different spectral index from the bulk of the UHECRs, either from a secondary source population or from a single nearby source (Plotko et al., 2022), but it could also originate from a natural mass limit of the mixed UHECR flux. Similar two-component models were studied previously, either in the context of the transition region between Galactic and extragalactic cosmic rays below \(10^{18.7}\,\mathrm{eV}\) (Mollerach & Roulet, 2020; Abreu et al., 2021; Luce et al., 2022; Abdul Halim et al., 2022), or, similar to the present paper, at the highest energies (Muzio et al., 2019; Das et al., 2021; Munoz et al., 2023). We discuss our findings in the context of existing results in Sec. 6. UHE protons, should they exist, are of significant interest for "UHECR astronomy" due to their high rigidity and consequently weak deflections in magnetic fields. Additionally, if they are accelerated to energies beyond \(\sim 10^{19.7}\,\mathrm{eV}\), the cross-section for photo-pion production on CMB photons is enhanced due to the \(\Delta\)-resonance. This effect, known as the Greisen-Zatsepin-Kuzmin (GZK) limit (Greisen, 1966; Zatsepin & Kuzmin, 1966), leads to strong attenuation of UHE protons above this energy if they are produced in sources more distant than \(\sim 100\,\mathrm{Mpc}\) (see e.g. Gaisser et al., 2016), and to the abundant production of charged and neutral pions. The subsequent decay of these pions will result in a large flux of high-energy neutrinos and gamma rays. In this paper we quantify the maximum flux of UHE protons compatible with current observations of the UHECR spectrum and composition, considering multimessenger constraints from gamma rays and neutrinos. We investigate two separate scenarios for the maximum proton energy: (i) a high-\(E_{\mathrm{max}}^{\mathrm{p}}\) and (ii) a low-\(E_{\mathrm{max}}^{\mathrm{p}}\) scenario. A brief overview of the model is provided in Sec. 2.
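For orientation, the photo-pion production threshold behind the GZK limit quoted above can be estimated for a head-on collision of a proton with a CMB photon of representative energy \(\varepsilon_{\gamma}\approx 10^{-3}\,\mathrm{eV}\); this is only a back-of-the-envelope estimate, since the commonly quoted \(\sim 10^{19.7}\,\mathrm{eV}\) scale folds in the full CMB spectrum and the \(\Delta\)-resonance cross-section: \[E_{\mathrm{p}}^{\mathrm{th}}\simeq\frac{\left[(m_{p}+m_{\pi})^{2}-m_{p}^{2}\right]c^{4}}{4\,\varepsilon_{\gamma}}\approx\frac{0.28\,\mathrm{GeV}^{2}}{4\times 10^{-12}\,\mathrm{GeV}}\approx 7\times 10^{19}\,\mathrm{eV}.\]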
Injection and propagation of the cosmic rays are simulated with the Monte-Carlo framework CRPropa 3 (Alves Batista et al., 2016, 2022), taking into account the interactions with the cosmic microwave background and the extragalactic background light (Gilmore et al., 2012). The best-fit source parameters are obtained in Sec. 3 by comparing the model predictions with existing observations, and in Sec. 4 we discuss the expected multimessenger signal. A specific, exotic scenario with flux recovery beyond the GZK cutoff is presented in Sec. 5. Finally, we discuss our results in the context of similar existing studies in Sec. 6, and conclude in Sec. 7 that current UHECR data are compatible with a significant contribution by this additional proton component of up to 15% at 20 EeV. The precise value depends critically on the choice of the hadronic interaction model for air-shower modelling and on the maximum proton energy. ## 2 Methods The primary, **mixed-composition, UHECR sources (MIX)** are modelled following the effective parametrisation introduced in Aab et al. (2017c) but with minor modifications detailed in Ehlert et al. (2022). We assume the acceleration to be universal in particle rigidity1, following a "Peters cycle", with a power-law source spectrum and an exponential cutoff at the highest energies. Sources within the MIX population are assumed to be identical, and the total emission rate per unit volume is \[Q_{A}(E)=Q_{A}^{E_{0}}\left(\frac{E}{E_{0}}\right)^{-\gamma}\,\exp\left(-\frac{E}{Z\,E_{\mathrm{max}}^{\mathrm{p}}}\right) \tag{1}\] for the five injected elements \(A\in\{^{1}\mathrm{H},\,^{4}\mathrm{He},\,^{14}\mathrm{N},\,^{28}\mathrm{Si},\,^{56}\mathrm{Fe}\}\). Footnote 1: The rigidity of a particle is defined as \(R=E/Z\) in natural units, where \(Z\) is the nuclear charge. It is a measure of the susceptibility to magnetic deflections. Here \(Q_{A}^{E_{0}}\) is the local emission rate at a normalisation energy \(E_{0}\ll E_{\mathrm{max}}^{\mathrm{p}}\), in erg Mpc\({}^{-3}\) yr\({}^{-1}\), and \(\gamma\) is the spectral index, which is \(\approx 2\) for diffusive shock acceleration. The source emissivity, i.e. the luminosity density, can be derived from the emission rate as \[L_{0}=\sum_{A}\int_{E_{\mathrm{min}}}^{\infty}\,\mathrm{d}E\,\left(E\cdot Q_{A}(E)\right)\,, \tag{2}\] where we have chosen \(E_{\mathrm{min}}=10^{17.8}\,\mathrm{eV}\). The predicted flux at Earth for an observed nuclear mass \(A^{\prime}\) and energy \(E^{\prime}\), and for a redshift evolution of the source population emissivity \(n(z)\), is \[\phi(E^{\prime},A^{\prime})=\sum_{A}\int\mathrm{d}E\int\mathrm{d}z\,\left|\frac{\mathrm{d}t}{\mathrm{d}z}\right|\,n(z)\,Q_{A}(E)\,\frac{\mathrm{d}M_{A^{\prime}}}{\mathrm{d}E^{\prime}\,\mathrm{d}M_{A}}(E^{\prime},E,z)\,. \tag{3}\] The last term translates the injected spectrum at the sources into the observed spectrum after propagation and is obtained via Monte-Carlo simulations with CRPropa. In general, the source redshift evolution \(n(z)\) is composed of the evolution of the per-source luminosities and the density evolution of the source population. In our analysis, we do not attempt to distinguish between these effects and describe the evolution with a (broken) power law \[n(z)=\begin{cases}(1+z)^{m}&\text{for }m\leq 0,\\ (1+z)^{m}&\text{for }m>0\text{ and }z<z_{0},\\ (1+z_{0})^{m}&\text{for }m>0\text{ and }z_{0}<z<z_{\mathrm{max}},\\ 0&\text{otherwise},\end{cases} \tag{4}\] with \(z_{0}=1.5\) and \(z_{\mathrm{max}}=4\) (Van Vliet et al., 2019).
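As a compact, self-contained illustration of Eqs. (1), (2) and (4), the sketch below evaluates the injection spectrum, the emissivity integral and the redshift evolution for placeholder parameter values (these numbers are illustrative and are not the best-fit values of this work):

```python
import numpy as np

# Injected species and nuclear charges; Eq. (1) has a rigidity-dependent cutoff via Z * E_max^p.
CHARGES = {"H": 1, "He": 2, "N": 7, "Si": 14, "Fe": 26}

def Q_A(E, Q0, gamma, Z, Emax_p, E0=1e18):
    """Emission rate per unit volume, Eq. (1): power law with exponential rigidity cutoff."""
    return Q0 * (E / E0) ** (-gamma) * np.exp(-E / (Z * Emax_p))

def emissivity(Q0_by_species, gamma, Emax_p, Emin=10**17.8):
    """Source emissivity L0, Eq. (2): energy-weighted injection rate, summed over species."""
    total = 0.0
    for species, Z in CHARGES.items():
        E = np.logspace(np.log10(Emin), np.log10(50 * Z * Emax_p), 4000)
        f = E * Q_A(E, Q0_by_species[species], gamma, Z, Emax_p)
        total += np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))  # trapezoidal rule
    return total

def n_z(z, m, z0=1.5, zmax=4.0):
    """Redshift evolution of the source emissivity, Eq. (4) (broken power law)."""
    if m <= 0:
        return (1.0 + z) ** m
    if z < z0:
        return (1.0 + z) ** m
    if z < zmax:
        return (1.0 + z0) ** m
    return 0.0

# Placeholder parameters (hypothetical, arbitrary normalisation):
Q0 = {s: 1.0 for s in CHARGES}
print("L0 (arb. units):", emissivity(Q0, gamma=2.0, Emax_p=10**18.5))
print("n(z) for m=3:", [round(n_z(z, m=3.0), 2) for z in (0.0, 1.0, 2.0, 5.0)])
```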
Sources at \(z\gtrsim 1\) have a negligible impact on the observed UHECR flux because of attenuation effects, but they play an important role for the expected multimessenger signal of co-produced neutrinos and low-energy gamma rays. A more conservative estimate of the cosmogenic neutrino flux is obtained if these high-redshift sources, which cannot be constrained by the cosmic-ray fit, are ignored. For the additional population of **UHE pure-proton sources (PP)**, we are particularly interested in the predicted flux of cosmogenic neutrinos at \(E_{\nu}\approx 1\,\mathrm{EeV}\) since this corresponds to the peak sensitivity interval of many existing and planned neutrino experiments. If these neutrinos are produced in the interactions of cosmic rays with photon fields, they typically receive \(\sim 5\%\) of the primary CR energy (Gaisser et al., 2016), which implies that the relevant energy is \(E_{\mathrm{CR}}\approx 20\,\mathrm{EeV}\). We define this value as the reference energy at which we evaluate the contribution of the PP UHE protons to the observed flux of UHECRs. Properties of the pure-proton sources are described by the independent set of parameters \(E_{\mathrm{max}}^{\mathrm{PP}}\), \(\gamma^{\mathrm{PP}}\), \(m^{\mathrm{PP}}\), and \(L_{0}^{\mathrm{PP}}\). The interactions of UHECRs with cosmic background photons lead to the production of secondary photons and neutrinos, with the strength of this "cosmogenic" multimessenger signal depending predominantly on the cosmic-ray composition, injection spectral index and source distance. We compare our model predictions for the UHECR spectrum and composition with publicly available data from Auger (Aab et al., 2020; Yushkov, 2020). Since the composition cannot be observed directly, the mean, \(\langle X_{\mathrm{max}}\rangle\), and standard deviation, \(\sigma(X_{\mathrm{max}})\), of the depth of the air-shower maximum are used as proxy observables, and the conversion is performed with the hadronic interaction models EPOS-LHC (Pierog et al., 2015) and Sibyll2.3c (Fedynitch et al., 2019). The best-fit source parameters are determined in a two-step fitting process. We discretise the parameter space in maximum energy/rigidity, spectral index and redshift evolution for both source classes and sample a large number of possible combinations of these parameters. For each of these possible source configurations, we then use the Levenberg-Marquardt algorithm2 to find the injection fractions \(f_{A}\) of the MIX sources and the emissivities \([L_{0},L_{0}^{\rm PP}]\) of both source populations that minimise the \(\chi^{2}\) differences between our model predictions for the UHECR spectrum and composition and the Auger data points. If a reasonable fit (\(\chi^{2}<250\)) is found for a particular combination of source parameters, then adjacent points are also evaluated in an iterative process. Footnote 2: Implemented in the curve_fit routine from the SciPy.optimize library. Constraints on the source parameters derived from a comparison of the predicted cosmogenic flux of gamma rays and neutrinos with observations and upper limits are taken into account with additional \(\Delta\chi^{2}\)-penalty terms. For observed fluxes, such as the Fermi-LAT IGRB (Ackermann et al., 2015) and parts of the IceCube HESE neutrino flux (Abbasi et al., 2021), we consider a simple one-sided \(\chi^{2}\) penalty that only contributes if the predicted flux exceeds observations. For upper-limit points with a low number of, or zero, events per bin, e.g.
the Auger UHE neutrino (Pedreira, 2021) and UHE gamma-ray limits (Savina, 2021; Abreu et al., 2022), we use the Poisson likelihood \(\chi^{2}\) (Baker and Cousins, 1984), but the penalty is only applied if the predicted number of events in a bin exceeds the observed number. The relevant data sets are: \[\Delta\chi_{\nu}^{2}:\ \text{IceCube HESE flux (Abbasi et al., 2021),} \tag{5}\] \[\text{Auger UHE neutrino limit (Pedreira, 2021),}\] \[\Delta\chi_{\gamma}^{2}:\ \text{Fermi-LAT IGRB flux (Ackermann et al., 2015),}\] \[\text{Auger hybrid UHE gamma-ray limit (Savina, 2021),}\] \[\text{Auger SD UHE gamma-ray limit (Abreu et al., 2022).}\] We exclude possible source configurations where the combined multimessenger penalty exceeds the level of two sigma, i.e. when \(\Delta\chi_{\nu}^{2}+\Delta\chi_{\gamma}^{2}>4\). However, in the plots of the cosmogenic neutrino and gamma-ray fluxes we only include the rejections by the respective messenger. ## 3 Fit with an additional proton component We investigate two different scenarios in terms of the proton maximum energy, assuming EPOS-LHC as the hadronic interaction model. Results for Sibyll2.3c are shown in Appendix A. In both scenarios, we find the redshift evolution of the PP number density to be unconstrained by cosmic-ray observations alone. Since the PP flux is pure protons, interactions during propagation do not affect the observed composition. However, propagation effects soften the distribution and attenuate the original UHE proton flux. Stronger redshift evolutions require harder injection spectra and a higher source emissivity. ### Two-Source-Class Dip Model (2SC-dip) We are particularly interested in scenarios that produce a large flux of UHE neutrinos and gamma rays. This requires proton energies sufficiently above the GZK limit to enable copious photo-pion production on CMB photons, and we therefore choose \(E_{\rm max}^{\rm PP}=10^{23}\,\text{eV}\) for our first scenario. The best-fit properties of both source populations are listed in Tab. 1, 2nd column, and the predicted spectrum and composition at Earth are shown in Fig. 1, left. The preferred maximum rigidity, spectral index and redshift evolution of the mixed-composition source population are compatible with the values obtained for the single-population model within uncertainties, and the additional protons provide a relatively constant contribution of approx. \(5-10\%\) between the ankle and the end of the GZK cutoff (Fig. 2, teal band). We find that the overall shape effectively corresponds to the predictions from the classical "proton-dip" explanation of the UHECR flux (Berezinsky and Grigor'eva, 1988). While this model is inconsistent with current measurements of the UHECR composition and the high-energy neutrino flux (Heinze et al., 2016), our results show that it can still be relevant if the total proton contribution remains subdominant to the primary, mixed-composition, cosmic-ray flux. We refer to the presented source model as the "dip" or 2SC-dip (two-source-class dip) model. In the 2SC-dip model, the proton sources are required to exhibit a soft injection spectrum (see Appendix B), which could be a distinguishing feature of this additional source population in the observed flux, provided that reliable event-by-event mass reconstruction becomes available in the future. Softer spectra than suggested by the best fit are disfavoured since the associated sub-ankle flux would exceed observational limits.
For hard spectra, \(\gamma^{\rm PP}\lesssim 2\), the additional protons only contribute at energies around the GZK cutoff and the possibilities for improving the fit over the entire energy range are consequently limited. The combination of both effects results in a clearly localised preferred spectral index of the proton sources.

### Two-Source-Class Best-Fit Model (2SC-uhecr)

An alternative scenario is presented by proton sources with maximum energies comparable to those of the standard, mixed-composition cosmic-ray sources. For this model, we set \(E_{\rm max}^{\rm PP}=10\,\text{EeV}\). At the best fit (Tab. 1, 3rd column), the improvement over the dip model is \(\Delta\chi^{2}\approx-15\), but very hard proton spectra are required (see Appendix B). The predicted PP proton spectrum at Earth exhibits a peak-like shape reminiscent of the individual, peaked, mass groups originating from the mixed-composition sources (Fig. 1, right). However, due to the choice of \(E_{\rm max}^{\rm PP}\), the peak energy is shifted upward by approximately an order of magnitude compared to the mixed-population proton peak. Compared to the 2SC-dip model, the best-fit observed proton fraction at 20 EeV is significantly larger, up to 15%, but the contribution is limited to a small energy interval and becomes negligible below the ankle (Fig. 2, brown band). While this scenario, the "UHECR best-fit" model (2SC-uhecr), provides a significant improvement in the cosmic-ray fit, it comes at the cost of extremely hard proton injection spectra, and the expected cosmogenic neutrino and UHE gamma-ray signal associated with the protons is reduced due to the sub-GZK maximum proton energies. With the injection spectrum of the additional protons similar to the bulk of the cosmic rays, separation of the two components will be difficult even if event-by-event mass reconstruction were available. However, the predicted existence of two separate proton bumps in the cosmic-ray spectrum is a distinguishing feature of this model.

## 4 Multimessenger signal

In the following, we discuss the predicted multimessenger signal produced through interactions with the CMB and the Extragalactic Background Light during the propagation of the cosmic rays. We focus on the 2SC-dip "proton-dip" model, which predicts a large flux of cosmogenic neutrinos and UHE gamma rays. The multimessenger signal of the 2SC-uhecr model is briefly discussed at the end.

### 2SC-dip

Photons, electrons, and positrons produced with PeV-EeV energies in photohadronic interactions of the UHE protons interact with cosmic photon fields, leading to the development of electromagnetic cascades and reprocessing to lower energies. In the scenario of low-\(E_{\rm max}\) and mixed-composition cosmic-ray sources only, most of the gamma-ray signal is expected at GeV-PeV energies since the CR energies are insufficient for large interaction cross-sections with CMB photons. In this energy range (Fig. 3, top), the predicted gamma-ray flux associated with the PP protons in our model is at a similar level to the flux expected from the mixed cosmic rays. Depending on the exact choice of source parameters, the combined gamma-ray flux of both populations can saturate the upper limit imposed by the re-scaled3 Fermi-LAT flux at \(\sim 700\) GeV; however, the tension is not statistically significant. Most of the gamma-ray flux at \(E_{\gamma}\gtrsim 100\) GeV is produced by the mixed-composition cosmic rays. At lower energies, the cosmogenic gamma rays are safely below the observed diffuse background flux.
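Statements of this kind, a predicted flux "saturating" or staying below a measurement or limit, are quantified through the one-sided \(\Delta\chi^{2}\) penalty terms introduced above. As a rough illustration of how such penalties can be evaluated for binned fluxes and event counts, consider the following sketch; the function and variable names are illustrative and do not correspond to the actual analysis code, and the real treatment of exposures, binning and the Fermi-LAT re-scaling is more involved.

```python
import numpy as np

def one_sided_chi2(model, data, sigma):
    """One-sided Gaussian penalty for measured fluxes (e.g. a diffuse background):
    only bins in which the prediction exceeds the measurement contribute."""
    excess = np.clip(np.asarray(model) - np.asarray(data), 0.0, None)
    return float(np.sum((excess / np.asarray(sigma)) ** 2))

def poisson_chi2(predicted, observed):
    """Poisson-likelihood chi^2 of Baker & Cousins (1984) for bins with few or zero
    events, applied one-sidedly: bins where the prediction is below the observation
    are ignored. Assumes strictly positive predicted counts."""
    mu = np.asarray(predicted, dtype=float)
    n = np.asarray(observed, dtype=float)
    log_term = np.zeros_like(mu)
    mask = n > 0
    log_term[mask] = n[mask] * np.log(n[mask] / mu[mask])
    per_bin = 2.0 * (mu - n + log_term)
    return float(np.sum(np.where(mu > n, per_bin, 0.0)))

# Example: a model overshooting a measured flux in one bin, and predicting
# 3 events where zero were observed in a counting bin.
print(one_sided_chi2(model=[1.2, 0.8], data=[1.0, 1.0], sigma=[0.1, 0.1]))  # -> 4.0
print(poisson_chi2(predicted=[3.0, 0.5], observed=[0, 1]))                  # -> 6.0
```

With this one-sided convention, a model that under-predicts a measured flux or an upper limit incurs no penalty, matching the prescription described above.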
Footnote 3: Following Alves Batista et al. (2019), we re-scale the isotropic gamma-ray background reported by Fermi-LAT (Ackermann et al., 2015) by a conservative factor of \(\times 0.4\) to account for the contribution from unresolved point sources, which was estimated to be \(\approx 68^{+9}_{-8}\,\%\) by Lisanti et al. (2016).

The situation is more promising at ultra-high energies, where the signal from the ordinary, mixed cosmic rays is expected to be very small. By construction, the protons injected at the PP sources have typical energies \(E_{p}>10^{18}\,\)eV and consequently large cross-sections for photo-pion production on the abundant CMB photons. The predicted UHE gamma-ray flux from the protons is therefore orders of magnitude above the flux produced by the mixed cosmic rays (Fig. 3, bottom). It correlates inversely with the PP spectral index: harder injection spectra result in more cosmogenic UHE photons. As indicated previously, hard injection spectra generally require strongly positive redshift evolutions to soften the observed spectrum. Present limits by Auger and TA are not constraining, even in the most optimistic scenario within the \(3\sigma\) uncertainties; however, the difference is no more than a factor of a few, and it is clear that future detectors - such as GRAND200k and AugerPrime - will provide strong constraints on the viable PP spectral index and redshift evolution.

The expected flux of cosmogenic neutrinos (Fig. 4) is not well constrained by the cosmic-ray fit alone and can vary by approx. a factor of 1000 within the 99.7% confidence interval. In the most pessimistic case, when the redshift evolution of the proton sources is strongly negative, the neutrino flux produced by PP protons is subdominant to the neutrinos from the default CR population at all energies \(E_{\nu}\lesssim 1\) EeV and the UHE flux is small.

Figure 1: Predicted spectrum and composition at Earth for the two investigated scenarios, with Epos-LHC as hadronic interaction model. Left: "proton-dip" (2SC-dip). Right: "UHECR" best fit (2SC-uhecr). Best-fit parameter values are listed in Tab. 1. Dashed lines indicate the contributions of the separate mass groups from the mixed-composition sources, with \([A_{\rm min},A_{\rm max}]\). The additional protons from the second population are shown as a solid, orange line. Coloured bands indicate the 68% uncertainties.

Figure 2: Contribution of the PP protons to the observed, differential UHECR flux as a function of energy, within \(1\sigma\) of the best fit to CR spectrum and composition (Fig. 1).

On the other hand, for strong redshift evolutions, the expected neutrino flux saturates the flux observed by IceCube in the few-PeV energy range and exceeds significantly the limits above 10 PeV and at UHE. This includes the source configuration corresponding to the best UHECR spectrum and composition fit. By requiring that the neutrino limits are not violated (\(\Delta\chi^{2}_{\nu}<4\)) we can constrain the properties of the proton sources to

\[\gamma^{\rm PP}\gtrsim 1.6\,,\quad m^{\rm PP}\lesssim 4\,,\quad{\rm and}\quad L_{0}^{\rm PP}\lesssim 10^{44.5}\,\frac{\rm erg}{\rm Mpc^{3}\,yr}. \tag{6}\]

Irrespective of the total level, the predicted neutrino flux exhibits a characteristic double-bump profile, with the first peak at \(E_{\nu}\approx 5\) PeV from photo-pion production of the cosmic-ray protons on the extragalactic background light, and the second peak at \(E_{\nu}\approx 1\) EeV from photo-pion production on the less energetic, but more abundant, CMB photons.
Due to the soft spectrum of the UHE protons, both peaks are present at the same time and the UHE neutrino limits can be used to constrain the contribution of this cosmogenic neutrino flux to the observed IceCube HESE flux at 1.3 PeV to \(f_{\rm HESE}^{\rm PP}\lesssim 20\%\). ### 2SC-uhcer In the "UHECR best fit" model, the maximum proton energy is below the required level for photo-pion production with the bulk of CMB photons and the expected multimessenger signal is low. UHE gamma rays are at least three orders of magnitude below existing limits and at GeV-TeV energies, the contribution is subdominant compared to the cosmogenic photons from the MIX cosmic rays. The total contribution to the Fermi-LAT IGRB is \(<50\%\) even in the most optimistic scenario, although the upper limit in the highest energy bin is approximately saturated. While the neutrino signal of the UHE protons at the best fit is subdominant to the neutrinos from the mixed-composition cosmic rays, the shape of the neutrino spectrum is of particular interest. Unlike for the 2SC-dip model, few protons are present at lower energies and the low-energy peak originating from interactions with EBL photons is therefore absent. Only the peak from photo-pion production on the CMB remains. In this scenario, the observed IceCube neutrino flux at PeV energies and below, and the possible UHE neutrino flux are \begin{table} \begin{tabular}{l|c c c c} \hline Model & 1SC & \multicolumn{2}{c}{2SC-dip} & \multicolumn{2}{c}{2SC-uhcer} \\ & & CR & CR + MM & CR + MM \\ \hline \multicolumn{5}{l}{**Population 1**} \\ \hline \(R_{\rm max}\) [EV] & \(1.25^{+0.23}_{-0.19}\) & \(1.5^{+0.5}_{-0.4}\) & \(1.5^{+0.5}_{-0.4}\) & \(1.5^{+0.5}_{-0.4}\) \\ \(\gamma\) & \(-2.5^{+0.1}_{-0.10}\) & \(-1.20^{+0.22}_{-0.22}\) & \(-1.41^{+0.42}_{-0.22}\) & \(-1.41^{+0.22}_{-0.22}\) \\ \(m\) & \(1.9^{+0.6}_{-4.1}\) & \(-2.2^{+2.2}_{-2.2}\) & \(-1^{+1}_{-3}\) & \(1^{+1}_{-2}\) \\ L\({}_{0}\) [\(\frac{\rm erg}{\rm MeV^{3}}\)] & \(5.6^{+1.0}_{-3.4}\) & \(2.0^{+0.5}_{-0.5}\) & \(2.5^{+0.6}_{-1.0}\) & \(3.77^{+0.66}_{-1.39}\) \\ \hline \(f_{R}^{R}\) [\(\%\)] & \(6.6^{+5.4}_{-0.6}\) & \(\approx 0^{+6.5}_{-0.5}\) & \(\approx 0^{+6.6}_{-0.6}\) & \(\approx 0^{+7.2}_{-0.4}\) \\ \(f_{R}^{R}\) [\(\%\)] & \(48.1^{+7.6}_{-7.5}\) & \(68.5^{+3.9}_{-7.3}\) & \(69.6^{+4.6}_{-4.8}\) & \(70.2^{+4.1}_{-4.4}\) \\ \(f_{R}^{R}\) [\(\%\)] & \(40.1^{+4.9}_{-6.3}\) & \(26.6^{+6.6}_{-3.5}\) & \(28.0^{+4.5}_{-3.3}\) & \(23.6^{+6.30}_{-4.8}\) \\ \(f_{R}^{R}\) [\(\%\)] & \(4.8^{+0.7}_{-1.5}\) & \(4.5^{+0.5}_{-0.5}\) & \(4.8^{+0.3}_{-0.9}\) & \(5.8^{+0.6}_{-1.5}\) \\ \(f_{R}^{R}\) [\(\%\)] & \(0.45^{+0.05}_{-0.17}\) & \(0.38^{+0.10}_{-0.09}\) & \(0.33^{+0.10}_{-0.05}\) & \(0.42^{+0.07}_{-0.08}\) \\ \hline \multicolumn{5}{l}{**Population 2**} \\ \hline \(E_{\rm max}^{\rm PP}\) [EV] & & \(10^{5}\) (fix) & \(10^{5}\) (fix) & \(10\) (fix) \\ \(\gamma^{\rm PP}\) & & \(2.5^{+0.3}_{-0.3}\) & \(2.5^{+0.3}_{-0.3}\) & \(-0.25^{+0.50}_{-0.75}\) \\ \(m^{\rm PP}\) & & \(6^{+0.4}_{-10}\) & \(4^{+1}_{-10}\) & \(-3^{+9.4}_{-3.4}\) \\ \(\frac{\rm L_{0}^{\rm PP}}{\rm Mpc^{3}\,yr}\) & & \(4.5^{+1.6}_{-4.0}\) & \(1.8^{+0.6}_{-1.4}\) & \(0.12^{+1.8}_{-0.06}\) \\ \hline \(f^{\rm PP}\) (20 EeV) [\(\%\)] & & \(7.9^{+0.9}_{-0.9}\) & \(7.0^{+0.8}_{-0.5}\) & \(14.2^{+1.2}_{-0.5}\) \\ \(\chi^{2}\)/dof & 101.0/29 & 73.4/26 & 74.4/26 & 58.0/26 \\ \end{tabular} \end{table} Table 1: Best-fit parameters for the single- and two-population source models with EPOS-LHC used as the hadronic-interaction model describing air-shower 
development. The 1\(\sigma\) uncertainties include the penalty factor for the total best-fit quality proposed in Rosenfeld (1975). The "1SC" scenario is the benchmark model with only a single population of sources injecting mixed-composition cosmic rays. "Population 1" refers to the baseline source class that injects a mixed cosmic-ray flux of protons to iron, and "Population 2" denotes the pure-proton sources. The best fit of UHECR spectrum and composition is given in the "CR" column, and the best fit after including neutrino and gamma-ray limits in the "CR + MM" columns. For the 2SC-uhecr model, the cosmic-ray best fit is compatible with existing multimessenger limits. Confidence intervals that extend to the edges of the sampled parameter range are indicated by an asterisk.

decoupled. It is possible, for strongly positive redshift evolutions of the proton sources, to produce a large neutrino flux at UHE with a negligible contribution to the IceCube HESE flux. Redshift evolutions stronger than \(m^{\rm PP}\approx 4\) can be excluded by the current UHE neutrino limits of IceCube and Auger.

Figure 3: Predicted cosmogenic gamma-ray signal for the "proton-dip" model (2SC-dip), with Epos-LHC as hadronic interaction model, in the GeV-TeV (top) and EeV (bottom) energy range. The photon flux for each source class corresponding to the UHECR best fit (Tab. 1, 2nd and 4th column) is indicated by a solid line. The 1, 2, 3\(\sigma\) contours, under the condition that \(\Delta\chi^{2}_{\nu}<4\), are indicated by brown bands in decreasing intensity for the contribution from the additional proton UHECRs, and by blue bands for the gamma-ray flux from the regular, mixed cosmic rays. These intervals do not include the best-fit penalty factor of Rosenfeld (1975). Observations include the Fermi-LAT (Ackermann et al., 2015) and HAWC (Albert et al., 2022) diffuse gamma-ray background in the GeV-PeV range, the 95% upper limits at UHE of Auger (Savina, 2021; Abreu et al., 2022) and TA (Abbasi et al., 2019), the optimistic 3-year sensitivity of the planned GRAND200k (Alvarez-Muniz et al., 2020), and a combination of the latest Auger SD limit with the projected AugerPrime exposure for 10 years of observations under the assumption of 100% photon selection efficiency and zero background.

## 5 Exotic flux recovery scenario (2SC-rec)

A combination of the 2SC-dip and 2SC-uhecr models is provided by a proton source population with large maximum energy, \(E_{\rm max}^{\rm PP}=10^{5}\) EeV, as in the "proton-dip" model, and hard injection spectrum, \(\gamma^{\rm PP}=1\), similar to the "UHECR" model. The quality of the UHECR fit (Tab. 2) is the worst out of all three models and approaches the baseline single-source-class model. Compared to the dip model, the potential for fit improvement is limited since the protons contribute only at the highest energies, while the position of the observed proton peak is at too-high energies to provide an improvement of similar magnitude as in the 2SC-uhecr model. However, an interesting feature in the form of a "flux recovery" at trans-GZK energies can be observed (Fig. 5). We refer to this third model as the "recovery" or 2SC-rec model. A recovery is only possible if the nearest source(s) is(are) located within the GZK volume at no more than \(\sim 20\) Mpc (e.g. Gaisser et al., 2016), as otherwise the GZK cutoff provides a natural suppression of the observable flux above \(\sim 10^{19.7}\) eV. Such a spectral recovery is not necessarily connected to a large UHE neutrino signal.
In addition to a high \(E_{\rm max}^{\rm PP}\) and hard proton source spectra, the latter also requires strong redshift evolution of the source emissivity, which is not a pre-requisite for a CR flux recovery. However, the observation of neutrinos with energy above \(10^{19}\,\)eV by future extremely-UHE neutrino detectors such as PUEO (Abarr et al., 2021) would provide a strong hint for the existence of a sizeable UHECR flux recovery beyond the GZK cutoff. We noted in Sec. 4 that, for large \(E_{\rm max}^{\rm PP}\), hard proton source spectra are excluded by existing neutrino limits, under the condition that only source configurations within \(3\sigma\) of the best fit to the UHECR spectrum and composition under the 2SC-dip model are considered. If this limitation is lifted, such as for the 2SC-rec model, we can identify scenarios where the predicted neutrino flux is sufficiently below existing limits, i.e. \(\Delta\chi_{\nu}^{2}<4\). This constrains the redshift evolution of the proton sources to \(m^{\rm PP}\lesssim 3\) (\(\lesssim 2\) if gamma-ray limits are included). In contrast to the 2SC-dip model, the combination of hard injection spectrum, high maximum proton energy and uniform source distribution with minimum distance \(z_{\rm min}=10^{-3}\) results in an increased flux of cosmogenic UHE gamma rays. We find that for spectral indices harder than \(\gamma^{\rm PP}\lesssim 1\) all possible realisations of the source model are excluded by the existing Auger UHE photon limits (Savina, 2021; Abreu et al., 2022). We conclude that the joint consideration of neutrino and UHE gamma-ray limits severely constrains the allowed proton injection spectrum and, by extension, the maximum allowed flux recovery from this second source population above the GZK cutoff. This motivates our choice of \(\gamma^{\rm PP}=1\) as benchmark spectral index for the 2SC-rec model. The spectrum and composition corresponding to the maximum recovery allowed by the cosmic-ray fit and multimessenger constraints are shown in Fig. 5. The preferred source parameters stay unchanged except for the PP redshift evolution and luminosity density. Finally, we comment that the projected sensitivity of the proposed _Global Cosmic Ray Observatory_ (GCOS) (Coleman et al., 2023) would place strong constraints on the allowed UHE flux recovery.

Figure 4: Same as Fig. 3 but for the predicted cosmogenic neutrinos in the 2SC-dip (top) and 2SC-uhecr model (bottom). The maximum allowed flux within \(3\sigma\) of the best CR fit but without including the multimessenger penalty is shown as a dashed line of the respective colour. The IceCube HESE flux (Abbasi et al., 2021), upper limits from IceCube (Aartsen et al., 2018) and Auger (Aab et al., 2019; Pedreira, 2021), and predicted sensitivities of planned detectors (Aartsen et al., 2019; Alvarez-Muniz et al., 2020; Allison et al., 2016; Cummings et al., 2021) are shown as a reference.

Figure 5: Same as Fig. 1 but for the "flux recovery" 2SC-rec scenario. Auger 90% upper limits above \(10^{20.4}\) eV were derived assuming an energy-independent exposure of \(60400\) km\({}^{2}\) yr sr (Aab et al., 2020). Expected 90% upper limits for GCOS (40k) after 10 years of operation (\(\epsilon\sim 10^{6}\) km\({}^{2}\) yr sr (Coleman et al., 2023)) are shown in purple.

## 6 Discussion

A similar conclusion in terms of the allowed UHE proton fraction for Epos-LHC versus Sibyll2.3c was reached in a recent paper by Muzio et al. (2023).
The best fit obtained in this study is qualitatively similar to our 2SC-uhecr model with inverted proton injection spectra and low maximum energies. Their reported, best-fit observed proton fraction of the integral cosmic-ray flux above \(30\,\mathrm{EeV}\) for typical astrophysical source evolutions is \(5-10\%\) and \(2-3\%\) for Epos-LHC and Sibyll2.3c respectively. These values are approximately compatible with our preferred integral fractions, \(F(\geq 30\,\mathrm{EeV})=10.2^{+1.4}_{-1.5}\%\) / \(3.1^{+2.6}_{-0.9}\%\). A direct comparison is difficult, however, since the authors assumed a mono-elemental injection of Silicon-like nuclei as the MIX sources and also included in-source photo-hadronic interactions, and the predicted cosmic-ray flux at Earth is not provided.

A solution to the two-population model with source parameters similar to our 2SC-dip best fit was found in Das et al. (2021). However, the reported best-fit proton fraction at \(E_{\mathrm{ref}}=20\,\mathrm{EeV}\) is approx. \(20-25\%\) - a contribution that we found to be in strong tension with the observed UHECR composition, in particular the variance of shower maxima. The authors do not find the best fit of the UHECR spectrum and composition that we identify in our 2SC-uhecr model since they only consider soft spectral indices of the proton sources, \(\gamma^{\mathrm{PP}}\geq 2.2\), and no mention is made of a possible flux recovery beyond the GZK cutoff.

Important information about the potential sources of the UHE pure-proton flux can be gained from the total emissivity \(L_{0}^{\mathrm{PP}}\) (luminosity density) required by the UHECR fit. Although the cosmic-ray emissivity of astrophysical objects is generally not known, other observable properties such as gamma-ray and X-ray emissivities can be used for relative calibration. For a summary of population emissivities see Murase and Fukugita (2019). Assuming equipartition of the available energy budget into gamma rays / X-rays and cosmic rays, we observe that all typically considered source classes (gamma-ray bursts, tidal disruption events, starburst galaxies, active galactic nuclei, BL Lacertae, flat-spectrum radio quasars, and radio galaxies) can satisfy the emissivity required of the pure-proton sources in the 2SC-uhecr and 2SC-dip models, although gamma-ray bursts and tidal disruption events are marginally challenged in the latter scenario. For the extreme 2SC-recovery model, only the entire AGN population and the population of all BL Lacs can easily meet the required emissivity. GRBs and TDEs, in contrast, are excluded unless their cosmic-ray emissivity exceeds the observed gamma-ray emissivity by at least a factor of ten. FSRQs and radio galaxies sit close to the minimum luminosity density required by the cosmic-ray fit.

Given the hard spectrum and high maximum energy, it may be challenging for astrophysical accelerators to produce the UHE proton flux predicted by the 2SC-rec model. An alternative explanation for the spectrum could be provided by the decay of hypothetical super-heavy dark matter (SHDM) with masses up to the Planck mass (Berezinsky et al., 1997; Kuzmin and Rubakov, 1998; Sigl et al., 1999; Bhattacharjee and Sigl, 2000; Ellis et al., 2006; Kalashev et al., 2009; Aloisio et al., 2015; Supanitsky and Medina-Tanco, 2019). These heavy particles can be produced gravitationally during the early stages of the Universe, e.g.
as part of the reheating epoch from a hypothesised, decaying inflaton field, or from coherent oscillations of this field before the inflation phase (Kofman et al., 1994; Felder et al., 1999; Chung et al., 1999). If they never reached thermal equilibrium after production and the lifetime is larger than the age of the Universe then these heavy relics can provide a possible explanation for observed DM densities (Aloisio et al., 2015). Similar to the original proton-dip model (Berezinsky and Grigor'eva, 1988), "top-down" scenarios of decaying SHDM are disfavoured as the single origin of the observed UHECR flux (Aab et al., 2017; Rautenberg, 2021), and it was shown that decaying SHDM cannot explain the detected high-energy IceCube neutrino events if a hadronic decay channel is considered (Kuznetsov, 2017; Cohen et al., 2017; Kachelriess et al., 2018). Still, a subdominant contribution to the observed UHECR flux, and a possible flux recovery due to very hard decay spectra are not fully excluded. Crucially, existing upper limits on the post-GZK cosmic-ray flux provide only weak constraints on the allowed flux recovery, and UHE photon limits prove superior for \(M_{\mathrm{DM}}<10^{14}\,\mathrm{GeV}\)(Supanitsky and Medina-Tanco, 2019). We do not investigate the SHDM scenario further, however, we wish to point out several key differences compared to our assumed source model. If the additional protons are produced in the decay of super-heavy dark matter, a substantial anisotropy in arrival directions and extremely local production of the observed UHECRs should be expected since the signal is predicted to be dominated by dark matter in the Milky Way with a particular clustering around the Galactic centre (Abreu et al., 2023). This is in sharp contrast to our proposed continuous distribution of sources in redshift up to \(z_{\mathrm{max}}=4\) and minimum source distance of \(\sim 4\,\mathrm{Mpc}\). Consequently, in the SHDM scenario, the expected flux of cosmogenic neutrinos and low-energy gamma rays is severely reduced. In addition, we only consider the cosmogenic production of neutrinos and gamma rays while in the SHDM model the multimessenger signal is likely dominated by production during the decay of the dark matter. ## 7 Summary and Conclusions In this work, we have investigated the possible existence, and allowed parameter space, for an additional, proton-dominated component of UHECRs, produced by an independent astrophysical source population. 
We have presented the maximum contribution of such a population to the UHECR flux at Earth, taking into account the fit to the UHECR spectrum and composition-sensitive observables.

\begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multicolumn{1}{c|}{**Population**} & \multicolumn{1}{c|}{MIX} & \multicolumn{2}{c}{\(f_{\mathrm{A}}^{\mathrm{\,R}}\)[\%]} & \multicolumn{2}{c}{Pure-Proton} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{CR} & \multicolumn{1}{c}{CR + MM} \\ \hline \(R_{\mathrm{max}}\) [\(\mathrm{EeV}\)] & \(1.4^{+1.0}_{-0.6}\) & \(10.5^{+7.6}_{-10.2}\) & \(10^{5}\) (fix) & \(10^{5}\) (fix) \\ \(\gamma\) & \(-1.7^{+0.4}_{-0.4}\) & \(5.5^{+7.5}_{-3.6}\) & \(1\) (fix) & \(1\) (fix) \\ \(m\) & \(0^{+1}_{-1}\) & \(29.3^{+3.8}_{-4.8}\) & \(6^{+0.0}_{-2.2}\) & \(2^{+1}_{-4.4}\) \\ L\({}_{0}\) [\(v^{\mathrm{eff}}\frac{\mathrm{erg}}{\mathrm{kpc}^{\gamma}\,\gamma^{2}}\)] & \(3.3^{+0.8}_{-0.6}\) & \(4.3^{+1.0}_{-0.7}\) & \(12.0^{+1.7}_{-8.1}\) & \(1.5^{+0.6}_{-1.1}\) \\ & & \(0.22^{+0.01}_{-0.02}\) & & & \\ \hline \(f^{\mathrm{PP}}\) (20 EeV) [\%] & & & \(2.6^{+0}_{-0}\) & \(1.46^{+0.06}_{-0.20}\) \\ \(\chi^{2}\)/dof & & & \(87.3\)/\(27\) & \(91.9\)/\(27\) \\ \hline \end{tabular} \end{table}

Table 2: Same as Tab. 1 but for the extreme 2SC-recovery model. The best-fit parameters of the mixed-composition sources are given for the CR-only fit, but the preferred values for the CR+MM scenario are compatible within quoted uncertainties.

In addition, we have derived predictions for the spectral shape and redshift evolution of the independent UHE-proton population model as well as the expected secondary neutrino and photon fluxes produced by UHECR interactions and their detectability. This analysis was performed for two distinct choices of the maximum proton energy. For sources with maximum energy far beyond the GZK limit (2SC-dip model), the proton spectrum at Earth reproduces the predictions of the classic "proton-dip" model (Berezinsky and Grigor'eva, 1988), albeit with the proton flux subdominant to the contribution of the principal, mixed-composition cosmic rays. If instead maximum energies below \(10^{19.7}\,\mathrm{eV}\) are assumed (2SC-uhecr model), the cosmic-ray fit is improved by \(\Delta\chi^{2}\approx-15\) but the source spectrum must be hard and the associated multimessenger signature is generally small. In both scenarios, the redshift evolution of the proton sources cannot be constrained by the cosmic-ray fit alone.

We find that the maximum proton contribution to the observed, diffuse UHECR flux depends strongly on the choice of hadronic interaction model for the interpretation of the extensive air showers, and on the maximum proton energy. With Sibyll2.3c a proton fraction of \(\lesssim 1\%\) is expected at 20 EeV in the 2SC-dip model and the improvement over the baseline model is negligible. Under the 2SC-uhecr model, a contribution of \(2-5\%\) is predicted with a \(1.1\sigma\) significance fit improvement. Assuming Epos-LHC instead, for the 2SC-dip model, approximately \(8\%\) of the UHECR flux is expected to be protons, with the contribution approximately constant over the entire energy range above the ankle.
For the 2SC-uhecr model, where \(E_{\mathrm{max}}^{\mathrm{p}}=10\,\mathrm{EeV}\), the contribution to the observed UHECR flux peaks around \(E_{\mathrm{ref}}\approx 20\,\mathrm{EeV}\) at up to \(15\%\), but the relative proton fraction decreases rapidly for energies away from the peak and the source spectra are required to be hard. The improvement of the two-population model over the baseline single-population scenario is \(2.2\sigma\) (2SC-dip) and \(3.7\sigma\) (2SC-uhecr).

We demonstrated that for our fiducial high-\(E_{\mathrm{max}}^{\mathrm{p}}\) model a distinguishing feature of the independent UHE proton component is a soft spectral index (\(2.5\pm 0.3\)), which can be tested by AugerPrime or other facilities with event-by-event mass determination capabilities. In addition, the cosmogenic neutrino and UHE photon fluxes produced by this component are substantial and dominate over those from the mixed-composition population. Current neutrino upper limits from IceCube and Auger already weakly constrain the available parameter space for the proton population beyond the fit to the UHECR data alone.

Finally, as an "exotic" scenario, we have considered proton sources with high maximum energy \(E_{\mathrm{max}}^{\mathrm{p}}\gg 10^{20}\,\mathrm{eV}\) and hard spectral index. We find that existing limits on the neutrino and UHE gamma-ray flux constrain the proton spectral index to \(\gamma^{\mathrm{p}}\gtrsim 1\) and therefore provide an upper limit on the possible cosmic-ray flux beyond the GZK cutoff. However, a significant recovery is still allowed.

## Acknowledgments

We thank Bjorn Eichmann, Michael Kachelriess, Marco Muzio, Pavlo Plotko, and Michael Unger for useful discussions.

## Data Availability

No new observational data was generated as part of this study. The cosmic-ray, gamma-ray and neutrino propagation was simulated with the CRPropa 3 software package (Alves Batista et al., 2016, 2022), which is publicly available from crpropa.desy.de.
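As a rough illustration of the two-step fitting strategy described above (an outer scan over discretised shape parameters, with the normalisations fitted in an inner Levenberg-Marquardt step via curve_fit), the following self-contained toy sketch may be useful. The power-law-with-cutoff "model" fluxes, the pseudo-data and all names are illustrative stand-ins, not the CRPropa-based calculation or analysis code used in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

# Toy energy bins (eV) and pseudo-data with 10% uncertainties (illustrative only).
E = np.logspace(18.7, 20.1, 15)

def toy_flux(E, gamma, E_cut):
    # Stand-in for a propagated spectrum: power law with an exponential cutoff.
    return (E / 1e19) ** (-gamma) * np.exp(-E / E_cut)

J_true = 1.0 * toy_flux(E, 2.7, 5e19) + 0.08 * toy_flux(E, 2.5, 1e21)
sigma = 0.1 * J_true
J_obs = J_true + rng.normal(0.0, sigma)

# Outer step: scan discretised shape parameters of both populations.
best_chi2, best_params = np.inf, None
for gamma_mix in np.arange(2.3, 3.01, 0.1):
    for gamma_pp in np.arange(2.1, 2.91, 0.2):

        def model(E, L_mix, L_pp):
            # Inner step: only the two normalisations (emissivities) are free,
            # fitted with the default Levenberg-Marquardt algorithm of curve_fit.
            return L_mix * toy_flux(E, gamma_mix, 5e19) + L_pp * toy_flux(E, gamma_pp, 1e21)

        try:
            popt, _ = curve_fit(model, E, J_obs, p0=[1.0, 0.1], sigma=sigma, absolute_sigma=True)
        except RuntimeError:
            continue  # no convergence for this grid point
        chi2 = float(np.sum(((model(E, *popt) - J_obs) / sigma) ** 2))
        if chi2 < best_chi2:
            best_chi2, best_params = chi2, (gamma_mix, gamma_pp, *popt)

print(f"best chi2 = {best_chi2:.1f} at (gamma_mix, gamma_pp, L_mix, L_pp) = {best_params}")
```

In the actual analysis the scan is followed by the iterative evaluation of neighbouring grid points, and the \(\chi^{2}\) also includes the composition observables \(\langle X_{\mathrm{max}}\rangle\) and \(\sigma(X_{\mathrm{max}})\) as well as the multimessenger penalty terms.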
2308.01485
A new probabilistic analysis of the yard-sale model
In Chakraborti's yard-sale model of an economy, identical agents engage in trades that result in wealth exchanges, but conserve the combined wealth of all agents and each agent's expected wealth. In this model, wealth condensation, that is, convergence to a state in which one agent owns everything and the others own nothing, occurs almost surely. We give a proof of this fact that is much shorter than existing ones and extends to a modified model in which there is a wealth-acquired advantage, i.e., the wealthier of two trading partners is more likely to benefit from the trade.
Christoph Börgers, Claude Greengard
2023-08-03T00:48:58Z
http://arxiv.org/abs/2308.01485v1
# A new probabilistic analysis of the yard-sale model ###### Abstract In Chakraborti's _yard sale model_ of an economy [6], identical agents engage in trades that result in wealth exchanges, but conserve the combined wealth of all agents and each agent's _expected_ wealth. In this model, _wealth condensation_, that is, convergence to a state in which one agent owns everything and the others own nothing, occurs almost surely. We give a proof of this fact that is much shorter than existing ones and extends to a modified model in which there is a wealth-acquired advantage, i.e., the wealthier of two trading partners is more likely to benefit from the trade. ## 1 Background The yard-sale model, first proposed in [6], is a caricature of a set of agents trading with each other. The agents are identical, and there are \(N\) of them. Time proceeds in discrete steps labeled \(0,1,2,\ldots\), and in the \(n\)-th step the \(i\)-th agent has wealth \(X_{n}^{i}\geq 0\). We will write \[X_{n}=\left[X_{n}^{i}\right]_{1\leq i\leq N}\in\mathbb{R}^{N}.\] Without loss of generality, we assume \[\sum_{i=1}^{N}X_{0}^{i}=1.\] We think of \(X_{0}\) as deterministic. The \(X_{n}\) with \(n\geq 1\) are random, defined inductively as follows. Given \(X_{n-1}\), choose a random pair of integers, \((\mu_{n},\nu_{n})\), uniformly distributed in the set of all pairs \((i,j)\) with \(1\leq i,j\leq N\), \(i\neq j\). Without loss of generality, assume \(X_{n-1}^{\mu_{n}}\leq X_{n-1}^{\nu_{n}}\). So agent \(\nu_{n}\) is, at time \(n-1\), at least as wealthy as agent \(\mu_{n}\). Imagine that agents \(\mu_{n}\) and \(\nu_{n}\) now engage in an economic transaction. As a result of errors (perhaps over- or underpayments occurring because of lack of complete information for instance), a random fraction \(B_{n}\in(0,1)\) of the wealth of agent \(\mu_{n}\) (the poorer of the two) is transferred either from agent \(\mu_{n}\) to agent \(\nu_{n}\), or vice versa. The \(B_{n}\) are assumed to be identically distributed. The direction of the transfer is determined by a fair coin flip. Formally, let \(V_{n}=1\) with probability \(1/2\), and \(V_{n}=-1\) otherwise. Then \[X_{n}^{\mu_{n}}=X_{n-1}^{\mu_{n}}+V_{n}\,B_{n}\,X_{n-1}^{\mu_{n}}\quad\text{ and }\quad X_{n}^{\nu_{n}}=X_{n-1}^{\nu_{n}}-V_{n}\,B_{n}\,X_{n-1}^{\mu_{n}}.\] The pairs \((\mu_{n},\nu_{n})\), the fractions \(B_{n}\), and the signs \(V_{n}\) are assumed to be independent of each other and of the \(X_{k}\), \(B_{k}\), and \(V_{k}\) with \(k\leq n-1\). In reference [6], and in much of the literature on the yard-sale model, \(B_{n}\) is assumed to be a deterministic number \(\beta\in(0,1)\). We use a random \(B_{n}\in(0,1)\) more generally. Notice that wealth is conserved: \[\sum_{i=1}^{N}X_{n}^{i}=1\] for all \(n\). The model is known to have the following surprising property, which we will call the _yard-sale theorem_. **Yard-Sale Theorem**.: (a) _There almost surely exists an \(i\in\{1,\ldots,N\}\) with \(\lim_{n\to\infty}X_{n}^{i}=1\)._ (b) _For all \(i\in\{1,\ldots,N\}\),_ \[P\left(\lim_{n\to\infty}X_{n}^{i}=1\right)=X_{0}^{i}. \tag{1}\] In the limit, one agent owns everything. This sort of maximal inequality is called _wealth condensation_ in the literature. In the yard-sale model, wealth condensation is the inescapable result of random, statistically unbiased interactions. For a version of the model in which there is a _continuum_ of agents, rather than a finite number, a result of this kind was proved by Boghosian _et al_[3]. 
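Before turning to the proofs, it may help to see the dynamics numerically. The following is a minimal simulation sketch of the model just described; it assumes, purely for illustration, that the \(B_{n}\) are uniform on \((0,1)\) (the model only requires them to be identically distributed in \((0,1)\)), and the parameter \(p\) anticipates the wealth-acquired advantage discussed below, with \(p=\frac{1}{2}\) giving the unbiased model above. The function and variable names are ours, not from the literature.

```python
import numpy as np

def yard_sale(N=20, steps=200_000, p=0.5, seed=0):
    """Simulate the yard-sale model and return the trajectory of the largest fortune M_n.

    p is the probability that the *wealthier* trading partner gains; p = 0.5 is the
    original, unbiased model, and p > 0.5 adds a wealth-acquired advantage.
    """
    rng = np.random.default_rng(seed)
    x = np.full(N, 1.0 / N)                  # X_0: equal initial wealth, total wealth = 1
    max_wealth = np.empty(steps)
    for n in range(steps):
        i, j = rng.choice(N, size=2, replace=False)   # trading pair (mu_n, nu_n)
        if x[i] > x[j]:
            i, j = j, i                               # make i the poorer agent
        stake = rng.uniform(0.0, 1.0) * x[i]          # omega_n = B_n * X_{n-1}^{mu_n}
        if rng.random() < p:                          # wealthier agent gains
            x[i] -= stake
            x[j] += stake
        else:                                         # poorer agent gains
            x[i] += stake
            x[j] -= stake
        max_wealth[n] = x.max()
    return max_wealth

M = yard_sale()
print(M[::50_000])   # the largest fortune drifts toward 1: wealth condensation
```

Total wealth is conserved at every step, and the largest fortune \(M_{n}=\max_{i}X_{n}^{i}\) tends to drift toward \(1\), illustrating the wealth condensation established below.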
Chorro [7] pointed out that the theorem as stated above is an immediate consequence of Doob's martingale convergence theorem: For a fixed \(i\), the sequence \(\{X_{n}^{i}\}_{n=0,1,2,\ldots}\) is a bounded martingale, and therefore must converge. It is clear from the definition of the model that the \(X_{n}^{i}\) cannot all converge unless one converges to \(1\) and the others converge to \(0\). Equation (1) follows from the fact that \(E(X_{n}^{i})=E(X_{0}^{i})\) for all \(n\) and \(i\). Some interesting variations can be handled immediately using the same reasoning. For instance, different agents can be assumed to have different degrees of risk tolerance [5]. The amount of wealth transferred during the trade between agents \(\mu_{n}\) and \(\nu_{n}\) might be taken to be \(V_{n}B_{n}\lambda_{\mu_{n}}X_{n-1}^{\mu_{n}}\), where the \(\lambda_{i}\), \(1\leq i\leq N\), are fixed numbers in \((0,1)\); if \(\lambda_{i}\) is smaller, agent \(i\) is more risk averse. Obviously but remarkably, eq. (1) still holds for the modified model; risk aversion does not make an agent less or more likely to end up owning everything. Cardoso _et al._[5] have recently proposed a different argument, based on the Gini index, to derive related results for a broader class of models. Their analysis relies on what they call the _fair rule hypothesis_[5, Equation (8)]. For wealth-conserving models, it is the martingale property. Most, but not all, examples in [5] are wealth-conserving. The result in [5] is that the Gini index is monotonically increasing, and stationary if and only if it is \(1\). An interesting extension is obtained by adding a _wealth-acquired advantage_[4]. The coin flip that determines in which direction wealth flows in each interaction is now biased in favor of the wealthier agent: \(V_{n}=-1\) (a transfer to the wealthier agent) with probability \(p\), and \(V_{n}=1\) otherwise, where \(p\) is no longer required to be \(\frac{1}{2}\), but is allowed to be anywhere in \(\left[\frac{1}{2},1\right)\). One should certainly expect wealth condensation for \(p>\frac{1}{2}\) if there is wealth condensation for \(p=\frac{1}{2}\). However, proofs that rely on the martingale property no longer work; the model is still wealth-conserving, but \(\{X_{n}^{i}\}_{n\geq 0}\) is no longer a martingale, nor a sub- or super-martingale. To the model with a (possible) wealth-acquired advantage, Boghosian _et al._[1, 2] added _wealth taxation_. Each agent is taxed a fraction \(\chi\in(0,1)\) of their fortune in each time step, and the total amount taken in is re-distributed uniformly: \[X_{n}^{i}=(1-\chi)\tilde{X}_{n}^{i}+\frac{\chi}{N}\ \ \ \text{for all }i\in\{1,\ldots,N\},\] where \(\tilde{X}_{n}^{i}\) denotes the wealth of agent \(i\) after the trade at time \(n\) but before taxation. It is clear that wealth taxation, no matter how small \(\chi\in(0,1)\) may be, prevents wealth condensation: We now have \[X_{n}^{i}\geq\frac{\chi}{N}\] for all \(n\geq 1\) and \(i\in\{1,\ldots,N\}\), so no agent's wealth can converge to \(0\), and convergence to a state in which one agent owns everything is impossible. For a version of the model with wealth-acquired advantage and taxation in which there is a continuum of agents, Boghosian _et al_[4] have shown that for each \(p\in\left(\frac{1}{2},1\right)\), there is a threshold value \(\chi_{c}\) depending on \(p\), such that there will be _oligarchy_ for \(\chi<\chi_{c}\), but not for \(\chi>\chi_{c}\). Here _oligarchy_ means that a vanishingly small fraction of the population will own, in the long run, a non-vanishing fraction of total wealth.
In this brief note, we propose a new probabilistic proof of part (a) of the Yard-Sale Theorem. Our proof does not use the martingale property, and therefore applies also when \(p\in\left(\frac{1}{2},1\right)\). The tools used in our analysis are much lighter than those used in previously published proofs of the Yard-Sale Theorem or similar results. Instead of the Gini index, we use \(\|X_{n}\|^{2}\) as the measure of concentration, where \(\|\cdot\|\) denotes the Euclidean norm. We note that the idea of using the Euclidean norm of a probability vector as a measure of concentration is not new; it appears in quantum physics [8], political science [9], ecology [10], and antitrust regulation [11].

## 2 Proof of almost sure wealth condensation

We consider, from here on, the yard-sale model, possibly with a wealth-acquired advantage: \(\frac{1}{2}\leq p<1\), but without taxation. Write \(M_{n}=\max_{i}X_{n}^{i}\). We will prove \(M_{n}\to 1\) almost surely, which is part (a) of the Yard-Sale Theorem. We have no analogue of part (b) for \(p>\frac{1}{2}\).

We begin with a preliminary calculation in which we determine the expected change in the concentration measure \(\|X_{n}\|^{2}\) in a single step. (As before, \(\|\cdot\|\) is the Euclidean norm.) Suppose that \(X_{n-1}^{i},\ 1\leq i\leq N\), are given, \(\mu_{n}\) and \(\nu_{n}\) have been chosen, with \(X_{n-1}^{\mu_{n}}\leq X_{n-1}^{\nu_{n}}\), and \(B_{n}\) has been chosen as well. We write \(\omega_{n}=B_{n}\,X_{n-1}^{\mu_{n}}\); this is the amount of wealth at stake in the trade between agents \(\mu_{n}\) and \(\nu_{n}\). We also write \(p=\frac{1}{2}+\delta\), with \(0\leq\delta\leq\frac{1}{2}\). Then the (conditional) expectation of \(\|X_{n}\|^{2}-\|X_{n-1}\|^{2}\) equals

\[\left(\frac{1}{2}+\delta\right)\left(\left(X_{n-1}^{\mu_{n}}-\omega_{n}\right)^{2}-\left(X_{n-1}^{\mu_{n}}\right)^{2}+\left(X_{n-1}^{\nu_{n}}+\omega_{n}\right)^{2}-\left(X_{n-1}^{\nu_{n}}\right)^{2}\right)+\]
\[\left(\frac{1}{2}-\delta\right)\left(\left(X_{n-1}^{\mu_{n}}+\omega_{n}\right)^{2}-\left(X_{n-1}^{\mu_{n}}\right)^{2}+\left(X_{n-1}^{\nu_{n}}-\omega_{n}\right)^{2}-\left(X_{n-1}^{\nu_{n}}\right)^{2}\right)=\]
\[2\omega_{n}^{2}+4\delta\omega_{n}\left(X_{n-1}^{\nu_{n}}-X_{n-1}^{\mu_{n}}\right)\geq 2\omega_{n}^{2}.\]

This implies

\[E\left(\|X_{n}\|^{2}\right)-E\left(\|X_{n-1}\|^{2}\right)\geq 2E(\omega_{n}^{2}) \tag{2}\]

where \(E\) denotes _unconditional_ expectations. We now proceed to the main argument, which makes use of inequality (2) at the end. To show \(M_{n}\to 1\) almost surely, it is enough to show that \(\omega_{n}\to 0\) almost surely. Therefore we have to show that for any \(\varepsilon>0\), it almost surely happens only finitely many times that \(\omega_{n}\geq\varepsilon\). By the Borel-Cantelli lemma, it is enough to show

\[\sum_{n=1}^{\infty}P\big(\omega_{n}\geq\varepsilon\big)<\infty\]

for any \(\varepsilon>0\). Since \(E\big(\omega_{n}^{2}\big)\geq P\big(\omega_{n}\geq\varepsilon\big)\varepsilon^{2}\), we have

\[\sum_{n=1}^{\infty}P\big(\omega_{n}\geq\varepsilon\big)\leq\frac{1}{\varepsilon^{2}}\sum_{n=1}^{\infty}E\big(\omega_{n}^{2}\big).\]

Using (2),

\[\sum_{n=1}^{\infty}E\big(\omega_{n}^{2}\big)\leq\frac{1}{2}\sum_{n=1}^{\infty}\Big(E\big(\|X_{n}\|^{2}\big)-E\big(\|X_{n-1}\|^{2}\big)\Big)\leq\frac{1}{2}\left(\limsup_{n\to\infty}E\big(\|X_{n}\|^{2}\big)-E\big(\|X_{0}\|^{2}\big)\right)\leq\frac{1}{2}.\]

This completes the proof.
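As a quick numerical sanity check of the single-step computation above, the following sketch estimates the conditional expectation of \(\|X_{n}\|^{2}-\|X_{n-1}\|^{2}\) by Monte Carlo for one arbitrarily chosen configuration and compares it with \(2\omega_{n}^{2}+4\delta\omega_{n}\,(X_{n-1}^{\nu_{n}}-X_{n-1}^{\mu_{n}})\); the chosen numbers and names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-step check of the conditional expectation computed above:
# E[ ||X_n||^2 - ||X_{n-1}||^2 ] = 2*omega^2 + 4*delta*omega*(x_nu - x_mu),
# conditional on the trading pair, the stake omega_n, and X_{n-1}.
x_mu, x_nu = 0.2, 0.8      # wealth of the poorer and the richer trading partner
omega = 0.3 * x_mu         # amount at stake, omega_n = B_n * X_{n-1}^{mu_n} with B_n = 0.3
delta = 0.1                # bias: the richer agent gains with probability 1/2 + delta
trials = 10**6

richer_gains = rng.random(trials) < 0.5 + delta
new_mu = np.where(richer_gains, x_mu - omega, x_mu + omega)
new_nu = np.where(richer_gains, x_nu + omega, x_nu - omega)
increment = new_mu**2 + new_nu**2 - (x_mu**2 + x_nu**2)

predicted = 2 * omega**2 + 4 * delta * omega * (x_nu - x_mu)
print(increment.mean(), predicted)   # both are approximately 0.0216 here
```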
2302.05860
Abelian varieties are not quotients of low-dimension Jacobians
We prove that for any two integers $g\geq 4$ and $g'\leq 2g-1$, there exist abelian varieties over $\overline{\mathbb{Q}}$ which are not quotients of a Jacobian of dimension $g'$. Our method in fact proves that most Abelian varieties satisfy this property, when counting by height relative to a fixed finite map to projective space.
Jacob Tsimerman
2023-02-12T06:20:06Z
http://arxiv.org/abs/2302.05860v1
# Abelian Varieties are not quotients of low-dimension Jacobians ###### Abstract We prove that for any two integers \(g\geq 4\) and \(g^{\prime}\leq 2g-1\), there exist abelian varieties over \(\overline{\mathbb{Q}}\) which are not quotients of a Jacobian of dimension \(g^{\prime}\). Our method in fact proves that most abelian varieties satisfy this property, when counting by height relative to a fixed finite map to projective space. ## 1 Introduction Jacobians are a subset of abelian varieties, and much work has been devoted to figuring out 'how large' that subset is. An elementary computation shows that for \(g\geq 4\) the generic abelian varieties is not a Jacobian. Over \(\mathbb{C}\), this immediately implies that the generic abelian variety is not isogenous to a Jacobian, either. This can be seen in many ways, and essentially reduces to the fact that \(\mathbb{C}\) is an uncountable field, and therefore the union of countably many subvarieties cannot be equal to an ambient variety of higher dimension. Katz and Oort conjectured that the same property holds over \(\overline{\mathbb{Q}}\). Over a countable field this problem becomes much more difficult, and was solved unconditionally in [12, 13] by relying on abelian varieties with complex multiplication, and appealing to the Andre-Oort conjecture. Later, a different proof was given in [11], by using the Pila-Zannier method and isogeny estimates of Masser-Wustholz[15]. A related fact in the positive direction is that every abelian variety \(A\) is a quotient of a Jacobian. Indeed, one may find a smooth curve inside of \(A\) not contained in a coset of a proper abelian subvariety and consider its Jacobian. One may ask for the lowest such number: **Question 1.1**.: _Let \(g\geq 4\), and let \(A\) be a generic abelian variety over an algebraically closed field \(k\). What is the lowest integer \(g^{\prime}\) such that there exists a Jacobian \(J\) of dimension \(g^{\prime}\) and a surjection \(\phi:J\to A\)?_ Question 1.1 is extremely difficult even for \(k=\mathbb{C}\). Comparing dimensions shows that \(g^{\prime}\geq\frac{g^{2}+g}{6}+1\), but this is far from optimal. In [10] it is proven that \(g^{\prime}\geq\frac{g(g-1)}{2}+1\), with equality iff \(g\in\{4,5\}\). For an upper bound there does not appear to be a better method known than taking a projective embedding and intersecting with hyperplanes to produce a curve, which gives an upper bound which looks like a power of \(g!\). There is therefore a huge chasm and to the author's knowledge there is not even a conjecture as to what the truth should be. Relatedly, see work of Voisin[14] and Pirola[10] on giving a lower bound for the _gonality_ of a curve in a generic abelian variety. The goal of this paper is to make progress on question 1.1 for \(k=\overline{\mathbb{Q}}\): **Theorem 1**.: _Let \(g\geq 4\). Then there exist abelian varieties over \(\overline{\mathbb{Q}}\) which are not quotients of a Jacobian of a curve of genus \(\leq 2g-1\)._ In fact, for our proof there is nothing special about the Torelli locus. We prove the following Theorem, where \(\mathcal{A}_{g}\) is the moduli space of Principally Polarized abelian varieties: **Theorem 2**.: _Let \(0\leq h<g\) be integers. 
Suppose that \(V\subset\mathcal{A}_{g+h}\) is a subvariety such that over \(\mathbb{C}\), the generic \(g\)-dimensional abelian variety is not a quotient of an abelian variety parameterized by \(V\)._ _Then there exist \(g\)-dimensional abelian varieties over \(\overline{\mathbb{Q}}\) which are not a quotient of an abelian variety parameterized by \(V\)._ In fact we shall show in Theorem 8 that in a precise sense (analogous to what Masser, Zannier[13] and Zywina [14] do) most abelian varieties are not quotients of abelian varieties parameterized by \(V\). ### Summary of the Proof Our proof follows the Pila-Zannier method, as employed by Masser and Zannier. In fact, in the case of \(h=0\) our proof essentially specializes to theirs1. The main ideas are as follows: Footnote 1: We find it convenient to use slightly different counting functions (with rationals instead of integers more in line with [14]), but the essence of our argument reduces to exactly what they do. 1. The main difficulty - and the main technical contribution of this paper - is to demonstrate 'Large Galois Orbits'. Concretely, suppose that \(A\in\mathcal{A}_{g}(K)\) is an abelian variety defined over a number field \(K\) and \(B\in V(\overline{\mathbb{Q}})\) such that \(A\) is a quotient of \(B\). Then there is a degree \(N\) isogeny \(\phi\) from \(A\times C\) to \(B\) for some \(h\)-dimensional abelian variety \(C\). We wish to show that if there is one such \(B\) then there are many - presuming \(A\) is generic in some way (see Lemma 3.1 for the specifics). In particular, we wish to show that \(B\) has a large field of definition so that we may consider all of the Galois conjugates of \(B\). The difficulty is that one has no a-priori control over either the height or field of definition of \(C\). We get around this by using the fact that \(g>h\), and specifically that there is no embedding \(\mathrm{GSp}_{2g}\) into \(\mathrm{GSp}_{2h}\). Morally, this means that the Galois action on the Tate module of \(C\) cannot 'interact' with the action on the Tate module of \(A\). Concretely, we let \(K\subset A\times C\) be the kernel of \(\phi\). Then we show (after some reductions) that for generic \(A\), we have \(K_{A}:=K\cap A\times\{0_{C}\}\) is large compared to \(N\). This means that the abelian sub-variety in \(B\) which is isogenous to \(A\) has large field of definition, and hence \(B\) must as well. 2. Armed with this we use the Pila-Wilkie Theorem to obtain an unlikely intersection in \(V\). We then use powerful transcendence theorems (Ax-Schanuel and Geometric Zilber-Pink, see SS2.5) to reduce to the case where \(V\) is contained in a weakly special subvariety. 3. The last difficult piece is handling the case where \(V\) is contained in a weakly special subvariety. We handle this in a somewhat unsatisfying (but very clean) ad-hoc method. Specifically, we show that if a weakly special subvariety contains a non-simple point with a Hodge-generic factor, then the generic point over it is non-simple2. This allows us to carry out an induction argument to finish. In actuality we reverse steps 2 and 3, but the above outline is the heart of the proof. We remark on the limitation of \(h<g\) for our method: The reason that this is a problem is the step where we establish large Galois orbits. Specifically, the issue is a possible counterexample to Frey-Mazur: Let \(A\) be a \(g\)-dimensional abelian variety and suppose that \(C\) is another one such that \(A[p]\cong C[p]\) as Galois modules. 
Then we may quotient out \(A\times C\) by the graph of this isomorphism to obtain a \(2g\)-dimensional abelian variety defined over \(\mathbb{Q}\). We are unable to rule this out from happening for arbitrarily large primes, and hence we cannot get a handle on \(g=h\). It seems to be a very difficult problem to rule such a thing out, even for a single \(A\)! If one were able to circumvent this issue and establish large Galois orbits, then the rest of the proof would go through unchanged.

### Outline of the Paper

In §2 we recall background about abelian varieties, the corresponding Galois representations, functional transcendence, and the Pila-Wilkie method. In §3 we prove our 'large Galois orbits' result. In §4 we prove a result about weakly special subvarieties which enables us to reduce to the case of \(V\) not being contained in a proper weakly special. Finally, in §5 we combine everything to prove the main theorem.

### Acknowledgements

It is a pleasure to thank Jonathan Pila and Ananth Shankar for enlightening discussions on this problem.

## 2 Background

### Polynomial boundedness

We write \(A\prec_{C}B\) or \(B\succ_{C}A\) for non-negative functions \(A,B\) depending on \(C\) and possibly other variables, if there are constants \(c_{1},c_{2},c_{3}>0\) which depend only on \(C\), such that \(A<c_{1}B^{c_{2}}+c_{3}\). If \(A\prec_{g}B\) and \(B\prec_{g}A\) we write \(B\sim_{g}A\). If \(A\prec_{g}B\) we say that \(A\) is _polynomially bounded by \(B\)_.

### Abelian Varieties

We fix some notation for abelian varieties:

* A _quasi-morphism_ between abelian varieties \(A,B\) is an element of \(\operatorname{Hom}(A,B)\otimes\mathbb{Q}\).
* A _quasi-isogeny_ is a quasi-morphism \(\phi\) such that for any integer \(n\) for which \(n\phi\) is a morphism, \(n\phi\) has finite kernel.
* For a complex abelian variety \(A\), we denote \(L_{A}:=H_{1}(A,\mathbb{Z}),V_{A}:=H_{1}(A,\mathbb{Q}),T_{A}:=V_{A}/L_{A}\). Note that \(T_{A}\cong A_{\operatorname{tor}}(\mathbb{C})\).
* We denote the Tate module by \(T_{A,\ell}:=\varprojlim_{n}A[\ell^{n}]\).

### Principal Polarisations and Isogenies

Recall that a polarization of a complex abelian variety \(A\) is an isogeny \(\phi:A\to A^{\vee}\) which is symmetric, and for which the pullback of the Poincare bundle along the associated graph is ample. Given a polarization, we get an associated symplectic pairing \(Q_{A,\phi}\) known as the _Weil pairing_ on \(L_{A}\), and also \(Q_{A,\phi,n}\) on \(A[n]\) for each positive integer \(n\). We say that \(\phi\) is principal if the degree of \(\phi\) is \(1\). We say that two polarizations \(\phi,\psi\) are _equivalent_ if there are positive integers \(m,n\) such that \(m\phi=n\psi\). Given another abelian variety \(B\) and an isogeny \(f:B\to A\) we define a polarization on \(B\) by setting \(f^{*}\phi:=f^{\vee}\circ\phi\circ f\). Given two polarized abelian varieties \((A,\phi),(B,\psi)\) we say that an isogeny \(f:A\to B\) is a _polarized isogeny_ from \((A,\phi)\) to \((B,\psi)\) if \(f^{*}\psi\) is equivalent to \(\phi\). If \(\phi,\phi^{\prime}\) are two polarizations on \(A\) then we obtain a quasi-isogeny \(f:=\phi^{-1}\circ\phi^{\prime}:A\to A\) which is self-adjoint with respect to the pairing \(Q_{A,\phi}\). In particular,

\[Q_{A,\phi}(v_{1},fv_{2})=Q_{A,\phi}(fv_{1},v_{2}).\]

### Shimura Varieties

See [14] for further background on this section.
Let \(G\) be a reductive group over \(\mathbb{Q}\), \(\mathbb{S}=\operatorname{Res}_{\mathbb{C}/\mathbb{R}}\mathbb{G}_{m}\) the Deligne torus, \(X\) a conjugacy class of homomorphisms \(f:\mathbb{S}\to G_{\mathbb{R}}\) satisfying the Shimura axioms, and \(K=\prod_{p}K_{p}\subset G(\mathbb{A}_{f})\) a neat compact subgroup. Let \(E(G,X)\) be the reflex field, and \(E\) be a field over which \(G\) splits. Associated to this data we get an algebraic variety \(S_{K}(G,X)\) defined over \(E(G,X)\), called a "Shimura variety", whose complex points can be identified with \(G(\mathbb{Q})\backslash\left(X\times G(\mathbb{A}_{f})\right)/K\). We shall primarily be concerned with \(\mathcal{A}_{g}\), which is the Shimura variety corresponding to \(G=\operatorname{GSp}_{2g}\). This naturally gives \(\mathcal{A}_{g}\) the structure of the (coarse) moduli space of principally polarized abelian varieties of dimension \(g\). In this case \(E(G,X)=\mathbb{Q}\), so we consider \(\mathcal{A}_{g}\) as a \(\mathbb{Q}\)-variety.

#### 2.4.1 Weakly Special Varieties

We follow [13] in this section: Suppose \(S_{K}(G,X)\) is a Shimura variety for which we have:

* A Shimura subvariety \(S_{K_{H}}(H,X_{H})\)
* Shimura varieties \((H_{i},X_{i},K_{i})\) for \(i=1,2\) and a splitting \(H^{\text{ad}}\cong H_{1}\times H_{2}\) inducing a splitting \(X_{H}\cong X_{H_{1}}\times X_{H_{2}}\).

Then we say that the image of \(X_{1}^{+}\times\{y\}\) is a _weakly special_ subvariety of \(S_{K}(G,X)\) for any connected component \(X_{1}^{+}\) of \(X_{1}\) and any point \(y\in X_{2}\). We say that a closed analytic subvariety \(S\subset X_{G}\) is weakly special if it is a connected component of the preimage of a weakly-special variety.

### Functional Transcendence Theorems

We let \(\mathcal{H}_{g}:=\{Z\in M_{g}(\mathbb{C}):Z=Z^{t},\operatorname{im}Z>0\}\) denote the Siegel upper-half space, so that there is a natural uniformization map \(\pi_{g}:\mathcal{H}_{g}\to\mathcal{A}_{g}(\mathbb{C})\). Then we have a semi-algebraic structure on \(\mathcal{H}_{g}\) induced from the algebraic structure of \(M_{g}(\mathbb{C})\). It is easy to show that weakly special varieties \(V\) are bi-algebraic for the map \(\pi_{g}\), in the sense that all of the irreducible components of \(\pi_{g}^{-1}(V)\) are semi-algebraic. The Ax-Schanuel Theorem [12] shows that this is the only algebraic information that is preserved by \(\pi_{g}\):

**Theorem 3**.: _Let \(V\subset\mathcal{H}_{g}\times\mathcal{A}_{g}\) be an irreducible algebraic variety, and let \(\Gamma\subset\mathcal{H}_{g}\times\mathcal{A}_{g}\) be the graph of the map \(\pi_{g}\). Let \(U\subset V\cap\Gamma\) be an irreducible component and assume that \(\dim U>\dim V+\dim\Gamma-\dim(\mathcal{H}_{g}\times\mathcal{A}_{g})\). Then \(\pi_{g}(U)\) is contained in a proper weakly-special subvariety of \(\mathcal{A}_{g}\)._

We shall also require another theorem, which takes into account unlikely intersections with different weakly-special varieties at once. Namely, let \(V\subset\mathcal{A}_{g}\) be a variety, and assume it is not contained in any proper weakly-special subvariety. We say that an irreducible subvariety \(X\subset V\) is _weakly-atypical_ if there is a weakly special subvariety \(W\subset\mathcal{A}_{g}\) containing \(X\) such that

\[\dim X>\dim W+\dim V-\dim\mathcal{A}_{g}.\]

We then have the following:

**Theorem 4**.: _Let \(V\subset\mathcal{A}_{g}\) be a variety which is not contained in any proper weakly-special subvariety. Then there is a proper subvariety \(Z\subset V\) which contains all positive-dimensional weakly-atypical subvarieties of \(V\), such that for any irreducible component \(Z^{o}\subset Z\) we have_

1. \(Z^{o}\) _is weakly atypical, or_
2. \(Z^{o}\) _is contained in a proper special subvariety of_ \(\mathcal{A}_{g}\)_._

Proof.: Note that \(V\) itself is not weakly atypical, by our assumptions. The result now follows directly from [1, Thm 6.1] by noting that the adjoint group of \(\mathrm{GSp}_{2g}\) is simple, and therefore for case (ii) in Theorem 6.1 of that paper to occur \(Z^{o}\) must be contained in a proper special subvariety.

The above is named the Geometric Zilber-Pink conjecture for the weakly special atypical locus by [1], and is proven there in the more general context of Hodge variations.

### Pila-Wilkie Theorem

Recall that the height of a rational point \(\frac{a}{b}\) with \(\gcd(a,b)=1\) is \(H(\frac{a}{b}):=\max(|a|,|b|)\). For a point \(q=(q_{1},\ldots,q_{n})\in\mathbb{Q}^{n}\) we define \(H(q):=\max_{i}H(q_{i})\). Finally, for any subset \(X\subset\mathbb{R}^{n}\) we define

\[N(X,T):=\#\{q\in X\cap\mathbb{Q}^{n}\mid H(q)\leq T\}.\]

Given a set \(X\subset\mathbb{R}^{n}\), we set \(X^{\mathrm{alg}}\) to be the union of all connected, positive dimensional, semi-algebraic subsets of \(X\). Then we have the following Theorem of Pila-Wilkie:

**Theorem 5** (Pila-Wilkie [12, Thm 1.8]).: _For a set \(X\subset\mathbb{R}^{n}\) definable in an o-minimal structure, and any \(\epsilon>0\), we have_

\[N(X-X^{\mathrm{alg}},T)=T^{o(1)}.\]

In fact, we shall need the slightly stronger version, valid in families.

**Theorem 6** (Pila-Wilkie [11, Thm 3.6]).: _For a set \(X\times Y\subset\mathbb{R}^{n}\) definable in an o-minimal structure, we have that_

\[N(X_{y}-X_{y}^{\mathrm{alg}},T)=T^{o(1)}\]

_uniformly in \(y\in Y\)._

We shall only use the o-minimal structure \(\mathbb{R}_{\mathrm{an},\exp}\), in which all the sets we work with are defined. For background on o-minimality see [13].

### Height Estimates

We use [13] for background. We work in the Siegel upper-half space \(\mathbb{H}_{g}\) of symmetric \(g\times g\) complex matrices with positive definite imaginary component. For a matrix \(Z=X+iY\in\mathbb{H}_{g}\) we define its height to be

\[H(Z):=\max\big(|z_{i,j}|,\frac{1}{|Y|}\big).\]

We set \(\mathcal{F}_{g}\subset\mathbb{H}_{g}\) to be the standard fundamental domain [10]. The following is [13, Lemma 3.2].

**Lemma 2.1**.: _Let \(Z\in\mathbb{H}_{g}\) and \(\gamma\in\mathrm{GSp}_{2g}(\mathbb{Z})\) such that \(\gamma Z\in\mathcal{F}_{g}\). Then the co-ordinates of \(\gamma\) and the height of \(\gamma Z\) are \(\prec_{g}H(Z)\)._

As a result, we have the following simple consequence:

**Corollary 2.2**.: _Let \(Z,W\in\mathcal{F}_{g}\) be such that there is a degree \(N\) polarized-isogeny \(\phi:A_{Z}\to A_{W}\). Then \(H(W)\prec_{g}H(Z)+N\). Moreover, there is an element \(M\in\mathrm{GSp}_{2g}(\mathbb{Q})\) such that \(MZ=W\) and \(H(M)\prec_{g}N\)._

Proof.: The existence of \(\phi\) means there is an element \(M\in\mathrm{GSp}_{2g}(\mathbb{Q})\cap M_{2g}(\mathbb{Z})\) with determinant \(N\) such that \(MZ=W\). It is well-known that these break up into finitely many right \(\mathrm{GSp}_{2g}(\mathbb{Z})\) cosets, and we may write \(M=\gamma\gamma_{0}\) for \(\gamma\in\mathrm{GSp}_{2g}(\mathbb{Z})\) and \(\gamma_{0}\) having entries bounded by \(N^{O_{g}(1)}\).
The bound on \(\gamma_{0}\) follows for \(N\) a prime power by the explicit coset representatives computed in [12], and in general by multiplying and using that \((2g)^{\omega(N)}=N^{o_{g}(1)}\). This implies that \(H(\gamma_{0}Z)\prec(H(Z)+N)\), and so the bound on \(H(W)\) follows by Lemma 2.1. Finally it remains to prove the bound on \(H(M)\). This follows directly from [13, Thm 1.1] for the group \(G=\mathrm{GSp}_{2g}\) with its canonical representation. ### Galois Representations Let \(A\) be an abelian variety over a number field \(K\). We get an induced Galois representation \(\rho_{A,\ell}:G_{K}\to\mathrm{GL}(T_{A,\ell})\) for each prime \(\ell\), and therefore \(\rho_{A}:G_{K}\to\mathrm{GL}(T_{A})\). This must preserve every polarization and endomorphism defined over \(K\). For each \(\ell\), we obtain an \(\ell-adic\) Mumford-Tate Group \(MT_{A,\ell}\) by taking the Zariski closure of the image of \(\rho_{A,\ell}\). Note that the \(\ell\)-adic Mumford-Tate group depends on the number field but its identity component (as an algebraic group) does not. Given a polarized abelian variety \((A,\phi)\), we say that \(A\) is _Galois Generic_ if the image of \(\rho_{A}\) is open in \(\mathrm{GSp}(T_{A},\phi)\). We remark that this is independent of \(\phi\) since this property implies \(\mathrm{End}(A)=\mathbb{Z}\) and thus that \(A\) has a single polarization up to equivalency. We have the following result (see [1, Remark17.3.1]) **Proposition 2.3**.: _The identity component of \(MT_{A,\ell}\) is contained in the extension of scalars of the ordinary Mumford-Tate group \(MT_{A}\) from \(\mathbb{Q}\) to \(\mathbb{Q}_{\ell}\)._ ### Counting by Height The proof of our Theorem requires a notion of'most' abelian varieties, so we need a way to count a subset of points \(\mathcal{A}_{g}\) which are infinite, easy to get at, and have generic behaviour. To do this we borrow the mechanism used by Masser and Zannier[13]. We fix an open set \(U\subset\mathcal{A}_{g}\) and a quasi-finite dominant map \(\phi:U\to\mathbb{P}^{\dim\mathcal{A}_{g}}\) defined over \(\mathbb{Q}\). We may then count \(\phi\)-rational points by defining \[S_{\phi}(T):=\{t\in\mathcal{A}_{g}(\overline{\mathbb{Q}})\mid\phi(t)\in U( \mathbb{Q}),H(\phi(t))\leq T\}\] for each \(T>0\), where \(H\) is the usual height function on projective space. Note that the abelian varieties in \(S_{\phi}(T)\) are defined over number fields of bounded degree. We call such a \(\phi\) a _counting function_ on \(\mathcal{A}_{g}\), and we say that a property holds \(\phi\)_-generically_ on \(\mathcal{A}_{g}\) if the proportion of points in \(S_{\phi}(T)\) that it holds for tends to \(1\) as \(T\to\infty\). ## 3 Galois Orbits Let \(A\) be an abelian variety of dimension \(g\) defined over a number field \(K\). Let \(\rho_{A}:G_{K}\to\operatorname{GL}_{2g}(\hat{\mathbb{Z}})\) be the (isomorphism class of the ) Galois representation associated to an abelian variety. Of course, if it is principally polarized then the image is contained in \(\operatorname{GSp}_{2g}(\hat{\mathbb{Z}})\). If the image of \(\rho_{A}\) has finite index in \(\operatorname{GSp}_{2g}(\hat{\mathbb{Z}})\) then we call its index the _index of \(A\)_. Note that this implies that \(A\) is simple. We recall the following theorem: **Proposition 3.1**.: [10, Theorem 1.1]__ _Fix a counting function \(\phi\) as in 2.9. 
Then \(\phi\)-generically, for an abelian variety \(A\in\mathcal{A}_{g}\) we have that the index \([\operatorname{GSp}_{2g}(\hat{\mathbb{Z}}):\operatorname{im}\rho_{A}]\) is finite of size \(O_{g}(1)\). It follows in particular that \(A\) is Hodge-generic._ Proof.: This follows from Theorem 1.1 of [10] in exactly the same way as Theorem 1.2 of [10] does, by applying Theorem 1.1 to the restriction of scalars after passing to a Galois cover. The purpose of this sections is to prove a large Galois orbits result: **Theorem 7**.: _Let \(A\) be a ppav of dimension \(g\) over number field \(K\) of degree \(O_{g}(1)\) over \(\mathbb{Q}\) such that \([\operatorname{GSp}_{2g}(\hat{\mathbb{Z}}):\operatorname{im}\rho_{A}]=O_{g}(1)\). Fix \(0\leq h<g\). Let \(B\) be a \(\overline{\mathbb{Q}}\) ppav of dimension \(g+h\) such that the lowest degree isogeny to \(B\) from an abelian variety of the form \(A\times C\), for a ppav \(C\) of dimension \(h\), is \(N\)._ _Then the degree of the field of definition of \(B\) is \(\succ_{g}N\). Moreover, \(B\) has a **polarized isogeny** from \(A\times C^{\prime}\) for a (possibly different) ppav \(C^{\prime}\) of degree \(\prec_{g}N\)._ Proof.: Note that since \(g>h\) and \(A\) is simple that \(\operatorname{Hom}(A,C)=\operatorname{Hom}(C,A)=0\), and thus \(\operatorname{End}(A\times C)\cong\mathbb{Z}\oplus\operatorname{End}(C)\). We thus record elements of \(\operatorname{End}(A\times C)\) as pairs \((m,r)\) for \(r\in\operatorname{End}(C)\). We let \(L_{B}\subset V_{A}\times V_{C}\) be the Lattice corresponding to \(H_{1}(B,\mathbb{Z})\) under the isogeny to \(B\). We identify \(V_{B}\) with \(V_{A}\times V_{C}\). Note that \([L_{B}:L_{A}\times L_{C}]=N\). Now there is a natural form symplectic form \(Q\) on \(V_{A}\times V_{C}\) corresponding to the principal polarizations \(\phi_{A},\phi_{C}\) on \(A,C\), and this gives us an isomorphism \(B^{\vee}(\mathbb{C})\cong V_{A,C}/L_{B}^{*}\) where \(L_{B}^{*}\) denotes the dual of \(L_{B}\) under \(Q\). Corresponding to the principal polarization \(\phi_{B}\) of \(B\), there is therefore an endomorphism \(f=(m,r)\in End(A\times C)\) such that \(fL_{B}=L_{B}^{*}\), and by 2.3 we have that \(f\) respects \(Q\). **Step 1: Proving \(m\succ_{g}N\)** Note first that \(m^{g}(\deg r)^{\frac{1}{2}}=N.\) We have the lattices \[\pi_{A}(L_{B})\supset L_{B}\cap V_{A}\supset L_{A},\pi_{C}(L_{B})\supset L_{B} \cap V_{C}\supset L_{C}\] and by duality of \(L_{B}\) and \(fL_{B}\), we have the relations \[m\pi_{A}(L_{B})=(V_{A}\cap L_{B})^{*},r(V_{C}\cap L_{B})=\pi_{C}(L_{B})^{*},r \pi_{C}(L_{B})=(V_{C}\cap L_{B})^{*}\] Note that \(\pi_{C}(L_{B})/(V_{C}\cap L_{B})\cong\pi_{A}(L_{B})/(V_{A}\cap L_{B})\) and thus is of size \(d\leq m^{2\dim A}\). Dualizing we get \[d=[(V_{C}\cap L_{B})^{*}:\pi_{C}(L_{B})^{*}]=[(V_{C}\cap L_{B})^{*}:r(V_{C} \cap L_{B})]\] Let \(C^{\prime}:=V_{C}/(V_{C}\cap L_{B})\). Then \(C^{\prime\prime}\cong V_{C}/\pi_{C}(V_{C}\cap L_{B})^{*}\) and \(r:C^{\prime}\to C^{\prime\prime}\) is a polarization on \(C^{\prime}\) of degree \(d\). As a result we may find an polarized isogeny \(C^{\prime\prime}\to C^{\prime}\) of degree \(\leq d\) where \(C^{\prime\prime}\) is principally polarized. Then the natural isogeny \(A\times C^{\prime\prime}\to B\) is of degree at most \(dm^{2g}\leq m^{4g}\). By virtue of the minimality of \(N\), we conclude that \(\deg r\leq m^{6g}\) and thus that \(m\geq N^{\frac{1}{4g}}\). 
**Step 2: Getting a polarized isogeny** Replacing \(C\) by the \(C^{\prime}\) above (at the cost of replacing \(N\) by \(N^{\prime}\prec_{g}N\)) we may assume that \(r\) is multiplication by some positive integer \(m^{\prime}\). We are now in the setup where \(m\prec_{g}N\) and \(r\) is multiplication by a positive integer \(m^{\prime}\). We now observe that we may replace \((C,\phi_{C})\) by \((C^{\prime},\phi_{C}\cdot\frac{1}{k})\) by changing \(L_{C}\) to \(L_{C}^{\prime}\) where \(L_{C}^{\prime}\) mod \(k\) is maximally isotropic for the Weil pairing. This has the effect of changing \(m^{\prime}\to m^{\prime}k\). We may also replace \(L_{C}\) by \(\frac{1}{a}L_{C}\) and \(\phi_{C}\) by \(a^{2}\phi_{C}\), which sends \(m^{\prime}\to m^{\prime}/a^{2}\). Thus by changing \(C\) and the polarization in this way we may reduce to \(m^{\prime}=m\) by first scaling by \(k=mm^{\prime}\) and then dividing by \(a=m^{\prime}\). This gives a polarized isogeny \(A\times C\to B\) of degree \(\prec_{g}N\). **Step 3: Large Galois Orbit** We finally prove that \(B\) is defined over a field of large degree. To prove this, we note that \(B\) contains a unique abelian subvariety \(D\) which is isogenous to \(A\). It suffices to show therefore that \(D\) has a large field of definition. We now consider again the minimal isogeny \(f=(m,r):A\times C\to B\), and let \(G:=L_{B}/(L_{A}\times L_{C})\). Note that there is a sequence \[f^{-1}(L_{A}\times L_{C})\supset L_{B}\supset L_{A}\times L_{C}\] whose successive quotients are \(G\) and \(G^{\vee}\), and note that the latter is isomorphic to \(G\) as an abelian group. Moreover, the total quotient is \(A[m]\times C[r]\). It follows that \(\#G[m]^{2}\geq\#A[m]\cdot\#C[(r,m)]\). But now \(G^{\vee}[m]\subset A[m]\times C[(r,m)]\) and therefore \[\#(G^{\vee}\cap A[m])\geq\big(\frac{\#A[m]}{\#C[(r,m)]}\big)^{\frac{1}{2}}\geq m\] where the last inequality follows from \(g\geq h+1\). It follows that \(G^{\vee}\cap A\) contains an element of order \(e\geq m^{\frac{1}{2g}}\succ_{g}N\). Now, by minimality of \(N\) we see that \(G^{\vee}\cap A[m]\) does not contain any \(A[\ell]\) for a prime \(\ell\). Thus we see that \(\#(G^{\vee}\cap A[e])\leq e^{2g-1}\). On the other hand, the translates of \(G^{\vee}\cap A[e]\) under \(\operatorname{GSp}_{2g}(\mathbb{Z})\) cover all of \(A[e]\), which is of size \(e^{2g}\). It follows that there are at least \(e\) such translates under \(\operatorname{GSp}_{2g}(\mathbb{Z})\). It follows by the assumption on \(\rho_{A}\) that \(G^{\vee}\cap A[e]\) has \(\succ_{g}N\) conjugates under the action of Galois. The claim now follows from the isomorphism \(D\cong A/(G^{\vee}\cap A)\) together with the simplicity of \(A\). ## 4 Geometric Arguments: Weakly Special Subvarieties We shall require the following: **Proposition 4.1**.: _Let \(B\) be a ppav of dimension \(g+h\) which has a quotient \(A\) which is Galois generic of dimension \(g\). Let \(S\) be a proper special subvariety of \(\mathcal{A}_{g+h}\) which contains \([B]\). Then the universal abelian variety over \(S\) is not geometrically simple._ Proof.: Let \(G\subset\operatorname{GSp}_{2(g+h),\mathbb{Q}}\) be the subgroup corresponding to \(S\). Now the \(\ell\)-adic derived Mumford-Tate group \(MT_{\ell}^{\operatorname{der}}(B)\) surjects onto \(MT_{\ell}^{\operatorname{der}}(A)\) and thus onto \(\operatorname{Sp}_{2g,\mathbb{Q}_{\ell}}\). 
Since \(\operatorname{Sp}_{2g}\) is a simple algebraic group, it follows by Goursat's Lemma that \(MT_{\ell}(B)\) contains \(\operatorname{Sp}_{2g}\) and hence by Proposition 2.3 that \(MT(B)\) does as well, and hence so does \(G\). Since \(S\) is proper, \(G\) cannot contain \(\operatorname{Sp}_{2(g+h)}\), so by Lemma 4.2 it follows that \(G\) preserves a non-trivial \(\mathbb{Q}\)-symplectic splitting, and therefore the universal abelian variety over \(S\) is not geometrically simple. The following Lemma is almost certainly not original but we could not find it in the literature: **Lemma 4.2**.: _Let \(V,W\) be symplectic spaces over \(\mathbb{C}\) and let \(G\subset\operatorname{GSp}(V\oplus W)\) be a connected reductive group which contains \(\operatorname{Sp}(V)\). Then either \(G\) contains \(\operatorname{Sp}(V\oplus W)\) or else there is a proper symplectic subspace \(W^{\prime}\subset W\) such that \(G\) preserves \(V\oplus W^{\prime}\). Moreover, if \(V,W,G\) are defined over a field \(F\subset\mathbb{C}\) then \(W^{\prime}\) can be taken to be as well._ Proof.: We first claim that it is enough to show that \(G\) preserves some proper subspace containing \(V\). To see this, suppose \(G\) preserves \(K\oplus V\) for \(K\subsetneq W\). Now let \(K^{\prime}\subset K\) be the subspace which pairs to \(0\) with all of \(K\). Then as \(G\) is symplectic, it preserves \(K^{\prime}\), and since \(G\) is reductive there is a complement \(W^{\prime}\subset K\) such that \(K^{\prime}\oplus W^{\prime}=K\) and \(G\) preserves \(W^{\prime}\oplus V\). As \(W^{\prime}\) is a complement to \(K^{\prime}\) it must be symplectic, and the claim is proven. Next we claim that if \(G^{\operatorname{der}}\) preserves a proper subspace containing \(V\), then so does \(G\). Indeed, \(G=Z(G)\cdot G^{\operatorname{der}}\). Since \(Z(G)\) commutes with \(\operatorname{Sp}(V)\) it must preserve \(W\), and hence also \(V\), as it is symplectic. Now let \(K\subsetneq W\) be minimal such that \(V\oplus K\) is preserved by \(G^{\operatorname{der}}\). Then for any \(z\in Z(G)\) the subspace \(V\oplus zK\) is also preserved by \(G^{\operatorname{der}}\) and hence so is \(V\oplus(K\cap zK)\). The claim follows by the minimality assumption. Therefore, we may replace \(G\) by its derived subgroup and assume it is semisimple and contained in \(\operatorname{Sp}(V\oplus W)\). We now consider the Lie algebra \(L:=\operatorname{Lie}(\operatorname{Sp}(V\oplus W))\) as a \(\operatorname{Sp}(V)\)-module via the adjoint action. We may write elements of \(L\) in matrix form as \(\left(\begin{smallmatrix}A&B\\ C&D\end{smallmatrix}\right)\) where \(A\in\mathfrak{sp}(V),B\in\operatorname{Hom}(W,V),C\in\operatorname{Hom}(V,W),D\in\mathfrak{sp}(W)\) satisfy the condition \(B=-C^{\vee}\), where we use the symplectic pairing to identify \(V,W\) with \(V^{\vee},W^{\vee}\) respectively. We may therefore write \(L=\mathfrak{sp}(V)\oplus\operatorname{Hom}(W,V)\oplus\mathfrak{sp}(W)\), where the summands constitute a decomposition of \(L\) into isotypic \(\operatorname{Sp}(V)\)-components: \(\operatorname{Hom}(W,V)\) is isomorphic to a power of \(V\) and \(\mathfrak{sp}(W)\) to a power of the trivial representation. Now consider \(M:=\operatorname{Lie}(G)\subset L\) as a \(\operatorname{Sp}(V)\)-representation; since \(M\) contains \(\mathfrak{sp}(V)\) and is \(\operatorname{Sp}(V)\)-stable, we may write it as \[M=\mathfrak{sp}(V)\oplus R\oplus S\] where \(R\subset\operatorname{Hom}(V,W),S\subset\mathfrak{sp}(W)\). Since \(M\) is a Lie algebra and the decomposition is isotypic, it follows that \(S\) is also a Lie algebra. 
Note that as \(R\) is preserved by \(\operatorname{Sp}(V)\) it follows that \(R=\operatorname{Hom}(V,K)\) for some \(K\subset W\). Suppose first that \(K\subsetneq W\). Note that the Lie action of \(S\) on \(R\) is simply right composition, and therefore \(S\) must preserve \(K\). It follows that elements of \(M\) preserve \(K\oplus V\), and thus that \(G\) preserves \(K\oplus V\), as desired. On the other hand, suppose that \(K=W\). Then \(M\) contains all of \(\operatorname{Hom}(W,V)\). We now prove that \(M\) must be all of \(L\). Indeed, for any elements \(w_{1}\in W,v_{1}\in V\) we have the element \(w\to\langle w,w_{1}\rangle v_{1}\) of \(\operatorname{Hom}(W,V)\), which corresponds to the element \(v\to\langle v,v_{1}\rangle w_{1}\) of \(\operatorname{Hom}(V,W)\). Applying the Lie bracket to two such elements (indexed by \(w_{1},v_{1}\) and \(w_{2},v_{2}\)) and projecting to \(\mathfrak{sp}(W)\) gives us the element \[w\to\langle w,w_{1}\rangle\langle v_{1},v_{2}\rangle w_{2}-\langle w,w_{2}\rangle\langle v_{2},v_{1}\rangle w_{1}\] of \(S\). Picking \(w_{1}=w_{2},\langle v_{1},v_{2}\rangle=\frac{1}{2}\) gives us the element \[w\to\langle w,w_{1}\rangle w_{1}.\] It is well known that these rank-one elements span all of \(\mathfrak{sp}(W)\), which shows \(S=\mathfrak{sp}(W)\), hence \(M=L\), and completes the proof. For the final part of the claim, suppose that \(V,W,G\) are defined over \(F\). Now consider all the subspaces \(K\subset W\) such that \(V\oplus K\) is preserved by \(G\). Then this set is closed under intersections and under \(\operatorname{Aut}(\mathbb{C}/F)\), and therefore the minimal such \(K_{0}\) is defined over \(F\). Now let \(K_{1}\subset K_{0}\) be the subspace that pairs to \(0\) with all of \(K_{0}\). Then \(K_{1}\) is also preserved by \(G\), and hence there is a complement \(K_{2}\), with \(K_{2}\oplus K_{1}=K_{0}\), such that \(V\oplus K_{2}\) is preserved by \(G\). By the minimality it must be the case that \(K_{2}=K_{0}\) and thus \(K_{0}\) is symplectic. Since \(K_{0}\) is defined over \(F\), the claim is proven. ## 5 Proof of Theorem 2 The purpose of this section is to prove Theorem 2. We shall in fact find it convenient to prove a slightly stronger theorem. To state it, we fix a counting function \(\phi\) on \(\mathcal{A}_{g}\) as in §2.9. **Theorem 8**.: _Let \(h<g\) be positive integers. Suppose that \(V\subset\mathcal{A}_{g+h}\) is a subvariety such that over \(\mathbb{C}\), the generic \(g\)-dimensional abelian variety is not a quotient of an abelian variety parameterized by \(V\). Then a \(\phi\)-generic abelian variety \(A\) is not a quotient of an abelian variety parameterized by \(V\)._ Proof.: We argue by contradiction. Let \(h\geq 0\) be minimal such that the statement is false, and pick a minimal counterexample \(V\subset\mathcal{A}_{g+h}\). **Lemma 5.1**.: \(V\) _is not contained in a proper weakly special subvariety of \(\mathcal{A}_{g+h}\)._ Proof.: Indeed, if it is, then by Proposition 4.1 the abelian scheme over \(V\) is not simple, and thus \(V\) admits a finite-to-one map to \(\mathcal{A}_{g+h^{\prime}}\times\mathcal{A}_{h-h^{\prime}}\) for some \(0\leq h^{\prime}<h\). In that case if we let \(V_{0}\) be the projection of \(V\) to \(\mathcal{A}_{g+h^{\prime}}\) then the \(\phi\)-generic \(A\in\mathcal{A}_{g}\) is not a quotient of anything represented by \(V_{0}\) by induction. It follows that the \(\phi\)-generic \(A\) is also not a quotient of anything in \(V\) since such an \(A\) is simple by Proposition 3.1. 
Now we set \(\mathcal{A}_{g},\mathcal{F}_{g},\pi_{g}\) as in §2, and we set \(\mathcal{F}_{V}:=\pi_{g}^{-1}(V)\cap\mathcal{F}_{g}\). **Lemma 5.2**.: \(V\) _is such that \(\dim V+\dim\mathcal{A}_{h}<\dim\mathcal{A}_{g+h}\)._ Proof.: Suppose not. Let \([A]\in\mathcal{A}_{g}\) be an abelian variety which is a quotient of something parameterized by \(V\). Then \(g([A]\times\mathbb{H}_{h})\cap\pi_{g}^{-1}(V)\neq\emptyset\) for some \(g\in\mathrm{GSp}_{2g+2h}(\mathbb{Q})\). By our dimension assumption, this is also true for a generic nearby \(x\in\mathcal{A}_{g}\). Picking \(x\) to be a generic \(\mathbb{C}\)-point of the \(\mathbb{Q}\)-variety \(\mathcal{A}_{g}\) contradicts our assumptions on \(V\). Next, for \(x\in\mathcal{F}_{g}\) we set \[I_{x}:=\{g\in\mathrm{GSp}_{2g+2h}(\mathbb{C})\mid g(x\times\mathcal{F}_{h})\cap\mathcal{F}_{V}\neq\emptyset\}.\] Note that \(I\to\mathcal{F}_{g}\) is a definable family. Moreover, for each \(g\in\mathrm{GSp}_{2g+2h}(\mathbb{C})\) the number of connected components of \(g(x\times\mathcal{F}_{h})\cap\mathcal{F}_{V}\) is \(O_{g}(1)\). Now, we let \([A]\in\phi^{-1}(U(\mathbb{Q}))\) be a \(\phi\)-generic point, defined over \(K\), which is a quotient of an abelian variety parameterized by \(V\). Then by Proposition 3.1, \([\mathrm{GSp}_{2g}(\hat{\mathbb{Z}}):\rho_{A}(G_{K})]=O_{g}(1)\). Let \([B]\in V\) be such that \(A\) is a quotient of \(B\), and let \(N_{A}\) be the minimal degree of a polarized isogeny from \(B\) to a product \(A\times C\) with \([C]\in\mathcal{A}_{h}\). **Lemma 5.3**.: _For \(N_{A}\gg 1\), there is a positive-dimensional weakly atypical subvariety \(W\subset V\) which contains a conjugate of \([B]\) over \(\mathbb{Q}(A)\)._ Proof.: By Theorem 7 the field of definition of \(B\) has degree \(\succ_{g}N_{A}\) over \(\mathbb{Q}\), and hence over \(\mathbb{Q}(A)\). Write \(\phi:B\to A\times C\) for the isogeny, and by a slight abuse of notation we use \([A],[C],[B]\) to refer to the representatives in the fundamental domains \(\mathcal{F}_{g},\mathcal{F}_{h},\mathcal{F}_{g+h}\). By Corollary 2.2 it follows that there exists an \(M\in I_{[A]}(\mathbb{Q})\) of height \(H(M)\prec_{g}N_{A}\) such that \(M([A]\times[C])=[B]\). In fact, we obtain such an element \(M_{B^{\prime}}\) for each Galois conjugate \(B^{\prime}\) of \(B\). Suppose the number of distinct \(M_{B^{\prime}}\) we obtain in this way is \(N_{A}^{o(1)}\). Then for some such element \(M\) we must have that \(M\cdot([A]\times\mathcal{F}_{h})\cap\mathcal{F}_{V}\) contains \(\succ_{g}N_{A}\) distinct conjugates of \(B\). Since this is a definable family as \(M\) varies, it follows that for \(N_{A}\gg 1\) it must contain at least one positive-dimensional component \(U\) containing a conjugate of \(B\) (in fact, many such conjugates). By Lemma 5.2 we see that \(U\) is an unlikely intersection. By Theorem 3 it follows that \(U\) is contained in a weakly atypical subvariety of \(V\). Thus we may assume that the number of distinct \(M_{B^{\prime}}\) is not \(N_{A}^{o(1)}\). By Theorem 6, we conclude that for \(N_{A}\gg 1\), \(I_{[A]}\) contains a semi-algebraic curve \(C_{0}\) containing some \(M_{B^{\prime}}\) for \(B^{\prime}\) a conjugate of \(B\). Let \(C\) be the complex algebraic curve containing \(C_{0}\). We now have two cases. Suppose that for some conjugate \(B^{\prime}\) we have \(M_{B^{\prime}}\cdot([A]\times\mathcal{F}_{h})\cap\mathcal{F}_{V}\) is positive dimensional, and let \(U\) denote an analytic component of such an intersection. We then conclude as above. 
Alternatively, we see that for a generic \(c\in C\) the variety \(c\cdot([A]\times\mathcal{F}_{h})\cap\mathcal{F}_{V}\) is finite. Since this is a definable family of (0-dimensional) varieties as \(c\) varies, it follows that these intersections are of size \(O_{g}(1)\). Hence the intersections must vary with \(c\in C\) and thus \(C\cdot([A]\times\mathcal{F}_{h})\cap\mathcal{F}_{V}\) is positive dimensional. Let \(U\) be an analytic component of the intersection. Again, by Lemma 5.2 we know that \(U\) is an unlikely intersection, so we conclude this case in the same way. We now apply our Geometric Zilber-Pink result (Theorem 4) to conclude that a conjugate of \(B\) is contained in a proper subvariety \(V_{1}\subset V\) which contains all the positive-dimensional weakly atypical subvarieties of \(V\). It follows that for \(\phi\)-generic \(A\) with \(N_{A}\gg 1\), if \(A\) is a quotient of something represented by \(V\) then \(A\) is a quotient of something represented by \(V_{1}\). By our minimality assumption on \(V\), we conclude that for a \(\phi\)-generic \(A\) which is a quotient of something represented by \(V\), the corresponding \(N_{A}\) is bounded. Now, for a fixed \(N\) the set of \(A\) with \(N_{A}=N\) lies in the projection of \(T_{N}\cdot V\cap(\mathcal{A}_{g}\times\mathcal{A}_{h})\) to \(\mathcal{A}_{g}\) for \(T_{N}\) the Hecke-correspondence for polarized isogenies of degree \(N\). This must be a proper subvariety, by the assumption on \(V\), and hence has density 0. This yields the desired contradiction.
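To make the counting quantities of §2.6 concrete, the following small Python sketch (added here purely as an illustration, not part of the argument above; the two example sets are assumptions chosen for contrast) computes the height \(H(q)\) of rational points and compares the count \(N(X,T)\) for an algebraic graph and for a graph on which, in the spirit of Theorem 5, only trivially many rational points lie.

```python
from fractions import Fraction
from math import gcd

def height(q):
    """Height of a rational point q = (q_1, ..., q_n): max over coordinates of max(|a_i|, |b_i|)."""
    return max(max(abs(x.numerator), abs(x.denominator)) for x in q)

def rationals_up_to(T):
    """All rationals a/b in [0, 1] with gcd(a, b) = 1 and max(a, b) <= T."""
    return [Fraction(a, b) for b in range(1, T + 1)
            for a in range(0, b + 1) if gcd(a, b) == 1]

def count_on_parabola(T):
    # N(X, T) for X = {(t, t^2) : t in [0, 1]}: an algebraic curve (so X = X^alg),
    # and the count grows like a power of T.
    return sum(1 for t in rationals_up_to(T) if height((t, t * t)) <= T)

def count_on_exp2(T):
    # N(X, T) for X = {(t, 2^t) : t in [0, 1]}: 2^t is irrational for every
    # non-integral rational t, so only the endpoints t = 0, 1 can contribute.
    return sum(1 for t in rationals_up_to(T)
               if t.denominator == 1 and height((t, Fraction(2) ** t.numerator)) <= T)

for T in (10, 50, 200):
    print(T, count_on_parabola(T), count_on_exp2(T))
```

The algebraic graph accumulates rational points polynomially in \(T\), while the transcendental one stays at two points, which is the dichotomy that the Pila-Wilkie counting theorem quantifies.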
2306.06646
Fractional Barrier Lyapunov Functions with Application to Learning Control
Barrier Lyapunov functions are suitable for learning control designs, due to their feature of finite duration tracking. This paper presents fractional barrier Lyapunov functions, provided and compared with the conventional ones in the error-constraint learning control designs. Two error models are adopted and the desired compensation control approach is applied for a non-parametric design, allowing two kinds of uncertainties involved in the error dynamics. Theoretical results about existence of the solution and convergence of the learning control schemes are presented. It is shown that fully-saturated learning algorithms play important role in assuring boundedness of the estimates, by which the error constraint objective can be achieved. Moreover, the robust technique is developed through modifying the discontinuous action involved in the learning control scheme that yields the expected tracking performance in the presence of residual.
Mingxuan Sun
2023-06-11T10:43:12Z
http://arxiv.org/abs/2306.06646v1
# Fractional Barrier Lyapunov Functions ###### Abstract Barrier Lyapunov functions are suitable for learning control designs, due to their feature of finite duration tracking. This paper presents fractional barrier Lyapunov functions, provided and compared with the conventional ones in the error-constraint learning control designs. Two error models are adopted and the desired compensation control approach is applied for a non-parametric design, allowing two kinds of uncertainties involved in the error dynamics. Theoretical results about existence of the solution and convergence of the learning control schemes are presented. It is shown that fully-saturated learning algorithms play important role in assuring boundedness of the estimates, by which the error constraint objective can be achieved. Moreover, the robust technique is developed through modifying the discontinuous action involved in the learning control scheme that yields the expected tracking performance in the presence of residual. Barrier Lyapunov functions; convergence; robustness; time-variant parametrization; iterative learning control. ## I Introduction Iterative learning control (ILC) features time-variant-signal learning, and exhibits the ability to improve the tracking performance of the closed-loop system undertaken, when executing tasks in a repetitive manner. Conventional ILC designs are carried out for _input learning_[1], while adaptive ILC schemes conduct _parameter learning_ for unknowns that involved in system dynamics [2]. An adaptive ILC control design, on the basis of the certainty-equivalence principle, underlines time-varying parametrization for the system model or its controller. The conventional integral adaptation and the iterative updating can be combined for parameter adaptation [3, 4]. This in certain sense clarifies the connection of the learning methodology with the conventional adaptive systems [5]. The projection-based learning algorithms for assuring bounded-estimates were reported in [2, 6]. The simple and direct versions are the partially-saturated algorithms [7, 8] and the fully-saturated ones [9]. The mentioned input-constraint schemes lead to the boundedness of the estimates themselves, not in the sense of \(L_{2}\), where the energy-like functional was shown to be suitable to the analysis of learning systems [8, 10]. Recently, in [11], the convergence performance was re-examined in the absence of residual. Desired compensation control (DCC) designs were presented in [2, 12] where the unknowns to be learnt are constants that can be dealt with by the conventional integral adaptation mechanisms. As such, the parametrization for the control design is required. As a unified way, this approach was also shown to be applicable for addressing the repetitive control problem of robotic systems [2]. The works reported in [6, 13] are closely related, where by assuming the desired dynamics, no direct parametrization is required. It should be noted that in the above-mentioned, the learning process involves no residuals. However, fewer works dealt with the situation of residuals. To tackling the norm-bounding uncertainties, the robust treatment with the use of the signum function is efficient so that the residuals can be avoided [13, 14]. The robustness improvement in case of nonzero approximation error is made possible due to the use of a deadzone modified Lyapunov functional in [15]. 
Besides the input constraint, the other critical issue is the state/output constraint within a prescribed region, due to the finite-interval-operation feature of ILC. The barrier Lyapunov function (BLF) based control designs are helpful for the constraint purpose. One of the pioneering works was reported in [16], which formally exploited the barrier-Lyapunov synthesis. The published results, however, are scattered and have not received the attention they deserve, and the designs suffer from the small number of available barrier functions. The logarithmic barrier function is the typical one, appearing in many related works. Another is the tangent barrier function [17], the first effort made toward constraint learning control design. Note that both logarithmic and tangent barrier functions are transcendental functions. For easy implementation, barrier functions that give strong barrier action but require less online computation are desirable. Moreover, the design problem of constraint learning control in the presence of residuals is still open. In this paper, we suggest novel fractional BLFs to realize error-constraint learning control. Our preliminary result of this effort was reported in [18], where we proposed to use a particular fractional BLF. Error models are convenient for adaptive system design and analysis [5], where the error dynamics are representative of a broad class of practical systems. We shall show how the DCC approach is helpful for handling uncertainties involved in error dynamics, whenever it is applicable. Conventional DCC designs assume that the desired system dynamics are known. We shall clarify why, in our learning control designs, the conventional parametrization can be avoided. To this end, there are two kinds of uncertainties to be handled. One is a time-varying but iteration-independent term, whose regressor is unity, and the other is a norm-bounded term, which vanishes as the error tends to zero. We shall establish the existence of solutions and the convergence of the learning control schemes. This approach can be compared with existing techniques: the adaptive approach requires parametrization, and the robust technique needs known norm-bounding functions. Compared with the existing related works, the contributions of this paper are: i) fractional BLFs for learning control designs, restricting the error variables of the closed-loop system; ii) a non-parametric DCC-based design method for the error models, which does not require parametrization; and iii) a learning control method for the case of residuals, together with fully-saturated learning algorithms assuring boundedness of the estimates. ## II Logarithmic BLF Revisited We usually carry out the constraint control redesign with a BLF, given that a Lyapunov function, \(V\), is available for the nominal system undertaken. Here it is assumed that \(0\leq V(0)<b_{V}\), where \(b_{V}>0\) is a barrier bound chosen by the designer. The conventionally used logarithmic BLF (LBLF) is given by \[f_{\rm LI}(V)=\log\frac{b_{V}}{b_{V}-V}. \tag{1}\] One may wish the BLF undertaken to satisfy the infinite barrier property (IBP). **Definition 1**: _A BLF \(f(V,b_{V})\) is said to satisfy the infinite barrier property, if_ \[\lim_{b_{V}\to\infty}f(V,b_{V})=cV,\] _with \(c>0\) being a constant, \(V\) being a given Lyapunov function and \(b_{V}\) indicating the barrier bound._ Whenever we need not provide any barrier, we can set the bound \(b_{V}\) to be large enough, according to this property.
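As a quick numerical illustration of Definition 1 (a sketch added for exposition; the sample value of \(V\) and the grid of bounds are assumptions), one can evaluate the logarithmic BLF (1) for a fixed \(V\) and a growing barrier bound \(b_{V}\): if the IBP held, the ratio \(f(V,b_{V})/V\) would approach a positive constant \(c\).

```python
import math

def f_LI(V, b_V):
    # Logarithmic BLF (1): log(b_V / (b_V - V)), defined for 0 <= V < b_V.
    return math.log(b_V / (b_V - V))

V = 0.5
for b_V in (1.0, 10.0, 100.0, 1000.0):
    print(f"b_V = {b_V:7.1f}   f_LI(V)/V = {f_LI(V, b_V) / V:.6f}")
# The ratio tends to 0 rather than to a positive constant, so (1) does not
# satisfy the infinite barrier property of Definition 1.
```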
Obviously, IBP fails for the above-mentioned BLF, \(f_{\rm LI}(V)\), because \(\lim_{b_{V}\to\infty}f_{\rm LI}(V)=0\). We suggest the following LBLF, for which IBP holds, \[f_{\rm LII}(V)=\log\frac{b_{V}e^{V}}{b_{V}-V}. \tag{2}\] In [17], such IBP was examined for an arctangent barrier function. We prefer the BLFs with larger values as well as larger derivatives, which offer stronger barrier action than those with smaller values. According to Tab. I, \(f_{\rm LII}(V)\) and its derivative are of larger values than those of \(f_{\rm LI}(V)\) and its derivative, although both second-order derivatives are equal. Hence, for the constraint control design, the function \(f_{\rm LII}(V)\) is the suggested one. Here, we introduce the notation \(f_{\rm LI}(V)\preceq f_{\rm LII}(V)\) for denoting the characterized property. ## III Fractional Barrier Lyapunov Functions Novel barrier functions, associated with useful properties, are presented here. We shall illustrate how a fractional BLF (FBLF) is formed, when a Lyapunov function is available, and explain why they are effective and efficient. A typical fractional BLF, applied in the learning control design [18], is in the form of \[f_{\rm FI}(V)=\frac{V}{b_{V}-V}. \tag{3}\] It follows that, by the inequality \(\log x\leq x-1\), for \(x>0\), \[f_{\rm LI}(V)\leq f_{\rm FI}(V). \tag{4}\] In addition, by noting that \(\frac{b}{b-x}\geq 1\), for \(0\leq x<b\), \[\frac{d}{dV}f_{\rm LI}(V)\leq\frac{d}{dV}f_{\rm FI}(V), \tag{5}\] \[\frac{d^{2}}{d^{2}V}f_{\rm LI}(V)\leq\frac{d^{2}}{d^{2}V}f_{\rm FI}(V). \tag{6}\] From (4)-(6), \(f_{\rm FI}(V)\) and its derivatives are of larger values than the values of \(f_{\rm LI}(V)\) and its derivatives. Hence, \(f_{\rm LI}(V)\preceq f_{\rm FI}(V)\), and \(f_{\rm FI}(V)\) is one FBLF which deserves to be fully explored and understood. In addition, IBP holds for the following FBLFs: \[f_{\rm FII}(V) = \frac{b_{V}V}{b_{V}-V}, \tag{7}\] \[f_{\rm FIII}(V) = \frac{b_{V}+1-V}{b_{V}-V}V, \tag{8}\] \[f_{\rm FIV}(V) = \frac{2b_{V}-V}{b_{V}-V}V, \tag{9}\] and \[f_{\rm FV}(V) = \frac{b_{V}+1}{b_{V}-V}V. \tag{10}\] The first- and second-order derivatives of the above-mentioned FBLFs are listed in Tab. II. Comparing with \(f_{\rm FIII}(V)\), we obtain the following relationships: \[f_{\rm FI}(V) \preceq f_{\rm FIII}(V), \tag{11}\] \[f_{\rm LI}(V) \preceq f_{\rm FIII}(V). \tag{12}\] In the situation that \(b_{V}\leq 1\) (requiring that \(V\leq 1\)), we have \[f_{\rm FII}(V) \preceq f_{\rm FIII}(V); \tag{13}\] in comparison with \(f_{\rm FIV}(V)\), \[f_{\rm FII}(V) \preceq f_{\rm FIV}(V). \tag{14}\] Moreover, in comparison with \(f_{\rm FV}(V)\), \[f_{\rm FI}(V) \preceq f_{\rm FV}(V), \tag{15}\] \[f_{\rm FII}(V) \preceq f_{\rm FV}(V), \tag{16}\] \[f_{\rm FIII}(V) \preceq f_{\rm FV}(V). \tag{17}\] **Remark 1**: _The fundamental form of LBLFs is given by (1), and (3) represents the basic form of FBLFs. We see that (3) is simpler in form than (1), besides the advantageous properties stated by (4)-(6). FBLF (7) is obtained by modifying (3). It follows from (8) that \(f_{\rm FIII}(V)=V+f_{\rm FI}(V)\); from (9), \(f_{\rm FIV}(V)=V+f_{\rm FII}(V)\); and from (10), \(f_{\rm FV}(V)=f_{\rm FI}(V)+f_{\rm FII}(V)\). These FBLFs are useful for the control design with a fixed barrier bound. 
IBP is a helpful property such that FBLFs (7)-(10) are applicable as the barrier bound is set to be large enough._ ## IV Constraint Iterative Learning Control In our present work we apply the DCC approach for the uncertainty compensation for the specified error models, along with the constraint control designs. By applying the DCC approach, the parametrization is not needed and the desired dynamics is not involved in the design, but the problem that emerges is how to handel the estimation for time-varying nonlinearities. ### _Control objective_ We will begin with our discussion about the error models. _Error model I_ The nonlinear error model is described by \[\dot{e}_{k} = f(e_{k},t)+g(e_{k},t)(u_{k}+\delta w_{k}(t)+\theta(t)) \tag{18}\] where \(t\in[0,T]\) and \(k\) is the iteration index; \(x_{k}\) is the \(n-\)dimensional vector of the system state, and \(u_{k}\) is the \(m\)-dimensional vector of the control input, \(e_{k}=x_{k}-x_{d}\) represents the state error of the \(k\)th cycle, with \(x_{d}\) being the desired trajectory given _a priori_ on \([0,T]\); \(\delta w_{k}(t)(=w(x_{k},t)-w(x_{d},t))\), \(w(\cdot,\cdot)\) indicating the lumped uncertainty, and \(\theta(t)=w(x_{d},t)\). Both \(f(\cdot,\cdot)\) and \(g(\cdot,\cdot)\) represent the nonlinearities of the error system undertaken. The expressions for \(\delta w_{k}(t)\) and \(\theta(t)\) in (18) are due to the DCC approach. For such error model, we face two kinds of uncertainties to be coped with. Here, \(\theta(t)\) is time-varying but state-independent, with the regressor being 1. It is assumed for dealing with \(\delta w(x_{k},x_{d},t)\) that \(\|\delta w_{k}(t)\|\leq\rho(x_{k},x_{d},t)\), and \(\rho(x_{k},x_{d},t)\) tends to zero, as \(x_{k}\) tends to \(x_{d}\). **Remark 2**: _The learning control designs, presented in this paper, have to tackle the problem arisen from the norm-bounded uncertainty, besides time-varying parameter estimation. Two typical cases are as follows: i) As \(w\) is continuously differentiable on \([0,T]\) for all \(k\), \(\rho(x_{k},x_{d},t)=l_{w}\|e_{k}\|\). This implies a global Lipschitz condition. The control design can apply the bound directly, when \(l_{w}\) is known. Improved one is to conduct estimation for \(l_{w}\). And ii) \(\rho(x_{k},x_{d},t)=\rho_{e}(x_{k},x_{d},t)\|e_{k}\|\), a Lipschitz-like condition. The bound \(\rho_{e}(x_{k},x_{d},t)\) is usually assumed to be known. However, throughout this paper, we do not assume such Lipschitz-like conditions._ Let the origin be an equilibrium of the nominal system, due to that \(f(0,t)=0,\forall t\in[0,T]\). Assume that the origin of the nominal system is globally asymptotically stable. Then there exists a continuous differential function, \(V(e_{k},t)\), such that \[\alpha_{1}(\|e_{k}\|)\leq V(e_{k},t)\leq\alpha_{2}(\|e_{k}\|) \tag{19}\] \[\frac{\partial V(e_{k},t)}{\partial t}+L_{f}V(e_{k},t)\leq-\alpha (\|e_{k}\|) \tag{20}\] where \(\alpha_{1}(\cdot),\alpha_{2}(\cdot)\) and \(\alpha(\cdot)\) are class \(K_{\infty}\) functions. The control objective of this paper is to find \(u_{k}\) such that the error \(e_{k}\) converges to a neighborhood of the origin, as faithfully as possible on \([0,T]\), as \(k\) increases. At the same time, \(e_{k}\) is enforced within a pre-specified region, for any \(t\in[0,T]\) and for all \(k\). 
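For concreteness, the following minimal sketch (an assumed scalar example added for illustration, not taken from the original) instantiates error model I with \(f(e,t)=-e\) and \(g(e,t)=1\), and checks the nominal-system conditions (19)-(20) with \(V(e)=\frac{1}{2}e^{2}\), \(\alpha_{1}(s)=\alpha_{2}(s)=\frac{1}{2}s^{2}\) and \(\alpha(s)=s^{2}\) on a grid of error values.

```python
import numpy as np

# Assumed scalar instance of error model I: f(e, t) = -e, g(e, t) = 1,
# nominal system e_dot = -e with Lyapunov function V(e) = 0.5 * e**2.
def V(e):      return 0.5 * e**2
def alpha1(s): return 0.5 * s**2          # lower class-K_inf bound in (19)
def alpha2(s): return 0.5 * s**2          # upper class-K_inf bound in (19)
def alpha(s):  return s**2                # decay function in (20)

e_grid = np.linspace(-5.0, 5.0, 1001)
dV_along_f = e_grid * (-e_grid)           # (dV/de) * f(e, t); no explicit t-dependence

ok_19 = np.all((alpha1(np.abs(e_grid)) <= V(e_grid)) & (V(e_grid) <= alpha2(np.abs(e_grid))))
ok_20 = np.all(dV_along_f <= -alpha(np.abs(e_grid)))
print("condition (19) holds on the grid:", ok_19)
print("condition (20) holds on the grid:", ok_20)
```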
In order to achieve this objective, the following category of initial conditions is taken into account throughout this paper: the actual initial state is set to be the same as the desired one, i.e., \(x_{k}(0)=x_{d}(0)\), for all \(k\). **Remark 3**: _As \(e_{k}(0)=0\), for all \(k\), \(\alpha_{1}(\|e_{k}(0)\|)=0\) and \(\alpha_{2}(\|e_{k}(0)\|)=0\), implying that \(V(e_{k}(0),t)=0\), for any \(t\in[0,T]\)._ We will also address the control design problem for the following error model, based on the DCC approach, _Error model II_ A simple form of the error model is as follows: \[\dot{e}_{k} = Ae_{k}+b(u_{k}+\delta w_{k}(t)+\theta(t)) \tag{21}\] where both \(A\) and \(b\) are of appropriate dimensions. The matrix \(A\) is stable, so that there exists a positive definite matrix \(P\) such that \(A^{T}P+PA=-Q\), for the given positive definite matrix \(Q\). **Remark 4**: _There are practical examples which can be expressed with the presented error models. Such special models have been adopted, and thus the results of this paper are applicable, for instance, to robotic systems [7]._ With the use of the proposed fractional BLFs, the error-constraint learning control designs for both models I and II are carried out, respectively. In addition, the theoretical results about existence of the solutions, stability and convergence of the closed-loop systems are presented. ### _Robust ILC with discontinuous action_ Using the given fractional BLF, the control law applied for model I is proposed as \[u_{k} = -\theta_{k}-\varsigma_{k} \tag{22}\] \[\varsigma_{k} = \left\{\begin{array}{ll}\frac{z_{k}}{\|z_{k}\|}\rho_{k},&\mbox{ if }\ \ z_{k}\neq 0\\ 0,&\mbox{ if }\ \ z_{k}=0\end{array}\right. \tag{23}\] with the following learning law used for the parameter adaptation, for each \(t\in[0,T]\), \[\theta_{k}(t) = \mbox{sat}(\theta_{k}^{*}(t)) \tag{24}\] \[\theta_{k}^{*}(t) = \mbox{sat}(\theta_{k-1}^{*}(t))+\gamma z_{k}(t) \tag{25}\] where \(\gamma>0\), \(z_{k}^{T}=\frac{b_{V}^{2}}{(b_{V}-V_{k})^{2}}L_{g}V_{k}\), and the notation \(V_{k}(t)=V(e_{k},t)\) is used for simplicity of presentation. Here, \(\frac{b_{V}V_{k}}{b_{V}-V_{k}}\) is the BLF \(f_{\rm FII}\) we suggest, and \(b_{V}\) is the bound on \(V_{k}\) we wish to set up. The following lemma is provided to aid the theoretical analysis. **Lemma 1**: _For positive sequences \(r_{k}\) and \(s_{k}\), if_ \[r_{k}\leq r_{k-1}-s_{k} \tag{26}\] _and \(r_{0}\) is bounded, then the sequences \(r_{k}\) and \(s_{k}\) are bounded for all \(k\), and \(\lim_{k\to\infty}s_{k}=0\)._ **Theorem 1**: _The solution of the ILC system, consisting of the error system (18), the control law (22)-(23), together with the learning law (24)-(25), exists on \([0,T]\) for all \(k\). Moreover, the tracking error \(e_{k}(t)\) converges to zero uniformly on \([0,T]\), as \(k\to\infty\)._ Proof.: To begin with, we appeal to the local existence theorem, by which there exists \(t_{1}\), \(0<t_{1}<T\), such that the solution exists on the interval \([0,t_{1})\). Suppose that \([0,t_{1})\) is the maximal interval of existence of the solution, and that the solution cannot be continued beyond it. With (22)-(23), the derivative of the chosen BLF along (18) can be calculated as \[\frac{d}{dt}\frac{b_{V}V_{k}}{b_{V}-V_{k}} \leq -\frac{b_{V}^{2}}{(b_{V}-V_{k}(t))^{2}}\alpha_{k}\] \[+z_{k}^{T}(u_{k}+\delta w_{k}+\theta)\] \[\leq -\frac{b_{V}^{2}}{(b_{V}-V_{k})^{2}}\alpha_{k}+z_{k}^{T}\tilde{\theta}_{k} \tag{27}\] where \(\tilde{\theta}_{k}=\theta-\theta_{k}\) and \(\alpha_{k}\) denotes \(\alpha(\|e_{k}(t)\|)\). 
With (24)-(25), we obtain \(\gamma z_{k}^{T}\tilde{\theta}_{k}-(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}=(\theta-\mbox{sat}(\theta_{k}^{*}))^{T}(\theta_{k}^{*}-\mbox{sat}(\theta_{k}^{*}))\). It follows from (20) in [11] that \((\theta-\mbox{sat}(\theta_{k}^{*}))^{T}(\theta_{k}^{*}-\mbox{sat}(\theta_{k}^{*}))\leq 0\). As such, \[\gamma z_{k}^{T}\tilde{\theta}_{k}\leq(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k} \tag{28}\] by which (27) can be rewritten as \[\frac{d}{dt}\frac{b_{V}V_{k}}{b_{V}-V_{k}} \leq -\frac{b_{V}^{2}}{(b_{V}-V_{k})^{2}}\alpha_{k}+\frac{1}{\gamma}(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}.\] Noting that \(V_{k}(0)=0\) gives rise to \[\frac{b_{V}V_{k}(t)}{b_{V}-V_{k}(t)} \leq \frac{1}{\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}\tilde{\theta}_{k}(s)ds \tag{29}\] for \(t\leq t_{1}\). Since \(\theta_{k}\) is uniformly bounded on \([0,t_{1})\), the right-hand side of (29) is bounded, implying that the term \(\frac{b_{V}V_{k}}{b_{V}-V_{k}}\) is bounded. In turn, \(V_{k}(t)\leq b_{V}\) on \([0,t_{1})\), due to the fact that \(\frac{b_{V}}{b_{V}-V_{k}(t)}>1\). It follows from (19) that \(e_{k}(t)\) is bounded on \([0,t_{1})\). This contradicts the assumption that \([0,t_{1})\) is the maximal interval of existence of the solution. Hence, the solution can be continued up to the boundary, and the solution exists on \([0,T]\) for each \(k\). Eq. (29) holds on \([0,T]\), and \(V_{k}(t)\leq b_{V}\) for all \(t\in[0,T]\), whenever \(V_{k}(0)\leq b_{V}\). By (19), \(e_{k}\) is bounded on \([0,T]\). In turn, \(z_{k}\) is bounded on \([0,T]\), according to its definition, and by (22) \(u_{k}\) is bounded on \([0,T]\). To proceed to establish the convergence result, let us choose the Lyapunov-Krasovskii functional \(L_{k}(t)=\frac{b_{V}V_{k}(t)}{b_{V}-V_{k}(t)}+\frac{1}{2\gamma}\int_{0}^{t}\tilde{\theta}_{k}^{T}(s)\tilde{\theta}_{k}(s)ds\). Using the equality \(\tilde{\theta}_{k}^{T}\tilde{\theta}_{k}-\tilde{\theta}_{k-1}^{T}\tilde{\theta}_{k-1}=-2\tilde{\theta}_{k}^{T}(\theta_{k}-\theta_{k-1})-(\theta_{k}-\theta_{k-1})^{T}(\theta_{k}-\theta_{k-1})\), the difference between \(L_{k}(t)\) and \(L_{k-1}(t)\) can be calculated as \[\Delta L_{k}(t)(=L_{k}(t)-L_{k-1}(t))\] \[\leq \frac{b_{V}V_{k}(0)}{b_{V}-V_{k}(0)}-\frac{b_{V}V_{k-1}(t)}{b_{V}-V_{k-1}(t)}\] \[-\int_{0}^{t}\frac{b_{V}^{2}}{(b_{V}-V_{k}(s))^{2}}\alpha_{k}(s)ds+\int_{0}^{t}z_{k}^{T}(s)\tilde{\theta}_{k}(s)ds\] \[-\frac{1}{\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}\tilde{\theta}_{k}(s)ds\] \[-\frac{1}{2\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}(\theta_{k}(s)-\theta_{k-1}(s))ds.\] It follows from (28) that \[\Delta L_{k}(t) \leq \frac{b_{V}V_{k}(0)}{b_{V}-V_{k}(0)}-\frac{b_{V}V_{k-1}(t)}{b_{V}-V_{k-1}(t)}\] \[-\int_{0}^{t}\frac{b_{V}^{2}}{(b_{V}-V_{k}(s))^{2}}\alpha_{k}(s)ds\] \[-\frac{1}{2\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}\] \[(\theta_{k}(s)-\theta_{k-1}(s))ds\] which implies, by \(V_{k}(0)=0\), \[\Delta L_{k}\leq-\frac{b_{V}V_{k-1}}{b_{V}-V_{k-1}}.\] With the fact that \(\frac{b_{V}}{b_{V}-V_{k}(s)}>1\), \[\Delta L_{k}\leq-V_{k-1}. \tag{30}\] \(L_{k}(t)\) is monotonically decreasing in \(k\) for each \(t\in[0,T]\). \(L_{0}\) is bounded, due to its continuity on \([0,T]\), which renders the boundedness of \(L_{k}\). As such, the limit of \(L_{k}\) exists, and from (30), \(\lim_{k\to\infty}V_{k}(t)=0\), for \(t\in[0,T]\). In turn, we conclude from (19) that \(\lim_{k\to\infty}e_{k}(t)=0\), for \(t\in[0,T]\). These convergence results are consistent with Lemma 1. 
It follows from (18) that \(\dot{e}_{k}(t)\) is bounded on \([0,T]\), implying that \(\lim_{k\to\infty}e_{k}(t)=0\) uniformly on \([0,T]\). This completes the proof. **Remark 5**: _By Theorem 1, \(V_{k}\) is enforced to be within a pre-specified region, i.e., \(V_{k}\leq b_{V}\). Although we make the constraint on \(V_{k}\), not directly for \(e_{k}\), it follows from (19) that \(\|e_{k}\|\leq\alpha_{1}^{-1}(b_{V})\) whenever \(V_{k}\leq b_{V}\)._ According to the result presented in Theorem 1, the implementation for model II can be given. The barrier Lyapunov function \(W_{k}=\frac{1}{2}\frac{b_{e}^{2}e_{k}^{T}Pe_{k}}{b_{e}^{2}-e_{k}^{T}Pe_{k}}\), a typical form of \(f_{\rm FII}\), is undertaken. The ILC system undertaken consists of the error dynamics (21), the control law (22)-(23) and the learning law (24)-(25), with \(z_{k}^{T}=\frac{b_{e}^{2}e_{k}^{T}Pb}{(b_{e}^{2}-e_{k}^{T}Pe_{k})^{2}}\). Provided that \(e_{k}^{T}(0)Pe_{k}(0)<b_{e}^{2}\), the derivative of the chosen function along (21) can be given as \[\dot{W}_{k} = -\frac{1}{2}\frac{b_{e}^{2}e_{k}^{T}Qe_{k}}{(b_{e}^{2}-e_{k}^{T}Pe_{k})^{2}}+z_{k}^{T}(u_{k}+\delta w_{k}+\theta)\leq-\frac{1}{2}\frac{b_{e}^{2}e_{k}^{T}Qe_{k}}{(b_{e}^{2}-e_{k}^{T}Pe_{k})^{2}}+z_{k}^{T}\tilde{\theta}_{k}.\] By (20) in [11], \(\gamma z_{k}^{T}\tilde{\theta}_{k}-(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}\leq 0\). Then \[\dot{W}_{k} \leq -\frac{1}{2}\frac{b_{e}^{2}e_{k}^{T}Qe_{k}}{(b_{e}^{2}-e_{k}^{T}Pe_{k})^{2}}+\frac{1}{\gamma}(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}. \tag{31}\] Since \(\theta_{k}\) is uniformly bounded, the right-hand side of (31) is bounded. Hence, \(\dot{W}_{k}(t)\) is bounded, leading to the boundedness of \(W_{k}(t)\), due to the boundedness of \(W_{k}(0)\). Since \(\frac{b_{e}^{2}}{b_{e}^{2}-e_{k}^{T}Pe_{k}}>1\), it follows that \(e_{k}^{T}Pe_{k}<b_{e}^{2}\), which renders \(e_{k}(t)\) bounded. Hence, the solution can be continued up to the boundary, and the solution of the ILC system exists on \([0,T]\) for each \(k\). In turn, \(z_{k}\) is bounded according to its definition, and by (22) \(u_{k}\) is bounded. To proceed to establish the convergence, we appeal to the following relationship, \[W_{k}(t) \leq -\frac{1}{2}\int_{0}^{t}\frac{b_{e}^{2}e_{k}^{T}(s)Qe_{k}(s)}{(b_{e}^{2}-e_{k}^{T}(s)Pe_{k}(s))^{2}}ds\] \[+\int_{0}^{t}z_{k}^{T}(s)\tilde{\theta}_{k}(s)ds\] where the condition \(e_{k}(0)=0\) is used. The difference between \(L_{k}(t)\) and \(L_{k-1}(t)\) along (21) can be written as \[\Delta L_{k}(t)\] \[\leq -W_{k-1}(t)-\frac{1}{2}\int_{0}^{t}\frac{b_{e}^{2}e_{k}^{T}(s)Qe_{k}(s)}{(b_{e}^{2}-e_{k}^{T}(s)Pe_{k}(s))^{2}}ds\] \[+\int_{0}^{t}z_{k}^{T}(s)\tilde{\theta}_{k}(s)ds\] \[-\frac{1}{\gamma}\int_{0}^{t}\tilde{\theta}_{k}^{T}(s)(\theta_{k}(s)-\theta_{k-1}(s))ds\] \[-\frac{1}{2\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}(\theta_{k}(s)-\theta_{k-1}(s))ds.\] With the learning law, we obtain \[\Delta L_{k} \leq -W_{k-1}(t)-\frac{1}{2}\int_{0}^{t}\frac{b_{e}^{2}e_{k}^{T}(s)Qe_{k}(s)}{(b_{e}^{2}-e_{k}^{T}(s)Pe_{k}(s))^{2}}ds\] \[-\frac{1}{2\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}\] \[(\theta_{k}(s)-\theta_{k-1}(s))ds\] implying that \[\Delta L_{k} \leq -W_{k-1}. \tag{32}\] Note that \(\frac{b_{e}^{2}}{b_{e}^{2}-e_{k}^{T}Pe_{k}}>1\), as \(b_{e}^{2}-e_{k}^{T}Pe_{k}>0\). It follows from (32) that \[\Delta L_{k} \leq -\frac{1}{2}e_{k-1}^{T}Pe_{k-1}.\] \(L_{0}\) is bounded on \([0,T]\), due to its continuity. By Lemma 1 and the boundedness of \(\dot{e}_{k}\), the uniform convergence of \(e_{k}\) on \([0,T]\) can be established. 
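To illustrate how the pieces of this section fit together, the following sketch simulates the fully-saturated learning law (24)-(25) and the control law (22)-(23) on a scalar instance of error model II, using the BLF-weighted signal \(z_{k}\) defined above for model II. This is a toy numerical example only: the scalar system, gains, barrier bound, saturation level and the uncertainty below are all assumptions introduced for illustration and are not taken from the paper.

```python
import numpy as np

# Assumed toy instance of error model II: e_dot = A e + b (u + delta_w + theta(t)),
# with A = -1, b = 1, Q = 1, and P = 0.5 solving A*P + P*A = -Q.
T, dt = 1.0, 1e-3
t_grid = np.arange(0.0, T + dt, dt)
A, b_in, P = -1.0, 1.0, 0.5
b_e = 1.0                                   # barrier bound: e^T P e < b_e**2
gamma, theta_bar = 5.0, 3.0                 # assumed learning gain and saturation level

theta = 0.5 + np.sin(2.0 * np.pi * t_grid)  # unknown, iteration-invariant theta(t)
theta_star = np.zeros_like(t_grid)          # stored estimate theta_k^*(t)

sat = lambda x: np.clip(x, -theta_bar, theta_bar)

for k in range(8):
    theta_star_prev = theta_star.copy()
    e, max_err = 0.0, 0.0                   # identical initial condition e_k(0) = 0
    for i, _t in enumerate(t_grid):
        rho = 0.2 * abs(e)                  # norm bound on delta_w_k, vanishing with e
        z = b_e**2 * e * P * b_in / (b_e**2 - P * e**2) ** 2
        # fully-saturated learning law (24)-(25), pointwise in t
        theta_star[i] = sat(theta_star_prev[i]) + gamma * z
        theta_hat = sat(theta_star[i])
        # control law (22)-(23) with the discontinuous robust action
        u = -theta_hat - np.sign(z) * rho
        delta_w = 0.2 * np.sin(e)           # actual uncertainty, |delta_w| <= 0.2 |e|
        e += dt * (A * e + b_in * (u + delta_w + theta[i]))
        max_err = max(max_err, abs(e))
    print(f"iteration {k}: max |e_k(t)| over [0, T] = {max_err:.4f}")
```

In a typical run of this sketch, the peak tracking error decreases over the iterations while \(e_{k}^{T}Pe_{k}\) remains below \(b_{e}^{2}\), which is the qualitative behaviour predicted by the analysis above.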
## V Robust ILC with continuous action The control law (22) involves the discontinuous term (23). To tackle this issue, we modify it by taking \[\varsigma_{k} = \frac{\mu_{k}}{\|\mu_{k}\|+\epsilon}\rho_{k} \tag{33}\] where \(\epsilon>0\), \(\mu_{k}=z_{k}\rho_{k}\), and \(z_{k}^{T}=\frac{b_{V}(b_{V}+1)}{(b_{V}-V_{k})^{2}}L_{g}V_{k}\). The same form of learning law (24)-(25) is applied. Here, for the constraint we choose the BLF \(f_{\rm FV}=\frac{(b_{V}+1)V_{k}}{b_{V}-V_{k}}\), and \(b_{V}\) is the bound on \(V_{k}\). The following technical lemma is helpful for finalizing the performance analysis to be presented. **Lemma 2**: _Given a sequence \(d_{k}\), suppose that positive sequences \(r_{k}\) and \(s_{k}\) satisfy_ \[r_{k}\leq r_{k-1}-s_{k}+d_{k}, \tag{34}\] _with \(r_{0}\) bounded, and that \(s_{k}\) tends to zero whenever \(r_{k}\) does. Then, i) \(s_{k}\) is bounded for all \(k\), and \(\limsup_{k}s_{k}\leq\bar{d}\), provided that \(|d_{k}|\leq\bar{d}\) for all \(k\); and ii) \(\lim_{k\rightarrow\infty}s_{k}=0\), provided that \(\lim_{k\rightarrow\infty}d_{k}=0\)._ We refer to [19] for the proof. **Theorem 2**: _The solution of the modified ILC system exists on \([0,T]\) for all \(k\). Moreover, the tracking error \(e_{k}(t)\) converges to a neighborhood of the origin, with the radius proportional to the given \(\epsilon\), on \([0,T]\), as \(k\rightarrow\infty\)._ Proof.: At first, we assume that \([0,t_{1})\), \(0<t_{1}<T\), is the maximal interval of existence of the solution. We calculate the derivative of the chosen BLF, which satisfies \[\frac{d}{dt}\frac{(b_{V}+1)V_{k}}{b_{V}-V_{k}} \leq -\frac{b_{V}(b_{V}+1)}{(b_{V}-V_{k})^{2}}\alpha_{k}\] \[+z_{k}^{T}(u_{k}+\delta w_{k}+\theta).\] Applying the modified control law, \[z_{k}^{T}(u_{k}+\delta w_{k}+\theta)\] \[\leq z_{k}^{T}\left(\tilde{\theta}_{k}-\frac{\mu_{k}}{\|\mu_{k}\|+\epsilon}\rho_{k}\right)+\|z_{k}\|\rho_{k}\] \[\leq z_{k}^{T}\tilde{\theta}_{k}+\frac{\|\mu_{k}\|\epsilon}{\|\mu_{k}\|+\epsilon}\] \[\leq z_{k}^{T}\tilde{\theta}_{k}+\epsilon\] which leads to \[\frac{d}{dt}\frac{(b_{V}+1)V_{k}}{b_{V}-V_{k}} \leq -\frac{b_{V}(b_{V}+1)}{(b_{V}-V_{k})^{2}}\alpha_{k} \tag{35}\] \[+z_{k}^{T}\tilde{\theta}_{k}+\epsilon.\] With the modified learning law, (35) can be rewritten as \[\frac{d}{dt}\frac{(b_{V}+1)V_{k}}{b_{V}-V_{k}} \leq -\frac{b_{V}(b_{V}+1)}{(b_{V}-V_{k})^{2}}\alpha_{k}\] \[+\frac{1}{\gamma}(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}+\epsilon\] \[\leq \frac{1}{\gamma}(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}+\epsilon\] implying that, by \(V_{k}(0)=0\), \[\frac{(b_{V}+1)V_{k}(t)}{b_{V}-V_{k}(t)}\leq\frac{1}{\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}\tilde{\theta}_{k}(s)ds+\epsilon t \tag{36}\] for \(t\leq t_{1}\). Since \(\theta_{k}\) is uniformly bounded on \([0,t_{1})\), the right-hand side of (36) is bounded. Note that \(\frac{b_{V}+1}{b_{V}-V_{k}(t)}>1\), assuring that \(V_{k}(t)\leq b_{V}\). In turn, \(e_{k}(t)\) is bounded on \([0,t_{1})\), which contradicts the assumption that \([0,t_{1})\) is the maximal interval of existence of the solution. Hence, the solution of the modified ILC system exists on \([0,T]\) for all \(k\). For the convergence analysis, we choose the barrier Lyapunov-Krasovskii functional, \(L_{k}(t)=\frac{(b_{V}+1)V_{k}(t)}{b_{V}-V_{k}(t)}+\frac{1}{2\gamma}\int_{0}^{t}\tilde{\theta}_{k}^{T}(s)\tilde{\theta}_{k}(s)ds\). 
It follows from (35) that \[\frac{(b_{V}+1)V_{k}(t)}{b_{V}-V_{k}(t)} \leq \frac{(b_{V}+1)V_{k}(0)}{b_{V}-V_{k}(0)}\] \[-\int_{0}^{t}\frac{b_{V}(b_{V}+1)}{(b_{V}-V_{k}(s))^{2}}\alpha_{k}(s)ds\] \[+\int_{0}^{t}z_{k}^{T}(s)\tilde{\theta}_{k}(s)ds+\epsilon t.\] Then, the difference of \(L_{k}\) and \(L_{k-1}\) can be derived, which satisfies \[\Delta L_{k}(t) \leq \frac{b_{V}+1}{b_{V}-V_{k}(0)}V_{k}(0)-\frac{b_{V}+1}{b_{V}-V_{k-1}(t)}V_{k-1}(t)\] \[-\int_{0}^{t}\frac{b_{V}(b_{V}+1)}{(b_{V}-V_{k}(s))^{2}}\alpha_{k}(s)ds-\frac{1}{2\gamma}\int_{0}^{t}(\theta_{k}(s)\] \[-\theta_{k-1}(s))^{T}(\theta_{k}(s)-\theta_{k-1}(s))ds\] \[+\epsilon T.\] Since \(V_{k}(0)=0\), \[\Delta L_{k}\leq-\frac{b_{V}+1}{b_{V}-V_{k-1}}V_{k-1}+\epsilon T.\] According to Lemma 2, we can conclude that \[\overline{\lim}_{k\rightarrow\infty}\frac{(b_{V}+1)V_{k}}{b_{V}-V_{k}}\leq\epsilon T.\] Due to \(\frac{b_{V}+1}{b_{V}-V_{k}}\geq 1\), we have \[\overline{\lim}_{k\rightarrow\infty}V_{k}\leq\epsilon T.\] Hence, using (19), \(\overline{\lim}_{k\rightarrow\infty}\|e_{k}\|\leq\alpha_{1}^{-1}(\epsilon T)\). This completes the proof. The implementation can be made for the ILC system, by taking \(z_{k}^{T}=\frac{b_{e}^{2}(b_{e}^{2}+1)}{(b_{e}^{2}-e_{k}^{T}Pe_{k})^{2}}e_{k}^{T}Pb\), where we choose the barrier Lyapunov function \(W_{k}=\frac{1}{2}\frac{(b_{e}^{2}+1)e_{k}^{T}Pe_{k}}{b_{e}^{2}-e_{k}^{T}Pe_{k}}\), a typical form of \(f_{\rm FV}\). Assume that the interval \([0,t_{1}),0<t_{1}<T\), is the maximal interval of existence of the solution. The derivative of the chosen function can be calculated as \[\dot{W}_{k} = -\frac{1}{2}e_{k}^{T}Qe_{k}\frac{b_{e}^{2}(b_{e}^{2}+1)}{(b_{e}^{2}-e_{k}^{T}Pe_{k})^{2}}\] \[+z_{k}^{T}(u_{k}+\delta w_{k}+\theta).\] By derivations similar to those leading to (36), we obtain \[\dot{W}_{k} \leq -\frac{1}{2}e_{k}^{T}Qe_{k}\frac{b_{e}^{2}(b_{e}^{2}+1)}{(b_{e}^{2}-e_{k}^{T}Pe_{k})^{2}}\] \[+\frac{1}{\gamma}(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}+\epsilon\] which gives rise to \[\dot{W}_{k} \leq \frac{1}{\gamma}(\theta_{k}-\theta_{k-1})^{T}\tilde{\theta}_{k}+\epsilon. \tag{37}\] Since \(\theta_{k}\) is uniformly bounded on \([0,t_{1})\), the right-hand side of (37) is bounded. Hence, \(\dot{W}_{k}(t)\) is bounded, implying that \(W_{k}(t)\) is bounded, due to the boundedness of \(W_{k}(0)\). Since \(\frac{b_{e}^{2}}{b_{e}^{2}-e_{k}^{T}Pe_{k}}>1\), \(e_{k}^{T}Pe_{k}<b_{e}^{2}\) for \([0,t_{1})\). In turn, \(e_{k}\) is bounded on \([0,t_{1})\). This contradicts the assumption that \([0,t_{1})\) is the maximal interval of existence of the solution, and hence the solution of the modified ILC system exists on \([0,T]\) for each \(k\). With the chosen BLF, the difference between \(L_{k}(t)\) and \(L_{k-1}(t)\) is calculated, which satisfies, by \(e_{k}(0)=0\), \[\Delta L_{k}(t) \leq -W_{k-1}(t)\] \[-\frac{1}{2}\int_{0}^{t}e_{k}^{T}(s)Qe_{k}(s)\frac{b_{e}^{2}(b_{e}^{2}+1)}{(b_{e}^{2}-e_{k}^{T}(s)Pe_{k}(s))^{2}}ds\] \[-\frac{1}{2\gamma}\int_{0}^{t}(\theta_{k}(s)-\theta_{k-1}(s))^{T}(\theta_{k}(s)-\theta_{k-1}(s))ds\] \[+\epsilon T\] implying that \[\Delta L_{k}\leq-W_{k-1}+\epsilon T. 
\tag{38}\] It follows from (38) that, by Lemma 2, \[\overline{\lim}_{k\rightarrow\infty}\frac{(b_{e}^{2}+1)e_{k}^{T}Pe_{k}}{b_{e}^{2}-e_{k}^{T}Pe_{k}}\leq 2\epsilon T.\] Using the fact that \(\frac{b_{e}^{2}+1}{b_{e}^{2}-e_{k}^{T}Pe_{k}}>1\), we obtain \[\overline{\lim}_{k\rightarrow\infty}e_{k}^{T}Pe_{k}\leq 2\epsilon T.\] By the convergence result of \(e_{k}^{T}Pe_{k}\), \(e_{k}\) converges to a neighborhood of the origin on \([0,T]\), as \(k\rightarrow\infty\), with the radius proportional to \(\sqrt{2\epsilon T/\lambda_{\min}}\), where \(\lambda_{\min}\) is the minimum eigenvalue of the matrix \(P\). **Remark 6**: _It is seen that Lemma 2 plays a crucial role in finalizing the analysis for an ILC system in the presence of residuals. It should be noted that Lemma 2 is applicable to various iterative processes with residuals. Lemma 1 can be considered as a corollary of Lemma 2, by setting \(d_{k}=0\) in (34)._ **Remark 7**: _The fully-saturated learning algorithm (24)-(25) ensures the uniform boundedness of the estimates, by which the existence of the solution can be established and the convergence assessment can be carried out._ **Remark 8**: _The constraint control technique is well suited to an ILC system, because such a system runs over a finite interval, and the output has to be limited within the specified region. By Theorem 1, \(e_{k}(t)\) is enforced to be within a pre-specified region, as \(V_{k}(t)\leq b_{V}\). This in turn takes effect to restrict the state variables, as we shall show by the simulation result._ **Remark 9**: _The control law (22)-(23) may cause the chattering phenomenon when implemented. We suggest applying a modified control law, with the modified term (33), which makes the control smooth so that the chattering phenomenon can be avoided. However, only bounded-error convergence can be assured, with the convergence bound proportional to \(\epsilon\), an adjustable design parameter. The applied DCC approach is different from the conventional robust techniques, since the norm-bound \(\rho_{k}\) decreases with respect to the tracking error, and will approach zero as the error tends to zero._ ## VI Conclusion This paper has presented novel fractional barrier Lyapunov functions, and the error-constraint ILC designs have been conducted for two error models. The DCC approach has been shown applicable for developing the learning control schemes, whenever the parametrization is not available. Theoretical results about the existence of the solution and the convergence of the learning control algorithms have been presented. It has been shown that fully-saturated learning algorithms are effective in assuring the boundedness of the estimates, such that the objective of the error constraint can be achieved. In addition, the robust control technique, through modifying the discontinuous action, has been shown to yield the expected tracking performance in the presence of residuals.
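As a small appendix-style illustration of the smoothing discussed in Remark 9 (the numerical values below are assumptions chosen for illustration only), the following sketch compares the discontinuous action (23) with the continuous action (33) near \(z_{k}=0\), where chattering would otherwise originate.

```python
import numpy as np

rho, eps = 0.5, 0.01                       # assumed bound value and smoothing parameter

def varsigma_discontinuous(z):
    # (23): sign-type action, switches abruptly at z = 0
    return np.sign(z) * rho

def varsigma_continuous(z):
    # (33): with mu = z * rho, the action mu * rho / (|mu| + eps) is smooth in z
    mu = z * rho
    return mu * rho / (np.abs(mu) + eps)

for z in (-0.1, -0.01, 0.0, 0.01, 0.1):
    print(f"z = {z:6.2f}   discontinuous = {varsigma_discontinuous(z):6.2f}   "
          f"continuous = {varsigma_continuous(z):6.3f}")
```

The continuous action passes smoothly through zero, at the price of the residual that leads to the bounded-error convergence of Theorem 2; smaller \(\epsilon\) tightens the bound but brings the action closer to the switching one.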
2305.09025
Soft Prompt Decoding for Multilingual Dense Retrieval
In this work, we explore a Multilingual Information Retrieval (MLIR) task, where the collection includes documents in multiple languages. We demonstrate that applying state-of-the-art approaches developed for cross-lingual information retrieval to MLIR tasks leads to sub-optimal performance. This is due to the heterogeneous and imbalanced nature of multilingual collections -- some languages are better represented in the collection and some benefit from large-scale training data. To address this issue, we present KD-SPD, a novel soft prompt decoding approach for MLIR that implicitly "translates" the representation of documents in different languages into the same embedding space. To address the challenges of data scarcity and imbalance, we introduce a knowledge distillation strategy. The teacher model is trained on rich English retrieval data, and by leveraging bi-text data, our distillation framework transfers its retrieval knowledge to the multilingual document encoder. Therefore, our approach does not require any multilingual retrieval training data. Extensive experiments on three MLIR datasets with a total of 15 languages demonstrate that KD-SPD significantly outperforms competitive baselines in all cases. We conduct extensive analyses to show that our method has less language bias and better zero-shot transfer ability towards new languages.
Zhiqi Huang, Hansi Zeng, Hamed Zamani, James Allan
2023-05-15T21:17:17Z
http://arxiv.org/abs/2305.09025v1
# Soft Prompt Decoding for Multilingual Dense Retrieval ###### Abstract. In this work, we explore a Multilingual Information Retrieval (MLIR) task, where the collection includes documents in multiple languages. We demonstrate that applying state-of-the-art approaches developed for cross-lingual information retrieval to MLIR tasks leads to sub-optimal performance. This is due to the heterogeneous and imbalanced nature of multilingual collections - some languages are better represented in the collection and some benefit from large-scale training data. To address this issue, we present KD-SPD, a novel soft prompt decoding approach for MLIR that implicitly "translates" the representation of documents in different languages into the same embedding space. To address the challenges of data scarcity and imbalance, we introduce a knowledge distillation strategy. The teacher model is trained on rich English retrieval data, and by leveraging bi-text data, our distillation framework transfers its retrieval knowledge to the multilingual document encoder. Therefore, our approach does not require any multilingual retrieval training data. Extensive experiments on three MLIR datasets with a total of 15 languages demonstrate that KD-SPD significantly outperforms competitive baselines in all cases. We conduct extensive analyses to show that our method has less language bias and better zero-shot transfer ability towards new languages. Multilingual retrieval; Prompt-based learning; Knowledge distillation; Dense retrieval
Despite covering many different languages, the performance of multilingual pre-trained models varies by language in many downstream tasks (Wang et al., 2019; Wang et al., 2020). MLIR models built on such pre-trained models can inherit language bias, leading to inconsistent ranking results. To demonstrate this case, we pair the test queries from TREC 2020 Deep Learning Track (Chen et al., 2020) with their relevant passages translated into Arabic and Russian by mMARCO (Chen et al., 2020). Then for each language, we score query-document pairs using the multilingual dense passage retriever (mDPR) (Wang et al., 2020). Figure 1 illustrates the difference in ranking the same set of relevant documents in these two languages. We observe that mDPR scores Russian documents higher than their Arabic versions. We argue that such language bias in MLIR leads to sub-optimal ranking results, e.g., highly relevant documents in Arabic receive lower scores than slightly relevant documents in Russian.
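The comparison behind Figure 1 boils down to averaging the bi-encoder score assigned to the same set of relevant passages in each language. The sketch below is only an illustration of that diagnostic, assuming a hypothetical `encode_query`/`encode_passage` interface for the two towers of an mDPR-style model and translation-aligned passage lists; it is not the evaluation code used in this paper.

```python
import numpy as np

def average_parallel_scores(queries, passages_by_lang, encode_query, encode_passage):
    """Average dot-product score per language for translation-aligned relevant passages.

    `queries` is a list of query strings; `passages_by_lang[lang]` is a list of
    passages aligned index-by-index with `queries`. `encode_query` and
    `encode_passage` are hypothetical callables returning 1-D embeddings.
    """
    q_emb = np.stack([encode_query(q) for q in queries])            # (Q, dim)
    averages = {}
    for lang, passages in passages_by_lang.items():
        p_emb = np.stack([encode_passage(p) for p in passages])     # (Q, dim)
        averages[lang] = float(np.mean(np.sum(q_emb * p_emb, axis=1)))
    return averages

# A systematic gap, e.g. averages["ru"] > averages["ar"] on parallel relevant
# passages, is the kind of language bias illustrated in Figure 1.
```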
To address these issues, we present KD-SPD,1 a multilingual dense retrieval model based on knowledge distillation (KD) and soft prompt decoder (SPD) for the MLIR task. KD-SPD does not require any multilingual relevance labels for training, thus automatically solving the data scarcity issue in low-resource languages. Our approach solely requires monolingual retrieval training data in English, which we obtain from MS MARCO (Wang et al., 2019), and a large collection of parallel and comparable documents. Note that such data is abundant and easily collected through automatic bi-text mining algorithms (Dosov et al., 2016). We use CCAligned (Dosov et al., 2016) in our experiments. Footnote 1: KD refers to the model training framework, and SPD refers to the model architecture. We first train a monolingual dense retrieval model \(M\), such as ANCE (Wang et al., 2019), for the English language. Since this model has the relevance matching ability, we freeze its document encoder and then minimize the distance between the representations learned by \(M\) for any English document and the representations learned by KD-SPD for its parallel or comparable version in other languages. In fact, \(M\) acts as a monolingual teacher model for the multilingual student model KD-SPD. Therefore, our approach implicitly "translates" the representation of documents in different languages into the same language embedding space. We hypothesize that although different languages possess unique properties such as distinct grammar or vocabulary, they also have common traits for expressing similar meanings. To capture these unique and shared features, KD-SPD uses decomposable soft prompt, which is derived as the product of a shared matrix and a low-rank language-specific matrix for each language. Our proposed encoder-decoder architecture transforms documents into contextualized token embeddings and decodes the outputs with language-specific prompts to obtain a final representation. Through joint training across multiple languages, we observe that the learned prompts are capable of reducing language bias and possess the transferable capacity to generalize to unseen languages. We performed extensive experiments on three MLIR datasets with a total of 15 languages from diverse linguistic families, including both high- and low-resource languages. We also conduct experiments on different relevant distributions with respect to language. In terms of mean average precision (MAP), our proposed method significantly outperforms several strong baseline methods in all multilingual settings, including a 20.2% improvement over mDPR and a 9.6% improvement over a multilingual knowledge distillation method from Sentence-BERT (SBERT) (Wang et al., 2019). Further analysis demonstrates that KD-SPD has less language bias and better zero-shot transfer ability toward new languages. ## 2. Related Work ### Neural Matching Models for MLIR With respect to language settings, MLIR and CLIR are closely related. CLIR mostly focuses on retrieval between two particular languages, while MLIR considers multiple language pairs between query and document. In general, information retrieval involving a multilingual setting has two sub-tasks: translation and query-document matching. One method involves translating the query into the language of the document set, then using a monolingual retrieval model to evaluate relevance. 
The translation sub-task can be performed using Statistical Machine Translation (SMT) (Chen et al., 2020) or Neural Machine Translation (NMT) (Wang et al., 2020). The two-step process of translation followed by retrieval is widely used; however, with the advent of bilingual word representation (Chen et al., 2020; Wang et al., 2020) and multilingual pre-trained language models (Chen et al., 2020; Chen et al., 2020), it is possible to bypass the translation step and match the query and document in different languages within a shared representation space. Multilingual pre-trained language models usually prepend a special token to the input sequence to support downstream applications. Because the special token embedding is contextualized based on other tokens in the sequence, once finetuned, they are effective across various tasks, including retrieval tasks (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). Named cross-encoder, the model takes the concatenation of the query and document as input. An embedding of the "[CLS]" token is fed into a feed-forward layer to produce a score for the input pair (Wang et al., 2020). With multilingual knowledge from pre-training, these language models help bridge the vocabulary between query and document languages. Like monolingual retrieval, multilingual retrieval models based on cross-encoder are computationally expensive and usually rely on a lexical-based sparse retrieval as the first step to finding relevant information. Dense retrieval based on a bi-encoder architecture is proposed to overcome the sparse retrieval bottleneck (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020). With the separation of the query document encoders, dense retrieval has already shown success on monolingual retrieval tasks (Wang et al., 2020; Wang et al., 2020). By replacing the underlying language model with its multilingual version, dense retrieval is extended to a multilingual setting (Wang et al., 2020). Figure 1. Average score given to parallel documents in Arabic and Russian by mDPR (Wang et al., 2020). Queries and relevant judgments are from the TREC 2020 Deep Learning Track. Passages are translated by mMARCO (Chen et al., 2020) However, the translation gap prevents the multilingual retrieval models from achieving the same level of performance as models in the monolingual (i.e., English-to-English) setting (Kang et al., 2019). Supporting the model with abundant multilingual retrieval data is one way to reduce the effect of the translation gap. Sasaki et al. (2019) constructed large-scale, weakly supervised CIR collections based on the linked foreign language articles from Wikipedia pages. Bonifacio et al. (Bonifacio et al., 2019) built MLIR training data using neural machine translation models. Besides retrieval data, approaches like utilizing external knowledge in language-specific modules are also suggested to close the language gap. Bonab et al. (2019) showed that when fine-tuned with retrieval data, dictionary-oriented word embedding could improve the performance of a CLIR model. Huang et al. (2019) proposed a mixed attention transformer architecture to learn relevance judgments and word-level translation knowledge jointly. Yang et al. (2019) designed a language adapter component to efficiently transfer models based on monolingual data to a cross-lingual setting. These approaches mostly focus on improving CLIR performance where the query and the target documents are from two particular languages. 
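For concreteness, the cross-encoder scoring described above (concatenate query and document, feed the "[CLS]" embedding to a feed-forward layer) can be sketched as follows. This is a schematic only: the multilingual checkpoint name is an example, and the scoring head would still need fine-tuning on relevance data.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CrossEncoderScorer(nn.Module):
    """Schematic multilingual cross-encoder: score = FFN([CLS] of "query [SEP] doc")."""

    def __init__(self, model_name: str = "bert-base-multilingual-cased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        self.score_head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, query: str, doc: str) -> torch.Tensor:
        inputs = self.tokenizer(query, doc, truncation=True, return_tensors="pt")
        cls_emb = self.encoder(**inputs).last_hidden_state[:, 0]   # "[CLS]" embedding
        return self.score_head(cls_emb).squeeze(-1)                # relevance score
```

A bi-encoder, by contrast, encodes the query and the document separately and scores them with a dot product, which is the design followed by the dense retrieval models discussed in this paper.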
In this work, we focus on MLIR, a more general setting where the document collection comprises a diverse mix of languages, which is gaining increasing attention recently (Kang et al., 2019). While being able to bridge the translation gap between multiple languages, the model for MLIR task also needs to be language-agnostic when ranking documents in different languages. Our approach implicitly "translates" documents in different languages into an embedding space tuned for English retrieval. ### Multi-task & Prompt-based Learning The goal of multi-task learning is to leverage the shared underlying structure of different tasks to improve the performance of each task (Kang et al., 2019). A common approach is to transfer the knowledge from a model fine-tuned on multiple source tasks to the target task (Beng et al., 2019; Sasaki et al., 2019; Sasaki et al., 2019). For example, Aghajanyan et al. (2019) introduce a pre-finetuning stage that involves multi-task learning steps on diverse NLP tasks. They show that training stability can be improved by applying task-heterogenous batches with task-rebalancing loss scaling. Recent works show that the zero-shot and few-shot performance of pre-trained large language models can be boosted by prompted multi-task learning (Sandhi et al., 2019; Liu et al., 2019; Liu et al., 2019). For instance, Sanh et al. (Sanh et al., 2019) develop a system that maps any NLP task into a human-readable prompt form where each supervised dataset contains multiple prompts with diverse wording. The experiments imply that the multi-task training on these prompted datasets can improve the zero-shot performance of the pre-trained models. Other works (Sandhi et al., 2019) focus on zero-shot classification (ZAC), introducing a meta-tuning training paradigm to optimize the zero-shot classification objective via fine-tuning. They consolidate various classification tasks into a single QA format, compiling a dataset of classification tasks with human-authored prompts for meta-tuning. Soft Prompt tuning has shown great potential to adapt large language models to downstream tasks (Beng et al., 2019; Sasaki et al., 2019; Sasaki et al., 2019; Sasaki et al., 2019). Vu et al. (Vu et al., 2020) further study the generalizability and transferability of the soft prompts. They first learn a prompt on one or more source tasks and use it as the initialized prompt for a target task. The simple target prompt initialization method can match or outperform full fine-tuning across all model sizes. Asai et al. (2019) extend the work by training an attention module to interpolate the source prompts and newly initialized target prompt for each downstream task. During the multi-task training, only the target prompt and attention weights are updated, while the soft prompts and original language model's parameters are frozen. A recent approach (Wang et al., 2019) learns a transferable shared prompt by applying matrix decomposition and knowledge distillation from multiple source task-specific prompts and using the low-ranking matrix updating for target task adaption. KD-SPD builds upon the idea of prompt-oriented, parameter-efficient multi-task learning. It treats retrieval in each language as a distinct task while jointly modeling them to capture shared underlying structures. The primary insight is that languages, despite unique properties, share common features and concepts. We utilize decomposable prompts to model these aspects. 
Unlike conventional parameter-efficient approaches, experiments show that updating prompts jointly with model parameters enhances retrieval performance. ### Knowledge Distillation Proposed by Hinton et al. (2010), knowledge distillation is a method to train a model, called the student, using valuable information provided by the output of another model, called the teacher. This way, the teacher model's knowledge can be transferred into the student model. The idea of knowledge distillation is wildly used in the field of computer vision (Kang et al., 2019; Liu et al., 2019; Liu et al., 2019), natural language processing (Kang et al., 2019; Liu et al., 2019) and information retrieval (Sandhi et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). In the field of information retrieval, it is common for the teacher model to be a complex reranker model with higher capacity but lower efficiency compared to the efficient dual-encoder based student model. Santhanam et al. (Santhanam et al., 2019) apply the KL divergence loss to align query-document scores between teacher and student models. Another approach is balanced topic-aware query sampling (Sandhi et al., 2019), which shows further improvement on top of the original knowledge distillation loss. To address the performance gap, Zeng et al. (Zeng et al., 2019) propose a curriculum learning based knowledge distillation framework that trains a student model with increasing difficulty. In addition to monolingual retrieval, multilingual distillation frameworks have also been proposed. Li et al. (Li et al., 2019) explore using query-document scores as the distillation signals. The cross-lingual token alignment task has also been studied as an optimal transport problem, with Huang et al. (Huang et al., 2019) proposing a distillation framework to build a CLIR model via bitext data. Our model training framework is also an extension of knowledge distillation. A typical framework for knowledge distillation relies on a teacher model to solely provide target distributions (Sandhi et al., 2019; Liu et al., 2019). Our approach has different sources of knowledge: the major knowledge is from the teacher model, and we also consider the cross-lingual knowledge shared by the prompt matrix. Moreover, from the language perspective, rather than focusing on one CLIR task, our model simultaneously learns retrieval knowledge for multiple CLIR tasks. ## 3. Methodology Our goal is to incorporate the knowledge of query-document matching from a well-trained monolingual retrieval model into a multilingual transformer-based retrieval architecture, such that it is capable of generating contextual representations under the MLIR setting and thus performing query-document matching in different languages. In this section, we first define the MLIR task and outline our approach. Then we present the key component of our model: a soft prompt-based encoder-decoder architecture. Finally, we introduce the model training via a knowledge distillation framework and build the MLIR model with components from both the teacher and student models. Due to space limitations, we focus on the MLIR case of searching a multilingual collection with an English query as an example to describe our method. It is worth noting that English may also be included in the multiple collection. 
### Overview Given a query \(q\) in language \(X\) and a target collection \(\mathcal{D}_{Y}\) which contains documents in language set \(Y=\{Y_{1},Y_{2},\ldots Y_{K}\}\), suppose \(d_{ki}\)--the \(i^{\text{th}}\) document in language \(Y_{k}\)--has the ground truth relevance label \(Rel(q,d_{ki})\), then the aim is to design an MLIR model \(f\) that retrieves a list of documents from \(\mathcal{D}_{Y}\) such that \[f(q,d_{ki})\geq f(q,d_{lj}),\quad\forall\ Rel(q,d_{ki})\geq Rel(q,d_{lj}) \tag{1}\] where \(f(\cdot,\cdot)\) indicates the ranking score calculated by the model. To build model \(f\), we first assume there exists an oracle model \(g\) for the retrieval task in language \(X\). Thus, given \(q\) and monolingual collection \(\mathcal{D}_{X}\), \(g\) satisfies: \[g(q,d_{xi})\geq g(q,d_{xj}),\quad\forall\ Rel(q,d_{xi})\geq Rel(q,d_{xj}) \tag{2}\] We can achieve (1) with model \(f^{\prime}\) if for any \(d_{*}\) in \(Y\) and its translation \(d_{x}\) in \(X\), the model matches the oracle: \[f^{\prime}(q,d_{*})=g(q,d_{x}) \tag{3}\] Suppose both \(f^{\prime}\) and \(g\) follow the architecture of dense retrieval, the ranking score calculation is the dot-product of the query and document embeddings, thus: \[f^{\prime}_{E}(q)f^{\prime}_{D}(d_{*})^{\top}=g_{E}(q)g_{D}(d_{x})^{\top} \tag{4}\] where \(f^{\prime}_{E}\) and \(g_{E}\) are query encoders; \(f^{\prime}_{D}\) and \(g_{D}\) are document encoders for \(f^{\prime}\) and \(g\) respectively. We then reuse \(g_{E}\) as the query encoder of \(f^{\prime}\). With \(f^{\prime}_{E}=g_{E}\), we have: \[g_{E}(q)\big{(}f^{\prime}_{D}(d_{*})-g_{D}(d_{x})\big{)}^{\top}=0 \tag{5}\] It is safe to assume \(g_{E}(q)\) is a nonzero vector. Therefore the goal of finding \(f^{\prime}\) is equivalent to reducing the embedding distance between parallel documents. In our method, we retrain \(g_{D}\) as the teacher model by removing its parameters from the computational graph and train \(f^{\prime}_{D}\) as the student model. Note that in practice, the oracle model \(g\) does not exist. We can use an off-the-shelf English-to-English (monolingual) dense retrieval model as a substitute for \(g\). Because \(g_{D}\) is fixed, the essence of knowledge distillation training is to push multilingual document representations generated by \(f^{\prime}_{D}\) toward their corresponding English document representations generated by \(g_{D}\). Moreover, Equation (5) suggests that the training of \(f^{\prime}_{D}\) does not rely on either query \(q\) or ground truth relevant judgment. A group of parallel or comparable sentences from English to any other language involved in the collection is adequate to train \(f^{\prime}_{D}\). Parallel or comparable sentences between two languages are often referred as bitext data. Unlike multilingual retrieval data, which often require relevance labels, bitext data are easier to acquire, especially for low-resource languages (Zang et al., 2018; Zhang et al., 2019). ### Soft Prompt Decoder We focus on the design of the document encoder of the student model, \(f^{\prime}_{D}\), which handles multilingual documents. In general, the function of \(f^{\prime}_{D}\) is similar to a neural machine translation model. The difference is that \(f^{\prime}_{D}\) translates the input text into an embedding in the target language rather than natural language text. Thus, we build \(f^{\prime}_{D}\) based on the encoder-decoder architecture. 
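The relationship in Eq. (5) translates directly into a simple training step: the teacher document encoder \(g_{D}\) stays frozen, and the student \(f^{\prime}_{D}\) is pushed to reproduce, for a document in any language, the teacher's embedding of its English counterpart. The sketch below is a minimal illustration of that idea using the MSE objective introduced in Section 3.3; the encoder objects are hypothetical stand-ins, not the released ANCE or KD-SPD implementations.

```python
import torch
import torch.nn.functional as F

def distillation_step(student_doc_encoder, teacher_doc_encoder,
                      docs_other_lang, docs_english, optimizer):
    """One embedding-matching step: make f'_D(d_*) approach g_D(d_x) (cf. Eq. (5)).

    Both encoders are assumed to map a list of texts to a (batch, dim) tensor;
    `docs_other_lang[i]` and `docs_english[i]` form a bitext pair.
    """
    with torch.no_grad():                              # teacher g_D is frozen
        target = teacher_doc_encoder(docs_english)
    pred = student_doc_encoder(docs_other_lang)        # student f'_D is trainable
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time the teacher's query encoder g_E is reused unchanged, so the
# ranking score is the dot product g_E(q) · f'_D(d_*), as in Eq. (4).
```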
For the encoder component of \(f^{\prime}_{D}\), we exploit multilingual pre-trained language models (i.e., mBERT or XLM-R). The token representation generated by the encoder is then fed to the decoder component. However, unlike a decoder with an autoregressive generation process, we propose a soft prompt-based decoder (SPD) architecture. **Soft Prompt Matrix.** We consider \(f^{\prime}_{D}\) as a multitask model where translating (mapping) each language in the multilingual collection into the target language space is viewed as a single task. Using the language name as the task identifier, a prompt \(\mathbf{P}_{k}\in\mathbb{R}^{l\times d}\) for language \(Y_{k}\), with the same dimension \(d\) as the token embeddings and prompt length \(l\), is used as input to the decoder. Thus, the prompt matrix serves as the language-based decoding initialization vector. Inspired by the prompt decomposition from multitask prompt tuning (Song et al., 2019), we decompose \(\mathbf{P}_{k}\) into two parts, as shown in Figure 2(a): language-specific low-rank vectors \(\mathbf{u}_{k}\in\mathbb{R}^{l}\) and \(\mathbf{v}_{k}\in\mathbb{R}^{d}\) for language \(Y_{k}\), and a shared prompt \(\mathbf{P}^{*}\in\mathbb{R}^{l\times d}\) across all languages. The language-specific prompt can be parameterized as \(\mathbf{W}_{k}=\mathbf{u}_{k}\cdot\mathbf{v}_{k}^{\top}\), which has the same dimension as the shared prompt \(\mathbf{P}^{*}\). The final prompt \(\hat{\mathbf{P}}_{k}\) for language \(Y_{k}\) is then formulated as follows. \[\hat{\mathbf{P}}_{k}=\mathbf{P}^{*}\odot\mathbf{W}_{k}=\mathbf{P}^{*}\odot(\mathbf{u}_{k}\cdot\mathbf{v}_{k}^{\top}) \tag{6}\] where \(\odot\) denotes the Hadamard product between two matrices. The shared prompt enables efficient knowledge sharing across all source languages and captures commonalities across translation tasks. Meanwhile, the language-specific vectors still allow each translation task to maintain its own parameters to encode language-specific knowledge. Additionally, prior studies on multitask prompt learning also showed that soft prompts learned from multitask data can be efficiently transferred to a new task (Zhang et al., 2019; Zhang et al., 2019). In Section 5.5, we show that with a shared prompt, the SPD has a better zero-shot transfer ability toward new languages. Figure 2. SPD model architecture. **Cross-attention Decoder.** The decoder network follows a cross-attention-based multi-layer transformer architecture. Each layer has two sub-layers. The first is a multi-head query-key-value (QKV) cross-attention module, and the second is a position-wise fully connected feed-forward network. We employ a residual connection and layer norm around each of the sub-layers. Let \(\mathbf{T}_{d_{k}}\in\mathbb{R}^{|d_{k}|\times d}\) denote the token representations generated by the encoder component for document \(d_{k}\) in language \(Y_{k}\), where \(|d_{k}|\) is the number of tokens in \(d_{k}\). The first decoder layer applies the cross-attention module between \(\mathbf{T}_{d_{k}}\) and the prompt matrix \(\mathbf{\hat{P}}_{k}\). On the \(m\)th head, the attention mechanism is defined as follows: \[\text{Attention}_{m}=\text{Softmax}\Big{(}\frac{W_{m}^{q}\mathbf{\hat{P}}_{k}\cdot W_{m}^{k}\mathbf{T}_{d_{k}}}{\sqrt{d/M}}\Big{)}W_{m}^{o}\mathbf{T}_{d_{k}}\] where \(M\) is the number of heads and \(W_{m}^{q}\), \(W_{m}^{k}\), and \(W_{m}^{o}\) are matrices with dimension \(d/M\times d\).
Thus, the prompt matrix has different attention weights over the encoder token representations in each subspace projection (head). The output of the multi-head QKV cross-attention module is the concatenation of the \(M\) heads with a linear projection: \[\text{CrossAttention}(\mathbf{\hat{P}}_{k},\mathbf{T}_{d_{k}})=W^{o}\big{[}\text{Attention}_{1},\dots,\text{Attention}_{M}\big{]}\] We further define the output of the attention-based sub-layer with the residual connection and layer norm: \[\mathbf{h}_{d_{k}}=\text{LN}(\mathbf{\hat{P}}_{k}+\text{CrossAttention}(\mathbf{\hat{P}}_{k},\mathbf{T}_{d_{k}}))\] where \(\text{LN}(\cdot)\) denotes the layer norm operation. Because \(\mathbf{\hat{P}}_{k}\) is the query element in the cross-attention module, we use the prompt matrix to query the information from the encoder output and store it in a hidden representation \(\mathbf{h}_{d_{k}}\), which has the same dimension as the prompt matrix. Next, we apply the second sub-layer and generate the output of the first decoder layer for \(d_{k}\), \(\mathbf{H}_{d_{k}}^{1}\in\mathbb{R}^{l\times d}\): \[\mathbf{H}_{d_{k}}^{1}=\text{DecoderLayer}_{1}(\mathbf{\hat{P}}_{k},\mathbf{T}_{d_{k}})=\text{LN}(\mathbf{h}_{d_{k}}+\text{FFN}(\mathbf{h}_{d_{k}}))\] where \(\text{FFN}(\cdot)\) denotes the fully connected feed-forward network with a rectified activation function. Then we use the hidden representation from the previous layer (i.e., \(\mathbf{H}_{d_{k}}^{1}\)) to query the encoder output again in the next layer, that is: \[\mathbf{H}_{d_{k}}^{n+1}=\text{DecoderLayer}_{n+1}(\mathbf{H}_{d_{k}}^{n},\mathbf{T}_{d_{k}})\] until reaching the maximum layer \(N\) designed for the decoder. Finally, we average \(\mathbf{H}_{d_{k}}^{N}\) over the prompt vector dimension to obtain the document embedding in the target language space. A complete architecture of \(f_{D}^{\prime}\) is depicted in Figure 2(b). \[f_{D}^{\prime}(d_{k})=\text{MeanPool}(\mathbf{H}_{d_{k}}^{N})\] ### Multilingual Dense Retrieval **Knowledge Distillation Training.** Assume that \(d_{\text{En}}\) is the English version of \(d_{k}\). From the property of \(g\), we know that the document embedding of \(d_{\text{En}}\) generated by \(g_{D}\) contains rich knowledge for query-document matching in English. Equation (3) suggests that if we could let \(f_{D}^{\prime}\) "behave" like \(g_{D}\), namely, if for any \(d_{k}\) the output of \(f_{D}^{\prime}(d_{k})\) is close to the output of \(g_{D}(d_{\text{En}})\), then the document embedding generated by \(f_{D}^{\prime}\) can have a similar retrieval performance as \(g\) in the English domain. Therefore, we use the English document encoder \(g_{D}\) as the teacher model and our multilingual document encoder \(f_{D}^{\prime}\) as the student. During training, we define the distillation loss as the mean squared error (MSE) between the two embeddings and sample \(B\) examples from each language to form a batch. \[\text{loss}:=\frac{1}{KB}\sum_{k=1}^{K}\sum_{i=1}^{B}\|f_{D}^{\prime}(s_{ki})-g_{D}(e_{ki})\|^{2} \tag{7}\] where \(s_{ki}\) is a sentence in language \(Y_{k}\) and \(e_{ki}\) is its parallel (translation) in English. **Query-document matching.** In this section, we discuss an MLIR task of searching multilingual collections using an English query to introduce the KD-SPD framework. The query encoder in the final retrieval model can be directly copied from the teacher model in the English domain.
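The decoder just described can be condensed into a short PyTorch sketch. This is a reconstruction from Eq. (6) and the layer equations above, not the authors' implementation; the prompt length (30) and number of layers (6) follow Section 4.2, while the remaining sizes and all module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SPDLayer(nn.Module):
    """One decoder layer: cross-attention (prompt queries encoder tokens) + FFN,
    each wrapped with a residual connection and LayerNorm."""

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h, memory):
        attn_out, _ = self.cross_attn(query=h, key=memory, value=memory)
        h = self.norm1(h + attn_out)
        return self.norm2(h + self.ffn(h))

class SoftPromptDecoder(nn.Module):
    """Shared prompt P* modulated by rank-1 language factors (Eq. 6), followed by
    N cross-attention layers over the encoder token states and mean pooling."""

    def __init__(self, num_langs, d_model=768, prompt_len=30, n_layers=6, n_heads=12, d_ff=3072):
        super().__init__()
        self.shared_prompt = nn.Parameter(torch.randn(prompt_len, d_model))   # P*
        self.u = nn.Parameter(torch.randn(num_langs, prompt_len))             # u_k
        self.v = nn.Parameter(torch.randn(num_langs, d_model))                # v_k
        self.layers = nn.ModuleList([SPDLayer(d_model, n_heads, d_ff) for _ in range(n_layers)])

    def forward(self, token_states, lang_id):
        # token_states: (batch, seq_len, d_model) from the multilingual encoder (e.g. XLM-R)
        W_k = torch.outer(self.u[lang_id], self.v[lang_id])                   # W_k = u_k v_k^T
        prompt = self.shared_prompt * W_k                                     # P_hat_k = P* (Hadamard) W_k
        h = prompt.unsqueeze(0).repeat(token_states.size(0), 1, 1)
        for layer in self.layers:
            h = layer(h, token_states)                                        # prompt queries encoder output
        return h.mean(dim=1)                                                  # MeanPool -> document embedding
```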
Specifically, at test time, the matching score of \(q\) and \(d_{*}\) is calculated based on the dot-product between \(g_{E}\) and \(f_{D}^{\prime}\): \[f^{\prime}(q,d_{*})=g_{E}(q)f_{D}^{\prime}(d_{*})^{\top}\] An overview of our MLIR model building pipeline is shown in Figure 3. In fact, we can also apply KD-SPD to other language settings in MLIR task. For example, suppose the task requires searching an English collection using queries in multiple languages. In this case, KD-SPD can be built as a query encoder, and the retrieval model can reuse the teacher's document encoder. More generally, if the MLIR task involves a query language set \(\mathbf{X}\) and a collection language set \(\mathbf{Y}\), we can consider English as a bridge to build KD-SPD via two knowledge distillations: \(\mathbf{X}\) to English for query encoder and \(Y\) to English for document encoder. ## 4. Experimental Setup ### Dataset **Evaluation data.** We focus on retrieval from multilingual collections with English queries. To comprehensively evaluate model performance on this MLIR task, we create three test sets with various combinations of collection size, relevance distribution, and language settings. Note that some multilingual evaluation datasets have separate query sets per language, which does not thoroughly evaluate the MLIR performance. Thus, we focus on a setting where Figure 3. Model building pipeline for MLIR. the same set of test queries is evaluated on all languages in the collection. Table 1 shows the statistics of our evaluation datasets. * C200 topics, we only consider a topic with human-annotated relevant documents in all three languages as a valid query, leading to 133 queries in total. * **mTREC**. The query and relevance judgments are from the test split of the passage ranking task from the TREC 2020 Deep Learning Track (Han et al., 2020). There are three relevance judgment levels marked by 3,2,1. We build the multilingual collection from mMARCO (Chen et al., 2019), which is a machine-translated version of the MS MARCO passage collection (Friedman et al., 2017). We select translated passages in four languages: Arabic, Chinese, Russian, and Indonesian, to form a large-scale multilingual collection. Because translation leads to parallel relevant documents, this evaluation set allows us to study the effect of relevant distribution over languages. We first equally distributed relevant documents on each relevance level among four languages. In section 5.2, we explore biased relevant distribution. * **LAReQA**. LARQA (LARQA, 2019) is a benchmark for language-agnostic answer retrieval from a multilingual candidate pool. It is built based on two multilingual question-answering datasets: XQuAD (Chen et al., 2019) and MLQA (Lar et al., 2019). The query is formed using the question, and the collection is formed by breaking contextual paragraphs into sentences. Each query (question) appears in 11 different languages2 and has 11 parallel relevant sentences (answers). To match our MLIR setting, we evaluate English queries on a collection of sentences in 11 languages (including English). Footnote 2: Languages in LARQA (ISO code): ar, de, el, en, es, hi, ru, th, tr, vi, zh **Bitext training data.** To support the multilingual knowledge distillation, we use the parallel sentences from the CCAligned dataset (Larson et al., 2019). To train one KD-SPD model covering all three evaluation datasets (15 languages3), we sample 4 million parallel sentences per language except English. 
For English, to be consistent with other languages, we sample another 4 million sentences and pair each sentence with itself. Thus, our training data comprises 60 million sentence pairs in 15 languages. We append a language code to each sentence for SPD to identify the language of the input document. Footnote 3: List of training languages (ISO code): ar, de, el, en, es, fr, h, id, it, pt, ru, th, tr, vi, zh **Retrieval fine-tuning data.** For a competitive baseline, we further fine-tune mDPR (Zaman et al., 2019) baseline (see section 4.3.2) using cross-lingual triples from mMARCO (Chen et al., 2019). We sample 6 million cross-lingual triples per language to form a multilingual training set for languages in CLEF and mTREC. Because languages in LARQA are not fully covered by mMARCO, we use mDPR on LARQA without fine-tuning. Note that our KD-SPD model does not use this data. ### Implementation Details We initialize the encoder component of the SPD model using the pretrained XLM-R model (Chen et al., 2019) (base-sized) and the decoder component (including prompt matrices) using the Xavier initialization (Xu et al., 2019). We train the SPD as a student model using bitext data. To learn the retrieval knowledge in the English domain, we employ the document encoder of ANCE (Larson et al., 2019) as the teacher. When testing, the query encoder of the final model is also a reuse of the query encoder of ANCE (except in section 5.4, where we investigate the impact of different teachers). For hyper-parameters, we set the length of the prompt token vector \(l=30\) and the number of SPD decoding layers \(N=6\). We truncate the input sequence length at 180 tokens and sample 4 examples per language to build a mini-batch. The model is trained with a learning rate of \(2\times 10^{-5}\) for one epoch of all bitext data. For evaluation on the CLEF dataset, where the document length is usually longer than 180 tokens, we split long documents into overlapping passages of fixed length with a stride of 90 tokens and compute the score for each query passage pair. Finally, we select a document's maximum passage score as its ranking score (Larson et al., 2019). **Evaluation.** We examine the top 100 ranked documents and report comprehensive metrics, including mean average precision (MAP), normalized discounted cumulative gain (nDCG@10), precision (P@10), mean reciprocal rank (MRR), and recall (R@100). We determine statistical significance using the two-tailed paired \(t\)-test with p-value less than 0.05 (i.e., 95% confidence level). ### Compared Methods From a modeling perspective, we compare KD-SPD with both non-neural and neural approaches. From the system design perspective, we compare KD-SPD with end-to-end solutions and pipeline solutions via rank list merging. #### 4.3.1. Non-neural baselines For non-neural baselines, we generally consider a three-step pipeline to address MLIR. First, we break the collection into subsets by language and translate the query to each subset language. Since the translated queries and subset collection are in the same language, we then use a lexical-based sparse retrieval technique (e.g., BM25) to obtain a ranked list for each language. Finally, we merge language-specific ranked lists into a final ranked list. We investigate different strategies of translation and ranked list merging that we elaborate below. **SMT:** We translate the query based on a statistical machine translation (SMT) method. 
Specifically, we first build a translation table from the parallel corpus for each language pair using GIZA++ (Zaman et al., 2019). Then we select the top 10 translations from the translation table for each query term and apply Galago's4 _#combine_ operator to form a translated query. Finally, we run BM25 with default parameters to retrieve documents in the same language as the query translation. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset Statistics & CLEF & mTREC & LARQA \\ \hline Query size & 133 & 54 & 1,190 \\ Collection size & 241K & 35.2M & 13,014 \\ Languages in collection & 3 & 4 & 11 \\ Avg. \#d\({}^{*}\)/q & 13.5 & 66.8 & 1.0 \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of MLIR evaluation datasets. Avg. #d\({}^{+}\)/q denotes the average number of relevant documents per query **NMT:** We translate the query into collection languages using Google Translation5 (a neural-based commercial machine translation system). Then we run BM25 with default parameters to retrieve documents from each subset collection using the translated query. Footnote 5: [https://translate.google.com/](https://translate.google.com/) **+Round Robin:** We merge multiple rank lists in the round-robin style, that is, iteratively extracting the top-ranked document from \(K\) languages in random order to be the next \(K\) of the final rank list. **+Score:** We merge multiple rank lists by ranking scores generated by the retrieval component. Scores within each rank list are first min-max normalized to \([0,1]\). The non-neural baselines are the combination of translation with merging strategies: SMT+Round Robin, SMT+Score, NMT+Round Robin, and NMT+Score. #### 4.3.2. Neural baselines As a dense retriever, we compare KD-SPD with other dense retrieval methods in the following: **mDPR:** Models that follow the dense passage retriever (DPR) paradigm has proven to be effective for many retrieval tasks. Zhang et al. (Zhu et al., 2018) extended DPR to non-English languages by changing the underlying pre-trained language model from BERT to multilingual BERT (mBERT). We adopt the checkpoint of mDPR trained on MS MARCO dataset (Zhu et al., 2018). For CLEF and mTREC, which have fewer languages in the collections, we further fine-tune mDPR using the mMARCO dataset (Chen et al., 2019). We apply mDPR to MLIR in two ways: First, we break the MLIR task into multiple CLIR tasks by language and use mDPR to retrieve documents from subset collections. Then we merge the rank lists from different CLIR tasks, named mDPR+Round Robin and mDPR+Score, respectively. Second, we apply mDPR as an end-to-end solution for MLIR, in which we use it to directly index and search from the multilingual collection. **KD-Encoder:** There are methods that can transfer the knowledge from a model built for a monolingual task to a multilingual model, enabling it to address the same task in a multilingual setting. Reimers and Gurevych (Reimers and Gurevych, 2018) proposed a knowledge distillation method to create multilingual versions from the same monolingual models. We refer to this idea as the KD-Encoder and apply it to the MLIR task. To compare with our approach, we adopt the same teacher model and train KD-Encoder with the same bitext data. ## 5. Experimental Results ### Retrieval Performance Table 2 lists the evaluation results on the three MLIR datasets. Comparing non-neural approaches, given BM25 as the same retrieval component, we can see that methods based on NMT outperform those based on SMT. 
For document collections with mostly high-resource languages, NMT based method can also achieve higher nDCG, precision, and MRR scores than end-to-end neural approaches (i.e., NMT+Score on CLEF). It highlights that translation quality is an important factor in MLIR. Usually, for a pipeline approach, the error can accumulate for each step and lead to a sub-optimal result (Kolmogorov et al., 2017; Kolmogorov et al., 2018). In MLIR, without evaluating the content with respect to the query, merging rank lists only based on the score or rank within sub-collection will cause errors from multiple languages to accumulate. However, comparing the pipeline with the end-to-end approach of mDPR, we can see that end-to-end mDPR does not show a consistent advantage over the pipeline mDPR. There are two plausible reasons. First, similar to other multilingual models, mDPR based on a multilingual pre-trained language model also inherits the language bias in the pre-training step. Second, the fine-tuning steps of mDPR only focus on ranking documents within the same language space. Even trained with multilingual retrieval data, the candidate documents are still monolingual, and the score comparison is between two particular languages. These two reasons cause the ranking score generated by mDPR to be inconsistent across languages. Moreover, KD-Encoder performs better than mDPR on mTREC and LAReQA, On CLEF, it also scores higher nDCG, precision, and MRR than mDPR. Such results suggest that mapping parallel text from different languages to the same location in the vector space via knowledge distillation can efficiently transfer monolingual retrieval knowledge to multilingual settings. Finally, with the support of soft prompt decoding, KD-SPD achieves the best retrieval performance among all compared methods. In terms of precision-oriented metrics, it consistently and significantly outperforms both mDPR and KD-Encoder. ### Biased Relevant Distribution In the MLIR task, some queries strongly prefer one language over others and some do not. Thus, different queries tend to have different relevant document distributions among languages. 
This special \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Retrieval Method**} & \multicolumn{4}{c}{**CLEF**} & \multicolumn{4}{c}{**mTREC**} & \multicolumn{4}{c}{**LAReQA**} \\ \cline{2-13} & MAP & nDCG@10 & P@10 & MRR & R@100 & MAP & nDCG@10 & P@10 & MRR & R@100 & MAP & nDCG@10 & P@10 & MRR & R@100 \\ \hline SMT-Bound Robin & 0.1348 & 0.2540 & 0.2429 & 0.4017 & 0.3732 & 0.2042 & 0.0557 & 0.0630 & 0.1592 & 0.0778 & 0.2678 & 0.3858 & 0.2332 & 0.6610 & 0.4415 \\ SMT-Score & 0.1459 & 0.2737 & 0.2241 & 0.4679 & 0.3508 & 0.0187 & 0.0468 & 0.0688 & 0.1080 & 0.2690 & 0.3407 & 0.2128 & 0.6527 & 0.3506 \\ NMT-Round Robin & 0.1738 & 0.3732 & 0.3474 & 0.5798 & 0.4118 & 0.0653 & 0.1735 & 0.1870 & 0.3965 & **0.1872** & 0.5717 & 0.6178 & 0.5566 & 0.7179 & 0.3345 \\ NMT-Score & 0.1950 & 0.3806 & 0.3474 & 0.6140 & 0.4206 & 0.0522 & 0.1570 & 0.1685 & 0.3970 & 0.1691 & 0.5063 & 0.5671 & 0.5178 & 0.7091 & 0.8002 \\ \hline mDPR-Round Robin & 0.1823 & 0.3412 & 0.3165 & 0.5448 & 0.4330 & 0.0490 & 0.1358 & 0.1537 & 0.2913 & 0.1324 & 0.4935 & 0.5223 & 0.5163 & 0.6498 & 0.8394 \\ mDPR-Score & 0.1941 & 0.3433 & 0.3203 & 0.5364 & 0.4401 & 0.0492 & 0.1459 & 0.1574 & 0.3154 & 0.1300 & 0.4852 & 0.5142 & 0.4462 & 0.6452 & 0.8418 \\ mDPR & 0.2025 & 0.3466 & 0.3195 & 0.3567 & 0.4504 & 0.0549 & 0.1675 & 0.1870 & 0.3954 & 0.1291 & 0.4452 & 0.5031 & 0.4462 & 0.7653 & 0.7790 \\ KD-Encoder & 0.1973 & 0.3883 & 0.3594 & 0.3641 & 0.4135 & 0.0639 & 0.2208 & 0.2209 & 0.4356 & 0.1629 & 0.5931 & 0.5958 & 0.5730 & 0.7673 & 0.8058 \\ KD-SPD & **0.2200\({}^{\dagger\dagger}\)** & **0.4160\({}^{\dagger\dagger}\)** & **0.3714\({}^{\dagger\dagger}\)** & **0.6356\({}^{\dagger\dagger}\)** & **0.4689\({}^{\dagger\dagger}\)** & **0.0748\({}^{\dagger\dagger}\)** & **0.2414\({}^{\dagger\dagger}\)** & **0.2556\({}^{\dagger\dagger}\)** & **0.5067\({}^{\dagger\dagger}\)** & **0.1705\({}^{\dagger}\)** & **0.6263\({}^{\dagger\dagger}\)** & **0.6316\({}^{\dagger\dagger}\)** & **0.609\({}^{\dagger\dagger}\)** & **0.7904\({}^{\dagger\dagger}\)** & **0.8912\({}^{\dagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 2. A comparison of model performance. The highest value is marked with bold text. For KD-SPD, statistically significant improvements are marked by \(\dagger\) (over mDPR) and \(\ddagger\) (over KD-Encoder). feature requires the retrieval system to rank documents independent of their language. In the experiment on mTREC shown in Table 2, relevant documents were distributed equally among four languages for each query. The parallel translations of mTREC allow us to test with different relevant document distributions. In this section, we simulate the language preference in MLIR task: For each query, we first randomly select a language as the primary language and assign 60% of the top relevant documents (sorted by relevance judgment level) to that language. And the other three become the minor languages for this query, among which we equally distribute the remaining 40% of the relevant documents. Table 3 shows the results on biased distributed relevant documents of mTREC. As expected, the performance of methods based on round-robin merge drop significantly. The reason is that the rank list from minor languages introduces more errors compared to the scenario where languages are uniformly distributed. We can see that KD-SPD is also affected by the change in distribution yet still performs the best among all compared methods. 
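The biased-distribution setup above can be made concrete with a small helper. The sketch below is one way to realize the described procedure, assuming per-query lists of (doc_id, relevance_level) pairs; the exact tie-breaking, rounding, and random seed are assumptions rather than details given in the paper.

```python
import random
from collections import defaultdict

def bias_relevant_distribution(relevant_docs, languages, primary_share=0.6, seed=0):
    """Assign the relevant documents of one query to languages with a primary-language bias.

    `relevant_docs` is a list of (doc_id, relevance_level) pairs; a randomly chosen
    primary language receives the top `primary_share` of documents (sorted by level),
    and the remainder is split evenly (round-robin) among the other languages.
    """
    rng = random.Random(seed)
    primary = rng.choice(languages)
    minors = [lang for lang in languages if lang != primary]
    ranked = sorted(relevant_docs, key=lambda x: x[1], reverse=True)
    cut = int(round(primary_share * len(ranked)))
    assignment = defaultdict(list)
    assignment[primary] = [doc_id for doc_id, _ in ranked[:cut]]
    for i, (doc_id, _) in enumerate(ranked[cut:]):
        assignment[minors[i % len(minors)]].append(doc_id)
    return dict(assignment)
```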
### Analysis of Knowledge Distillation To study how SPD behaves after knowledge distillation, we compare the rank distance and score difference of parallel relevant documents in the rank lists generated by different models. In this experiment, again, we take advantage of parallel translations in mTREC and build _duplicate_ relevant documents in four languages. Thus, for each query, there are semantically similar relevant documents in different languages. Given a query, we locate all parallel relevant documents in four languages within the top 1,000 candidates from rank lists generated by mDPR, KD-Encoder, and KD-SPD, respectively. Then we compute the maximum rank distance and score difference among the four parallel documents. The equation to compute the score difference is as follows: \[\mathcal{S}=\frac{1}{|\mathcal{Q}|}\sum_{i=1}^{|\mathcal{Q}|}\frac{1}{| \mathcal{R}_{q_{i}}|}\sum_{d_{kj}\in\mathcal{R}_{q_{i}}}\Big{(}\max_{k\in Y} f^{\prime}(q_{i},d_{kj})-\min_{k\in Y}f^{\prime}(q_{i},d_{kj})\Big{)}\] where \(\mathcal{Q}\) is the query set, \(\mathcal{R}_{q_{i}}\) is the set of relevant documents for the query \(q_{i}\), and \(Y\) is the language set. The averaging rank distance can also be obtained in a similar way. Figure 4 shows the results averaged over 54 queries in mTREC. We can see that KD-SPD has the smallest rank distance and score difference over parallel documents. The rank and score of parallel documents reflect the language bias in MLIR models. Thus, KD-SPD is less biased toward languages when ranking documents from a multilingual collection. Moreover, because the query embedding is fixed given the same query, the low mean and standard deviation values indicate that KD-SPD is able to generate similar embeddings for parallel documents in different languages. This matches the model design purpose. ### Ablation Study In this section, we conduct experiments on two aspects that could affect the performance of KD-SPD: The number of layers in the decoder and the choice of the teacher model for distillation. **Decoder architecture.** Following the idea of weights share in Transformers (Han et al., 2017; Zhang et al., 2018), we replace the multi-layer (6-layer) decoder with a recurrent decoder block. Instead of \(N\) distinct layers, a decoder block has the same architecture as one decoder layer and is called recurrently for \(N=12\) steps. The weights of a decoder block are shared between steps. After each step, we add a temporal embedding \(\mathbf{\tau}\in\mathbb{R}^{l\times d}\) to the hidden states. \[\mathbf{H}_{d_{k}}^{n+1}=\mathbf{\tau}_{n}+\text{DecoderBlock}(\mathbf{H}_{d_{ k}}^{n},\mathbf{T}_{d_{k}})\] This approach significantly reduces the size of model parameters. Named universal transformer-based SPD (UTSPD), Table 4 shows its performance, compared to KD-Encoder and KD-SPD. We can see that only with 2.1% more parameters, KD-UTSPD performs better than KD-Encoder. By reducing the parameter size, we show that the performance gain in SPD mainly relies on the prompt design and decoder component based on the cross-attention module. Because reducing parameters limits the model's generalization ability, there is a performance drop from distinct layers to shared weights. **Teacher model** The teacher model bounds the retrieval performance of KD-SPD. We hypothesize that a better teacher model in the English domain can lead to a better SPD model for MLIR task. 
Based on the leaderboard of MS MARCO passage ranking, we replace ANCE (Zaman et al., 2017) with coCondenser (Zaman et al., 2017) for knowledge distillation. To be consistent with coCondenser, we also change the pre-trained multilingual language model used in SPD from XLM-R to mBERT. The evaluation of SPD trained with different teacher models is shown in Table 5. In general, KD-SPD learned from coCondenser performs better than the one learned from ANCE. This suggests that improvements with respect to the retrieval performance in the English domain can be transferred to MLIR task via KD-SPD. Figure 4. Parallel document analysis for MLIR models. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Retrieval Method**} & \multicolumn{5}{c}{**Biased mTREC**} \\ \cline{2-6} & MAP & mDCG@10 & F@10 & MRR & R@100 \\ \hline SMT-Round Robin & 0.0134 & 0.0304 & 0.0426 & 0.0759 & 0.0621 \\ SMT-Score & 0.0149 & 0.0356 & 0.0426 & 0.1038 & 0.0698 \\ NMT-Round Robin & 0.0331 & 0.0393 & 0.1278 & 0.2902 & 0.1500 \\ NMT+Score & 0.0438 & 0.1055 & 0.1389 & 0.3430 & **0.1751** \\ \hline mDPR+Round Robin & 0.0301 & 0.0902 & 0.1074 & 0.2922 & 0.1150 \\ mDPR+Score & 0.0516 & 0.1576 & 0.1778 & 0.3655 & 0.1206 \\ mDPR & 0.0580 & 0.1571 & 0.1759 & 0.3652 & 0.1174 \\ KD-Encoder & 0.0681 & 0.2028 & 0.2078 & 0.4055 & 0.1494 \\ KD-SPD & 0.0753\({}^{\dagger}\) & 0.2317\({}^{\dagger\ddagger}\) & 0.2352\({}^{\dagger\ddagger}\) & 0.4579\({}^{\ddagger}\) & 0.1684\({}^{\dagger\ddagger}\) \\ \hline \hline \end{tabular} \end{table} Table 3. Performance comparison of biased distributed relevant documents in mTREC. Significance tests are marked by \(\dagger\) (over mDPR) and \(\ddagger\) (over KD-Encoder). ### Zero-shot Transfer We explore the zero-shot ability of KD-SPD. For documents in languages that are not observed in the training data, we first define the language-specific vectors by averaging all trained language-specific vectors from known languages. Then KD-SPD follows the same steps as other languages to generate a prompt matrix for the new language. Hence, observed languages' knowledge transfers to the new language via the shared prompt matrix. In this study, we focus on Finnish as the target language and use a collection of 54,694 Finnish documents from the CLEF dataset. It's worth mentioning that Finnish, a member of the Uralic language family, is distinct from the 15 languages used in training. Among the 133 English queries in the CLEF dataset, 50 have relevant annotations in the Finnish collection, forming a new set of test queries. The results in Table 6 show the performance of KD-SPD in Cross-Language Information Retrieval (CLIR) between English and Finnish, and we observe that KD-SPD significantly outperforms other methods, demonstrating the transferability of knowledge from the prompt matrices to new languages. Next, we expand the evaluation to a more challenging setting, combining Finnish with German and Italian. The resulting collection contains both observed and unobserved languages. Table 7 shows KD-SPD's zero-shot performance in the multilingual information retrieval (MLIR) setting, where it still achieves the best results. This highlights KD-SPD's strong ability to transfer knowledge in a zero-shot scenario. ## 6. Conclusions and Future Work In this work, we presented a knowledge distillation (KD) framework based on soft prompt decoding (SPD) to address the multilingual information retrieval (MLIR) task. 
Using the soft prompt matrix as a task indicator, KD-SPD can implicitly translate documents from multiple languages into the same embedding space as the query language. We proposed prompt decomposition to enable efficient knowledge sharing across all source languages. Our knowledge distillation framework transfers knowledge from a well-trained monolingual retrieval model to KD-SPD, greatly reducing the retrieval data requirements for training MLIR models. Our comprehensive experimental results show that KD-SPD significantly outperforms other baselines on three qualitatively different MLIR evaluation datasets. Further analysis demonstrates that KD-SPD has less language bias and better zero-shot transfer ability toward new languages. For future work, as a general knowledge transfer framework, we are interested in extending KD-SPD to transfer other monolingual task-specific knowledge into the multilingual space. Exploring the applications of KD-SPD to multimodal information retrieval is also an exciting future direction. ###### Acknowledgements. This research is based upon work supported in part by the Center for Intelligent Information Retrieval, and in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Retrieval**} & \multicolumn{5}{c}{**CLEF Finnish**} \\ \cline{2-6} & MAP & nDCG@10 & P@10 & MRR & R@100 \\ \hline SMT & 0.0739 & 0.1179 & 0.0900 & 0.1390 & 0.1828 \\ NMT & 0.1613 & 0.2562 & 0.1560 & 0.4591 & 0.4251 \\ \hline mDPR & 0.1682 & 0.2143 & 0.1300 & 0.3095 & 0.5010 \\ KD-Encoder & 0.1845 & 0.2796 & 0.1920 & 0.4537 & 0.5237 \\ KD-SPD & **0.2286\({}^{\dagger\ddagger}\)** & **0.3321\({}^{\ddagger}\)** & **0.2229\({}^{\ddagger}\)** & **0.5092\({}^{\ddagger}\)** & **0.5958\({}^{\ddagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 6. Zero-shot CLIR: English-to-Finnish. Significance tests are marked by \(\dagger\) (over mDPR) and \(\ddagger\) (over KD-Encoder). \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **Parameter** & \multicolumn{5}{c}{**CLEF**} & \multicolumn{5}{c}{**LAReQA**} \\ \cline{3-12} & **Size** & MAP & nDCG@10 & P@10 & MRR & R@100 & MAP & nDCG@10 & P@10 & MRR & R@100 \\ \hline KD-Encoder & 278.6M & 0.1973 & 0.3883 & 0.3594 & 0.5641 & 0.4315 & 0.5931 & 0.6058 & 0.5730 & 0.7673 & 0.8805 \\ KD-SPD & 320.0M (+14.8) & 0.2200 (+11.5) & 0.4160 (+7.1) & 0.3714 (+3.3) & 0.6356 (+12.7) & 0.4689 (+8.7) & 0.6265 (+5.6) & 0.6316 (+4.2) & 0.6049 (+5.6) & 0.7904 (+3.0) & 0.8912 (+1.2) \\ KD-UTSPD & 284.5M (+2.1) & 0.2075 (+5.2) & 0.4023 (+3.6) & 0.3722 (+3.6) & 0.5964 (+5.7) & 0.4576 (+6.0) & 0.6212 (+4.7) & 0.6279 (+3.6) & 0.5996 (+4.6) & 0.7674 (+0.0) & 0.8870 (+0.7) \\ \hline \hline \end{tabular} \end{table} Table 4. Ablation I: Decoder architecture. The numbers in brackets are percentage differences relative to KD-Encoder.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Retrieval**} & \multicolumn{5}{c}{**CLEF Finnish**} \\ \cline{2-6} & MAP & nDCG@10 & P@10 & MRR & R@100 \\ \hline SMT-Round Robin & 0.1099 & 0.2245 & 0.208 & 0.4996 & 0.2909 \\ SMT-Score & 0.1269 & 0.2242 & 0.218 & 0.3726 & 0.2974 \\ NMT-Round Robin & 0.1263 & 0.2748 & 0.234 & 0.5399 & 0.3384 \\ NMT-Score & 0.1447 & 0.2806 & 0.258 & 0.5101 & 0.344 \\ \hline mDPR+Round Robin & 0.1481 & 0.2734 & 0.268 & 0.391 & 0.3974 \\ mDPR+Score & 0.1728 & 0.3002 & 0.282 & 0.4816 & 0.4083 \\ mDPR & 0.1952 & 0.3377 & 0.306 & 0.5175 & 0.4107 \\ KD-Encoder & 0.1953 & 0.4262 & 0.382 & 0.6735 & 0.4152 \\ KD-SPD & **0.2174\({}^{\ddagger}\)** & **0.4494\({}^{\dagger\ddagger}\)** & **0.4109\({}^{\ddagger}\)** & **0.7099\({}^{\ddagger}\)** & **0.4545\({}^{\ddagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 7. Zero-shot MLIR. Significance tests are marked by \(\dagger\) (over mDPR) and \(\ddagger\) (over KD-Encoder). \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Teacher**} & \multicolumn{5}{c}{**CLEF**} & \multicolumn{5}{c}{**LAReQA**} \\ \cline{2-11} & MAP & nDCG@10 & P@10 & MRR & R@100 & MAP & nDCG@10 & P@10 & MRR & R@100 \\ \hline KD-SPD (ANCE) & 0.2200 & 0.4160 & 0.3714 & 0.6356 & 0.4689 & 0.6265 & 0.6316 & 0.6049 & 0.7904 & 0.8912 \\ KD-SPD (coCondenser) & **0.2487\({}^{\ddagger}\)** & **0.4564\({}^{\ddagger}\)** & **0.4008\({}^{\ddagger}\)** & **0.6528\({}^{\ddagger}\)** & **0.4976\({}^{\ddagger}\)** & **0.6501\({}^{\ddagger}\)** & **0.6694\({}^{\ddagger}\)** & **0.6536\({}^{\ddagger}\)** & 0.8012 & 0.9172 \\ \hline \hline \end{tabular} \end{table} Table 5. Ablation II: Effect of Teacher model. Significance tests with respect to KD-SPD (ANCE) are marked by \(\ddagger\). No. 2019-19051600007 under Univ. of Southern California subcontract no. 124338456. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
2303.04857
Breaking Symmetries Leads to Diverse Quadrupedal Gaits
Symmetry manifests itself in legged locomotion in a variety of ways. No matter where a legged system begins to move periodically, the torso and limbs coordinate with each other's movements in a similar manner. Also, in many gaits observed in nature, the legs on both sides of the torso move in exactly the same way, sometimes they are just half a period out of phase. Furthermore, when some animals move forward and backward, their movements are strikingly similar as if the time had been reversed. This work aims to generalize these phenomena and propose formal definitions of symmetries in legged locomotion using group theory terminology. Symmetries in some common quadrupedal gaits such as pronking, bounding, half-bounding, and galloping have been discussed. Moreover, a spring-mass model has been used to demonstrate how breaking symmetries can alter gaits in a legged system. Studying the symmetries may provide insight into which gaits may be suitable for a particular robotic design, or may enable roboticists to design more agile and efficient robot controllers by using certain gaits.
Jiayu Ding, Zhenyu Gan
2023-03-08T19:48:43Z
http://arxiv.org/abs/2303.04857v3
# Breaking Symmetries Leads to Diverse Quadrupedal Gaits ###### Abstract Symmetry manifests itself in legged locomotion in a variety of ways. No matter where a legged system begins to move periodically, the torso and limbs coordinate with each other's movements in a similar manner. Also, in many gaits observed in nature, the legs on both sides of the torso move in exactly the same way, sometimes they are just half a period out of phase. Furthermore, when some animals move forward and backward, their movements are strikingly similar as if the time had been reversed. This work aims to generalize these phenomena and propose formal definitions of symmetries in legged locomotion using group theory terminology. Symmetries in some common quadrupedal gaits such as pronking, bounding, half-bounding, and galloping have been discussed. Moreover, a spring-mass model has been used to demonstrate how breaking symmetries can alter gaits in a legged system. Studying the symmetries may provide insight into which gaits may be suitable for a particular robotic design, or may enable roboticists to design more agile and efficient robot controllers by using certain gaits. ## I Introduction Legged locomotion has become an increasingly prominent area of research, as it has applications in biomechanics, robotics, and evolutionary biology. By studying how animals move, researchers can gain insight into the motion patterns of animals and develop better mobile robots. There has been considerable interest in the studies of the periodic motion patterns of legged systems, also known as _gaits_. These gaits have been found to have a number of distinct properties depending on the terrain, speed and size of the organism. It has been hypothesized that gaits play a key role in maintaining balance and stability, as well as saving energy by enabling animals to move in a more efficient manner at various speeds [1]. Pioneers such as Milton Hildebrand and Robert McNeill Alexander studied the locomotion of hundreds of quadrupedal vertebrates and discovered that these animals use a large variety of gaits depending upon their species and morphology [2, 3]. In order to reduce the complexity of the motions and compare them in a clear fashion, researchers usually distinguish the gaits using the concept of _symmetries_. There is more than one type of symmetry in legged locomotion. For example, Hildebrand used the phase delays among the leg pairs (front or hind) to categorize all the quadrupedal gaits. If the phase delay in the gait is equal to half of the stride cycle, he called it a symmetrical gait, otherwise, an asymmetrical gait [2]. He suggested that all the symmetrical gaits and asymmetrical gaits formed two distinct continua [4, 5] and animals like horses can smoothly switch from one gait to another if they are closely related. Another type of symmetry was brought up by Marc Raibert from his early studies of running legged robots [6]. He found that when the robot was moving backward, it was almost the same as moving forward in a time-reversed fashion. He also pointed out in his work that this type of symmetry is also abundant in animal locomotion such as human running or cat galloping [7]. Moreover, neurologists hypothesized that the rhythmic behavior of animals is created by neural oscillators called central pattern generators (CPG) [8, 9] and by varying the parameters in the neural network they are able to create different rhythms.
More recent studies on the CPGs used group theory and bifurcation theory to explain the symmetries in the oscillation modes of different gaits [10, 11, 12]. Because of the differences between the definitions and applications of symmetry in locomotion, it is sometimes difficult for people to communicate effectively. For example, the quadrupedal pronking gait in Hildebrand's definition is an asymmetrical gait. However, compared with some symmetrical gaits such as walking or trotting, in pronking all four legs move in a fully synchronized fashion. In some sense, pronking has more symmetries than other gaits. Also, even though Marc Raibert demonstrated the amazing time-reversal symmetries in his robots while running, it was still unclear whether or not the same symmetry can be applied to other gaits or other robots. Additionally, it is elegant to show the symmetries in neural oscillators using group theory; however, many symmetries in legged locomotion are deeply embedded in the nonlinear dynamic phenomena of mechanics. For instance, for the walking gait alone with the same footfall pattern, it has been shown that by using the spring-loaded inverted pendulum (SLIP) model, there exist distinct oscillation modes [13]. By only varying the values of states and the total energy, symmetries in these modes vary rapidly [14]. Fig. 1: (A) A1 quadrupedal robot from Unitree Robotics with bilateral symmetry and (B) a simplistic spring-mass model. In this study, we seek to develop a universal approach to study the symmetries in legged locomotion. We leverage the symmetry group theory proposed in [10, 11, 12] and generalize all the commonly observed symmetries in legged systems. Throughout this study, we will provide formal definitions for each type of symmetry in the existing literature using a general floating base model for the legged system and reveal the underlying relationship among these symmetries. Moreover, we will also provide one detailed example of a quadrupedal SLIP model and utilize numerical continuations to find a large variety of gaits. Through this process, we will demonstrate how the symmetries in gaits can be broken and how the breaking of symmetries leads to new gait patterns. Our study of the symmetries in locomotion will provide us with deeper insights into how legged animals move in nature; reduce the amount of computational time required to search for periodic patterns of locomotion; and facilitate the design of efficient robots by selecting more appropriate gaits based on their own structures. ## II Methods Symmetries of mechanical systems are a natural example of group action and they often provide a useful mechanism to reduce the complexity of the problem by reducing the number of variables. In this section, we seek to generalize the commonly discussed symmetries in a legged system and provide a systematic approach to analyze the symmetries of a specific gait pattern. Furthermore, we will introduce a simplistic model that can reproduce a large number of quadrupedal gaits and use it to demonstrate how symmetries break through parameter bifurcations. ### _Model of a Legged System and Gaits_ A quadrupedal legged system can be approximated by a floating-base model (FBM) consisting of rigid bodies with mass and inertia, connected via joints to form an open kinematic chain in 3-dimensional Euclidean space. Let \(\mathcal{Q}\) be the N-dimensional configuration space of the robot.
We use variables \(\mathbf{q}_{T}\!\!:=\![q_{x},q_{y},q_{z},q_{\text{yaw}},q_{\text{pitch}},q_{\text{roll}}]^{\intercal}\) to represent the Cartesian position of the torso's geometrical center in the inertial coordinate frame (I) and the torso's intrinsic Euler angles in the \(z\)-\(y\)-\(x\) order, respectively. For the \(i\)-th leg, the joint vector (shape coordinates) \(\mathbf{q}_{i}\) refers to the relative angles of the joints measured in their own body coordinate frames. The index \(i\in L=\{\text{LH},\text{LF},\text{RF},\text{RH}\}\) stands for the left hind, the left front, the right front, and the right hind legs. By collecting the leg configurations \(\mathbf{q}_{L}\!\!:=\![\mathbf{q}_{\text{LH}}^{\intercal},\mathbf{q}_{\text{LF}}^{ \intercal},\mathbf{q}_{\text{RF}}^{\intercal},\mathbf{q}_{\text{RH}}^{\intercal}]^{\intercal}\), the generalized coordinates of the FBM are aggregated in a single vector and defined as follows: \[\mathbf{q}\!\!:=\![\mathbf{q}_{T}^{\intercal},\mathbf{q}_{L}^{\intercal}]^{\intercal}\in \mathcal{Q}, \tag{1}\] **Remark 1**: _The head (anterior) direction of the system is indicated by \(q_{\text{yaw}}\) when it is in motion. Whenever the robot moves forward or backward, its velocity \(\mathbf{v}_{\text{B}}\) is in the sagittal plane. However, it can also be out of the plane when the robot moves sideways or diagonally, which can be captured by the angle \(\beta\) measured in the body frame (B). An example of a quadrupedal robot A1 developed by Unitree is shown in Fig. 1A._ By applying the Euler-Lagrange equation, the equations of motion of the FBM of a legged robot can be expressed in the following form: \[\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}+\mathbf{G}( \mathbf{q})=\mathbf{S}\mathbf{\tau}+\mathbf{J}^{\intercal}(\mathbf{q})\mathbf{\lambda}, \tag{2}\] where \(\mathbf{M}(\mathbf{q})\) is the inertia matrix; \(\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\) is the Coriolis matrix; and \(\mathbf{G}(\mathbf{q})\) is the gravitational vector. \(\mathbf{\tau}\) denotes the vector of joint motor torques and \(\mathbf{S}\) is the selection matrix that assigns motor torques to the generalized coordinates. When a leg is in stance, the resulting ground reaction force \(\mathbf{\lambda}\) is mapped to the joints through the Jacobian-transpose mapping \(\mathbf{J}^{\intercal}(\mathbf{q})\). The state space vector is taken as \(\mathbf{x}\!\!:=\![\mathbf{q}^{\intercal},\mathbf{\dot{q}}^{\intercal}]^{\intercal}\in \mathcal{X}\). With the model of a robot, we can give the following definition for gaits of a legged system: **Definition 1**: _A solution \(\mathbf{\varphi}(t):[t_{o},\infty)\to\mathcal{X}\) of the hybrid system \(\Sigma\) is periodic if there exists a finite \(T>0\) such that \(\mathbf{\varphi}(t+T)=\mathbf{\varphi}(t)\). A gait of a legged robot is a periodic orbit \(\mathcal{O}\subset\mathcal{X}\) if \(\mathcal{O}=\{\mathbf{\varphi}(t)\mid t\geq t_{o}\}\) for a periodic solution \(\mathbf{\varphi}(t)\)._ **Remark 2**: _The hybrid model \(\Sigma\) consists of a set of differential-algebraic equations \(\mathcal{F}\) describing the continuous dynamics of the robot in each domain and a set of impact maps \(\Delta\) that instantaneously reset the joint velocities. The impact surfaces are defined by the set \(\mathcal{C}\). The detailed explanations of a nonlinear hybrid system can be found in [15, Chapter 4.1]. 
An example of the hybrid system is provided in Section II-D._ ### _Symmetry of a Gait_ Since gaits are defined as periodic solutions, we can start generalizing the symmetries in legged locomotion as follows: **Definition 2**: _For any gait \(\mathcal{O}\), the symmetries form a group \(G\). A symmetry of \(G\) on the gait \(\mathcal{O}\) is an assignment of a function \(\mathcal{S}_{g}:\mathcal{O}\to\mathcal{O}\) to each element \(g\in G\) in such a way that:_ * _If_ \(I\) _is the identity element of the group_ \(G\)_, then_ \(\mathcal{S}_{I}\) _is the identity map, i.e., for every gait_ \(\mathcal{O}\) _we have_ \(\mathcal{S}_{I}(\mathcal{O})=\mathcal{O}\)_._ * _For any_ \(g,h\in G\)_, we have_ \(\mathcal{S}_{g}\circ\mathcal{S}_{h}=\mathcal{S}_{gh}\)_, i.e., for every gait_ \(\mathcal{O}\)_, we have_ \(\mathcal{S}_{g}\left(\mathcal{S}_{h}(\mathcal{O})\right)=\mathcal{S}_{gh}( \mathcal{O})\)_;_ * _For any_ \(g\in G\)_, there exists the inverse of_ \(g\) _such that_ \(\mathcal{S}_{g}\circ\mathcal{S}_{g^{-1}}=\mathcal{S}_{I}\)_;_ In the rest of this subsection, we will provide detailed definitions of the three commonly observed symmetries in locomotion. We call them torso rigid motion symmetry \(\xi\), leg permutation symmetry \(\sigma\), and time-reversal symmetry \(\psi\), respectively. #### Ii-B1 Torso Rigid Motion Symmetry For simplicity, we only consider the ground (moving surface) to be smooth and perpendicular to the gravitational field. **Definition 3**: _Given the assumptions above, for any gait \(\mathcal{O}\), the rigid motions of the torso, i.e., the special Euclidean group \(SE(2)\), form a symmetry subgroup \(G_{\xi}\subset G\). An action of \(G_{\xi}\) on the gait \(\mathcal{O}\) is an assignment of a function \(\mathcal{S}_{\xi}:\mathcal{O}\rightarrow\mathcal{O}\) to each element \(\xi\in G_{\xi}\) in such a way that: \[\mathcal{S}_{\xi}:[\boldsymbol{q}_{T}(t)^{\intercal},\boldsymbol{q }_{L}(t)^{\intercal},\boldsymbol{\dot{q}}_{T}(t)^{\intercal},\boldsymbol{ \dot{q}}_{L}(t)^{\intercal}]^{\intercal}\,, \tag{3}\] \[\mapsto[\mathbf{T}(\boldsymbol{q}_{T}(t))^{\intercal}, \boldsymbol{q}_{L}(t)^{\intercal},\mathbf{T}(\boldsymbol{\dot{q}}_{T}(t))^{ \intercal},\boldsymbol{\dot{q}}_{L}(t)^{\intercal}]^{\intercal}\] where \(\mathbf{T}\) is the transformation in \(SE(2)\) that can reset the torso's position \(q_{x},q_{y}\) and heading direction \(q_{\text{yaw}}\) in the inertial frame and the periodic motion is preserved. This is the most obvious symmetry in the system. Sometimes this is referred to as the group of linear transformations in mechanics since they are a change in coordinates without affecting the formula for the differential equations of motion [16]. One important symmetry based on this definition is that rotating the whole system in yaw \(q_{\text{yaw}}\) will not change the motion. Moreover, moving forward with \(q_{\text{yaw}}=0\) rad is the same as moving backward with \(q_{\text{yaw}}=\pi\) rad in the inertial frame (I). It is important to note that this group of symmetries applies to any gait of the legged system. #### Ii-B2 Leg Permutation Symmetry Since a gait is a periodic orbit of a hybrid system, the joints on the \(i\)-th leg \(\boldsymbol{q}_{i}(t)\) go through a cyclic motion as well with a period of \(T\), _i.e._, \(\boldsymbol{q}_{i}(t)=\boldsymbol{q}_{i}(t+nT),n\in\mathbb{Z}^{0+}\). 
Assume that all legs are identical and connected to the torso at the hip and shoulder joints such that the bilateral symmetry of the system with respect to the sagittal plane is retained. **Definition 4**: _Let \(s:=(t\mod T)/T,s\in[0,1)\) be a phase variable. The permutations \(\sigma\) of the index set of four legs \(L=\{\texttt{LH},\texttt{LF},\texttt{RF},\texttt{RH}\}\) with a phase shift \(s\) form a symmetry subgroup \(G_{\sigma}\). An action of \(G_{\sigma}\) on the gait \(\mathcal{O}\) is a function \(\mathcal{S}_{\sigma}\):_ \[\mathcal{S}_{\sigma}:[\boldsymbol{q}_{T}(t)^{\intercal}, \boldsymbol{q}_{L}(t)^{\intercal},\boldsymbol{\dot{q}}_{T}(t)^{\intercal}, \boldsymbol{\dot{q}}_{L}(t)^{\intercal}]^{\intercal}\,, \tag{4}\] \[\mapsto[\boldsymbol{q}_{T}(t)^{\intercal},\boldsymbol{q}_{\sigma(L)}(t+ sT)^{\intercal},\boldsymbol{\dot{q}}_{T}(t)^{\intercal},\boldsymbol{\dot{q}}_{\sigma(L)}(t+sT)^{ \intercal}]^{\intercal}\] _This definition incorporates one of the most famous symmetries defined by Hildebrand [2] for legged locomotion. For asymmetrical gaits in Hildebrand's definitions, all permutations of legs are only symmetric with respect to the whole stride cycle (\(s=0\)). In contrast, the symmetrical gaits, such as walking and trotting, have permutations of the front/hind leg pairs that are symmetric with respect to half of the stride cycle (\(s=0.5\)). Gaits with different footfall patterns usually have different numbers of permutations \(\sigma\) in this subgroup. It is possible to disrupt these symmetries in a variety of ways, such as when legs are no longer identical or when the locations of the legs on the body are no longer distributed symmetrically. Even if the body bilateral symmetry is preserved, it can be shown that by varying the total energy in the system, leg permutation symmetry becomes difficult to maintain, which results in new gaits with diverse footfall patterns. Similar definitions and properties of leg permutations can be found in [10, 11, 12]._ #### Ii-B3 Time-Reversal Symmetry A gait in our definition must be a periodic orbit, which means that all states except for \(q_{x}\) return to their original values after one stride. The periodicity of the states also implies the conservation of linear and angular momentum in each direction over every \(T\) seconds. 
\[\int_{t}^{t+T}\dot{q}_{x}(\tau)d\tau=q_{x}(t+T)-q_{x}(t)=\text{ stride length}, \tag{5}\] \[\int_{t}^{t+T}\dot{q}_{y}(\tau)d\tau=q_{y}(t+T)-q_{y}(t)=0,\] \[...\] Assume the energy in the system is reversible and the reset map is continuous, _i.e._, an identity map; then we have the following symmetry: **Definition 5**: _The time-reversal symmetry is an action of \(G_{\psi}\) on the gait \(\mathcal{O}\) endowed with a function \(\mathcal{S}_{\psi}\) that reverses the velocities:_ \[\mathcal{S}_{\psi}:[\boldsymbol{q}_{T}(t)^{\intercal}, \boldsymbol{q}_{L}(t)^{\intercal},\boldsymbol{\dot{q}}_{T}(t)^{\intercal}, \boldsymbol{\dot{q}}_{L}(t)^{\intercal}]^{\intercal}\,, \tag{6}\] \[\mapsto[\boldsymbol{q}_{T}(t)^{\intercal},\boldsymbol{q}_{L}(t)^ {\intercal},-\boldsymbol{\dot{q}}_{T}(t)^{\intercal},-\boldsymbol{\dot{q}}_{L}( t)^{\intercal}]^{\intercal}\] _Moreover, if there exists a finite \(t_{o}>0\) such that \(q_{x}(t_{o})=0\), \(\boldsymbol{q}_{T}(t_{o}+t)=\mathbf{T}_{\pi}(\boldsymbol{q}_{T}(t_{o}-t))\) and \(\boldsymbol{q}_{\sigma_{p}}(t_{o}+t)=-\boldsymbol{q}_{\sigma_{p}}(t_{o}-t)\), where \(\mathbf{T}_{\pi}\) represents a 180-degree rotation about the \(z_{B}\) axis and \(\sigma_{p}\in\){(LH,LF)(RH,RF), (LH,RF)(RH,LF)}, then the gait is _self-time-reversible_. This definition of symmetry has been extensively discussed by Marc Raibert in [6]. The rotation \(\mathbf{T}_{\pi}\) applied to the torso's states and the permutation of leg pairs \(\sigma_{p}\) suggest that when the \(x_{B}\) axis of the body coordinate frame is reversed, the system remains the same in the reversed motion, which means the anterior and posterior components in the system are mirrored with respect to the frontal plane. By setting \(t=0\), the self-time-reversible gait also suggests that \(q_{\text{pitch}}(t_{o})=0\) rad and \(\dot{q}_{z}(t_{o})=0\) m/s. In addition, the configurations of the front and hind legs should have the same magnitudes but opposite signs at \(t_{o}\). One example of this symmetry is shown in the bounding gaits in Fig. 3. ### _Symmetries in Common Quadrupedal Gaits_ In this subsection, we present the naming conventions of several common quadrupedal gaits in the animal locomotion literature by using their footfall patterns (Fig. 2) [2, 17] and illustrate the definitions of the symmetries in these gaits. The list of symmetries for these gaits is summarized in Table II. Fig. 2: Footfall patterns of four exemplary gaits. The colored bars represent the stance phase of each leg within a full stride cycle. LH, LF, RF, RH represent left hind, left front, right front, right hind legs, respectively. _Pronking_ is a quadrupedal gait often observed in legged animals wherein all legs are used in synchrony as shown in Fig. 2(A). Within one stride, there only exists one flight phase followed by a single stance phase where all four legs are in contact with the ground. This gait has the torso rigid motion symmetry \(\xi\) (changing the origin or moving direction will not affect the gait). Since the trajectories of all legs are identical in pronking, any rearrangement of the legs (\(\sigma\in\mathbb{S}_{L}\), where \(\mathbb{S}_{L}\) stands for all permutations of the set \(L\)) gives rise to a leg symmetry \(\sigma\) with \(s=0\). Additionally, pronking gaits usually have the symmetry \(\mathcal{S}_{\psi}\) that is characterized by the same magnitudes of the touch-down and lift-off angles and straight legs during the apex transition in flight when \(\dot{q}_{z}=0\), as discussed in [18]. 
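To make Definition 4 concrete, the sketch below checks numerically which leg permutations (with a phase shift \(s\)) leave a set of sampled leg-angle trajectories unchanged; for a pronking-like trajectory every permutation passes with \(s=0\). The toy trajectories, tolerance, and function names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from itertools import permutations

LEGS = ["LH", "LF", "RF", "RH"]

def has_leg_symmetry(traj, perm, s, tol=1e-9):
    """Check whether permuting the legs by `perm` and shifting the phase by
    `s` (fraction of the period) reproduces the same leg-angle trajectories.

    `traj[leg]` is an array of leg angles sampled uniformly over one period.
    """
    n = len(next(iter(traj.values())))
    shift = int(round(s * n)) % n
    return all(
        np.allclose(traj[a], np.roll(traj[b], -shift), atol=tol)
        for a, b in zip(LEGS, perm)
    )

# Toy pronking-like trajectory: all four legs identical.
t = np.linspace(0.0, 1.0, 200, endpoint=False)
alpha = 0.2 * np.cos(2 * np.pi * t)
pronk = {leg: alpha for leg in LEGS}

# Every permutation of the legs is a symmetry with s = 0.
print(all(has_leg_symmetry(pronk, p, s=0.0) for p in permutations(LEGS)))

# A trot-like toy pattern: diagonal legs identical, pairs half a period apart;
# swapping front and hind legs on each side is a symmetry with s = 0.5.
trot = {"LH": alpha, "RF": alpha,
        "LF": np.roll(alpha, 100), "RH": np.roll(alpha, 100)}
print(has_leg_symmetry(trot, ("LF", "LH", "RH", "RF"), s=0.5))  # True
```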
Another common gait is _bounding_ (Fig. 2(B)), which is characterized by identical motions for each leg pair (front or hind). This type of gait is used to cover ground quickly and efficiently, as the limbs move in unison and propel the body forward. Although this gait generally maintains the time-reversal symmetry \(\psi\), only leg permutation symmetries within leg pairs are retained (\(\sigma\in\{\)(LF,RF), (LH,RH), (LF,RF)(LH,RH)\(\}\)). The Cayley table for this gait is tabulated in Table III. As shown in Fig. 2(C), if only one pair of legs strikes the ground in quick succession while the other pair of legs continues to move in unison, the gait is usually referred to as _half-bounding_[4]. As a result, only one of the leg permutation symmetries is preserved in this case, \(\sigma=\) (LF,RF) or (LH,RH). _Galloping_ is a four-beat gait that is characterized by four distinct touchdowns. It is the fastest gait many terrestrial animals including horses can perform and is used for covering long distances quickly and efficiently. As shown in Fig. 2(D), there is no coupling between the legs in this gait (\(\sigma=\emptyset\)), and usually the time-reversal symmetry \(\psi\) is broken. However, under certain circumstances, there exist galloping gaits that retain this symmetry, which is crucial to maintaining the balance of the torso [6]. ### _Simplistic Quadrupedal Model_ In order to demonstrate the symmetries in a dynamical system, we developed a quadrupedal spring-loaded inverted pendulum (SLIP) model, which consists of a rigid torso with mass \(M\) and inertia \(J\) and four legs with concentrated mass \(m_{0}\) at the foot position, as shown in Fig. 1(B). The center of mass lies on the line between the shoulder and hip joints. It is located at a distance of \(l_{b,H}\) measured from the hip joint and a distance of \(l_{b,F}=l_{o}-l_{b,H}\) from the shoulder joint. The legs are modeled as linear springs with stiffness \(k\) and are connected to the torso through rotational springs with stiffness \(k_{\text{swing}}\). In order to have an energy-conservative system and avoid collision losses, we take the limit of the foot mass to zero and define the stiffness of the rotational springs to be \(k_{\text{swing}}=\omega_{\text{swing}}^{2}ml_{o}^{2}\)[19]. Due to the torso rigid motion symmetry \(\xi\), we can focus on the motion in the sagittal plane by reducing the configuration space to \(\mathbf{q}_{T}=[q_{x},q_{z},q_{\text{pitch}}]\) and \(\mathbf{q}_{L}=[\alpha_{\text{LH}},\alpha_{\text{LF}},\alpha_{\text{RF}}, \alpha_{\text{RH}}]\), where \(\alpha_{i}\) are local leg angles measured with respect to the torso. While in the flight phase, the legs are maintained at the resting length \(l_{o}\) and the torso is in free fall. When a leg is in contact with the ground, we assume there is no sliding motion of the foot, and the leg angle and leg length are computed from the following kinematic constraints: \[l_{i}(\mathbf{q}) =\frac{q_{z}-l_{b,i}\sin(q_{\text{pitch}})}{\cos(\alpha_{i}+q_{ \text{pitch}})}, \tag{7}\] \[p_{i}^{x}(\mathbf{q}) =q_{x}-l_{b,i}\cos(q_{\text{pitch}})+l_{i}(\mathbf{q})\sin(\alpha_{i} +q_{\text{pitch}})\] The contact Jacobian matrix of the \(i\)-th stance leg can be calculated as \(\mathbf{J}_{i}\coloneqq\frac{\partial\mathbf{g}_{i}}{\partial\mathbf{q}}\) where \(\mathbf{g}_{i}\coloneqq[l_{i}(\mathbf{q}),\,p_{i}^{x}(\mathbf{q})]^{\intercal}\). 
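The stance-leg kinematics in Eq. (7) and the associated contact Jacobian are straightforward to evaluate numerically. The sketch below implements \(\mathbf{g}_{i}=[l_{i},p_{i}^{x}]^{\intercal}\) for the planar model and approximates the corresponding block of \(\mathbf{J}_{i}=\partial\mathbf{g}_{i}/\partial\mathbf{q}\) by finite differences; it is an illustrative reconstruction with assumed parameter values and sign conventions, not the authors' code.

```python
import numpy as np

def g_i(x, l_b_i):
    """Stance-leg outputs g_i = [l_i, p_i^x] from Eq. (7).

    x = [q_x, q_z, q_pitch, alpha_i]: planar torso coordinates plus the
    i-th leg angle; l_b_i is the (signed) hip/shoulder offset from the COM.
    """
    q_x, q_z, q_pitch, alpha_i = x
    l_i = (q_z - l_b_i * np.sin(q_pitch)) / np.cos(alpha_i + q_pitch)
    p_x = q_x - l_b_i * np.cos(q_pitch) + l_i * np.sin(alpha_i + q_pitch)
    return np.array([l_i, p_x])

def contact_jacobian(x, l_b_i, eps=1e-7):
    """Central finite-difference approximation of the 2 x 4 block of
    J_i = d g_i / d q associated with [q_x, q_z, q_pitch, alpha_i];
    its columns sit at the corresponding entries of the full q."""
    J = np.zeros((2, 4))
    for k in range(4):
        dx = np.zeros(4)
        dx[k] = eps
        J[:, k] = (g_i(x + dx, l_b_i) - g_i(x - dx, l_b_i)) / (2 * eps)
    return J

# Illustrative evaluation in dimensionless units (l_o = 1): a hind leg with an
# assumed offset of 0.5 behind the COM, a small leg angle, and a level torso.
x = np.array([0.0, 0.95, 0.0, 0.2])
print(contact_jacobian(x, l_b_i=-0.5))
```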
By combining (2) and the corresponding Jacobian matrices, we obtain the following equations for the closed-loop system: \[\mathcal{F}\coloneqq\begin{bmatrix}\mathbf{M}(\mathbf{q})&-\mathbf{J}^{\intercal}(\mathbf{q}) \\ \mathbf{J}(\mathbf{q})&\mathbf{0}\end{bmatrix}\begin{bmatrix}\ddot{\mathbf{q}}\\ \mathbf{\lambda}\end{bmatrix}=\begin{bmatrix}-\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}-\mathbf{G}(\mathbf{q})\\ -\dot{\mathbf{J}}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}\end{bmatrix} \tag{8}\] where \(\mathbf{\lambda}_{i}\coloneqq[f_{i}^{x},\,f_{i}^{z}]^{\intercal}\) contains the horizontal and vertical ground reaction forces. It is assumed that every leg must touch down (TD) and lift off (LO) the ground once within one stride. Denote the timings of these two events for the \(i\)-th leg as \(t_{i}^{\text{TD}}\) and \(t_{i}^{\text{LO}}\). These eight timing variables of the four legs are functions of the states \((\mathbf{q},\dot{\mathbf{q}})\), and the impact surfaces are defined by the following sets: \[\mathcal{C}_{i}^{\text{TD}} =\{(\mathbf{q},\dot{\mathbf{q}})\in\mathcal{T}\mathcal{Q}\ |\ t=t_{i}^{\text{TD}}(\mathbf{q},\dot{\mathbf{q}}),\,l_{i}(\mathbf{q})=l_{o}\ \} \tag{9}\] \[\mathcal{C}_{i}^{\text{LO}} =\{(\mathbf{q},\dot{\mathbf{q}})\in\mathcal{T}\mathcal{Q}\ |\ t=t_{i}^{\text{LO}}(\mathbf{q},\dot{\mathbf{q}}),\,l_{i}(\mathbf{q})=l_{o}\ \}\] that naturally divide all gaits into 9 domains (\(j=1,2,\ldots,9\)): \[\Sigma:\left\{\begin{array}{c}\mathcal{F}_{j},\ (\mathbf{q},\dot{\mathbf{q}})\not \in\mathcal{C}_{j};\\ \dot{\mathbf{q}}^{+}=\Delta_{\mathcal{F}_{j}\to\mathcal{F}_{j+1}}\dot{\mathbf{q}}^{ -},\ (\mathbf{q},\dot{\mathbf{q}})\in\mathcal{C}_{j};\end{array}\right. \tag{10}\] in which \(\Delta\) is the impact map that instantaneously resets the pre-impact joint velocities \(\dot{\mathbf{q}}^{-}\) to the post-impact velocities \(\dot{\mathbf{q}}^{+}\) for the next domain [15, Chapter 3]. ## III Results In this section, we provide our main findings and demonstrate that different parameter bifurcations result in the breaking of symmetries of a simplistic model in distinct ways, which leads to diverse quadrupedal gaits with various footfall patterns. More specifically, we conducted numerical continuations developed in our previous work [19] and searched for periodic solutions of the model proposed in Section II-D. Two parameters have been investigated in this work: the total energy \(E(\mathbf{q},\dot{\mathbf{q}})\) stored in the system and the COM location \(l_{b}\) of the torso. All other system parameters were fixed in this study with \(k=10\left[\nicefrac{{mg}}{{l_{o}}}\right]\), \(\omega=20\left[\sqrt{\nicefrac{{g}}{{l_{o}}}}\right]\), and \(J=2.5\)\([ml_{o}^{2}]\) according to the values used in animals [20, 21, 22] for simplicity. To investigate the gaits from a symmetric model, we set the parameters for all four legs to be identical and placed the COM in the center of the torso. After that, we varied the COM location and analyzed its influence on the viable gaits. The detailed results are described in Sections III-A and III-B, respectively. 
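As an illustration of how Eq. (8) is used during a simulation step, the sketch below assembles the block system and solves it for the accelerations and contact forces at one instant. The mass, Coriolis, gravity, and Jacobian terms are passed in as generic arrays, the numerical values are placeholders, and this is not the authors' implementation.

```python
import numpy as np

def stance_dynamics(M, C, G, J, Jdot, qdot):
    """Solve Eq. (8) for (qddot, lambda) given the current dynamics terms.

    M: (n, n) inertia matrix, C: (n, n) Coriolis matrix, G: (n,) gravity
    vector, J: (m, n) stacked contact Jacobian of the stance legs,
    Jdot: (m, n) its time derivative, qdot: (n,) generalized velocities.
    """
    n, m = M.shape[0], J.shape[0]
    A = np.block([[M, -J.T],
                  [J, np.zeros((m, m))]])
    b = np.concatenate([-C @ qdot - G, -Jdot @ qdot])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]   # accelerations, ground reaction forces

# Placeholder example: a 3-DoF planar torso with one stance leg (m = 2 constraints).
n, m = 3, 2
M = np.diag([1.0, 1.0, 2.5])
C = np.zeros((n, n))
G = np.array([0.0, 9.81, 0.0])
J = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, -0.5]])
Jdot = np.zeros((m, n))
qddot, lam = stance_dynamics(M, C, G, J, Jdot, np.zeros(n))
print(qddot, lam)
```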
To simplify the numerical calculations and the analysis of the periodic orbits, we selected the apex transition in the flight phase as the Poincaré section \(\mathcal{P}=\{(\mathbf{q},\dot{\mathbf{q}})\in\mathcal{T}\mathcal{Q}\mid\dot{q}_{z}=0,\ \ddot{q}_{z}<0,\ (t_{i}^{\text{TD}}(\mathbf{q},\dot{\mathbf{q}})\mod T)<(t_{i}^{\text{LO}}(\mathbf{q},\dot{\mathbf{q}})\mod T)\ \}\). In the following figures, the fixed points \(\mathcal{P}^{*}{:=}\mathcal{P}(t)=\mathcal{P}(t+T)\) on the Poincaré section, _i.e._, periodic orbits \(\mathcal{O}\), are visualized using the torso's horizontal velocity \(\dot{q}_{x}\) and pitching velocity \(\dot{q}_{\rm{pitch}}\). ### _Symmetries in Gaits of a Symmetric Model_ In this section, we assume the model has two identical ends where the front and hind body lengths are the same, \(|l_{b,F}|=|l_{b,H}|\). We conducted numerical continuations using a predictor-corrector method to find periodic orbits starting from pronking gaits. This process is similar to the continuation approach presented in [23]. #### Iii-A1 Pronking (PK) This is the easiest gait to identify because of the synchronized motions of all four legs. Several successive keyframes of a pronking gait with a speed of \(5.82[\sqrt{gl_{o}}]\) at the moments of touch-down and lift-off are depicted in Fig. 3(a). As anticipated, this gait has the leg symmetry \(\sigma\) with \(s=0\) and the torso has no pitching motion. Furthermore, at the apex transition, all legs are oriented perpendicular to the ground, and the magnitudes of the touch-down and lift-off angles are identical, indicating the time-reversal symmetry \(\psi\). All pronking gaits with different average speeds from the numerical search are plotted as the blue curve (1-dimensional branch in \(\mathcal{T}\mathcal{Q}\)) in Fig. 3. It starts at a minimum speed of 0 (jumping in place) and reaches a maximum speed of \(20.42[\sqrt{gl_{o}}]\). #### Iii-A2 Bounding (BG and BE) According to Section II-C, bounding gaits involve a long stride with both pairs of legs moving together, which means that the coupling of the front and hind legs is broken, allowing the legged system to cover more ground with each stride. There are two types of solutions from our model that follow this pattern: (1) During one bounding gait, as shown in Fig. 3(b), the hind leg pair touches the ground first, followed by the front legs, resulting in leg pairs being collected inward during the flight phase. In the named gaits, it is called _bounding with gathered suspension_[4], and we refer to it as BG for short hereafter. This branch of solutions has been drawn as the solid red curve in Fig. 3, which bifurcates from the PK branch at \(\dot{q}_{x}=4.43\)\([\sqrt{gl_{o}}]\) and \(\dot{q}_{x}=17.42\)\([\sqrt{gl_{o}}]\) (black dots A and B). (2) As shown in Fig. 3(c), another type of bounding gait is characterized by the reverse sequence of touch-downs, in which the front legs are the first to touch the ground, followed by the hind legs. As a result, the swing leg pairs extend outward during the flight phase, known as _bounding with extended suspension_ (BE). In Fig. 3, these solutions are represented by red dashed curves that are connected to the pronking branch at the same bifurcation points (A and B). While the footfall pattern and swing leg behavior of these two bounding gaits differ, they share the same symmetry. Both BG and BE have the same number of leg permutation symmetries as tabulated in Table III. 
Firstly, by comparing with the PK branch discussed in the previous section, it is the breaking of the symmetry in the leg pair motions \(\sigma=\) (LH,LF)(RF,RH) that results in the bounding gaits with two distinct footfall patterns. Because the phase delays in the leg pairs can be either positive or negative, two new gaits emerge from the bifurcation points A and B. Second, as shown by the keyframes in Fig. 3(b)&(c), the touch-down angle of one leg pair has the same magnitude as the lift-off angle of the other leg pair, _i.e._, \(\alpha_{\rm{H}}(t_{\rm{H}}^{\rm{TD}})=-\alpha_{\rm{F}}(t_{\rm{F}}^{\rm{LO}})\) and \(\alpha_{\rm{F}}(t_{\rm{F}}^{\rm{TD}})=-\alpha_{\rm{H}}(t_{\rm{H}}^{\rm{LO}})\) (H stands for hind legs and F for front legs). The \(2\)-nd (\(3\)-rd) frame in Fig. 3(b)&(c) is just the mirrored image of the \(5\)-th (\(4\)-th) frame.
In fact, as shown in Fig 4, the gait branch HG (yellow solid curve) emerges from point C on BG at \(\dot{q}_{x}=4.56\ [\sqrt{gl_{o}}]\), while the branch HE (yellow dash curve) originates at point D on BE with a speed of \(\dot{q}_{x}=6.05\ [\sqrt{gl_{o}}]\). As the solutions along the branches HG and HE moves closer to point A, all four legs tend to synchronize. These two gaits eventually reach the pronking branch and join together at point A. On the other hand, if the desynchronization happens in the front leg pair, the other two half-bounding gaits with spread _front legs & **g**athered suspension_ (FG) and with spread _front legs & **e**xtended suspension_ (FE) are identified from the proposed model. These two branches are visualized as green solid/dash curves in Fig 4 and they are also connected to the bounding gaits BG and BE at the bifurcation point E and F with a forward speed of \(5.74\ [\sqrt{gl_{o}}]\) and \(4.80\ [\sqrt{gl_{o}}]\) respectively. Among each half-bounding gait, only one leg permutation symmetry is retained _i.e._, \(\sigma=\) (LF,RF) or (LH,RH). The break in symmetry in one leg pair causes phase delays that ultimately lead to the half-bounding gait with different leading and trailing legs. There is a similarity between this process and the bipedal skipping gaits discussed extensively in our previous work [23]. We omit them here for the sake of simplicity. Within the half-bounding gait, time-reversal symmetry \(\psi\) ceases to exist. The Fig 4(d)-(f) also shows that the front and back leg orientations are not mirrored at the apex transitions, which means the motion will not be identical under the time-reversal symmetry \(\psi\). In this sense, reversing the signs of velocities no longer results in periodic motions. #### Iii-B4 Galloping in a galloping gait, the motion of all four legs are desynchronized. A variety of galloping gaits exist, each with a distinct football pattern and aerial suspension [5]. In order to simplify the discussion, we will refer to the galloping gaits identified in the proposed model as _galloping with gathered suspension_ (GG) and _galloping with extended suspension_ (GE). Fig. 4 illustrates these solutions as purple curves that connect to the half-bounding HG and FE at the bifurcation points F and G at the horizontal speed of \(\dot{x}=5.99\ [\sqrt{gl_{o}}]\) and \(\dot{x}=6.18\ [\sqrt{gl_{o}}]\), respectively. Galloping is the least symmetrical gait due to the loss of all symmetries in leg motions. The leg permutation symmetry no longer exists for any galloping gaits. However, the leg configurations of the front and back legs of one unique solution can still have the same magnitude but opposite signs, where \(q_{\text{pitch}}=0\) at the apex transition, meaning the solution still retains the time-reversal symmetry \(\psi\) (shown as (h) in Fig. 4). A similar galloping gait with the time-reversal symmetry also emerges on the GE branch with extended suspension and is shown in Fig. 4 (i). ### _Symmetries in Gaits of an Asymmetrical Model_ There is more than one way to break the symmetries in gaits. In this section, we examine how breaking the geometrical symmetry in the proposed model affects the symmetries in gaits. More specifically our results demonstrate that when the COM is moved away from the center of the body, the pronking and bounding gait branches from the previous section change rapidly. Fig. 4: This figure shows the gait branches of half-bounding and galloping from the proposed model. 
The bottom plots illustrate six exemplary gaits from the gait branches: (d)–(g) are half-bounding gaits with different spread legs and suspensions; (h) and (i) are galloping with gathered/extended suspensions. The left legs are shown in transparent color. Black feet are used to highlight the legs in stance. Keyframes of the apex transition and the touch-downs are drawn in the figure. As shown in Fig. 5, two models with \(l_{b,H}=0.47\)\([l_{o}]\) and \(l_{b,H}=0.53\)\([l_{o}]\) have been tested. A trapezoidal shape is used to depict the torso, with the COM located closer to the side with the longer base, in order to distinguish the results clearly. When moving forward (\(\dot{q}_{x}>0\)), at low speeds, the pronking branch PK disappears for the model with \(l_{b,H}=0.47\)\([l_{o}]\). Because the lever arm lengths are different for the four legs, the leg forces will no longer generate zero torque on the torso during the stance phases if all four legs are fully synchronized. Instead, a bounding-like gait shows up at low speeds (solid orange curve in Fig. 5), and abundant solutions can be found with gathered aerial suspension. One of the periodic solutions on this branch is shown in Fig. 5(h), which is strikingly similar to the BG branch of the fully symmetric model (transparent red curve). From this model with \(l_{b,H}=0.47\)\([l_{o}]\) we can still find bounding gaits with extended suspension. The number of solutions is, however, very limited, as they shrink to a small loop near the speed of \(6\)\([\sqrt{gl_{o}}]\), which is shown by the dashed orange curve in Fig. 5. On the other hand, the model with the COM location closer to the front (\(l_{b,H}=0.53\)\([l_{o}]\)) demonstrates the opposite preference on the viable gaits. In this case, a lot more bounding gaits with extended suspension are found, whereas bounding with gathered suspension only appears near the speed of \(6\)\([\sqrt{gl_{o}}]\). Additionally, for the models with \(l_{b,H}<0.46\)\([l_{o}]\), the bounding gaits with extended suspension vanish when moving forward, and the model can only perform bounding with gathered suspension. Similarly, for the models with \(l_{b,H}>0.54\)\([l_{o}]\), bounding with extended suspension is the only viable bounding gait and the one with gathered suspension is no longer available. Since these solutions are still bounding gaits, the symmetries in the leg pair motions \(\sigma=\) (LF,RF)(LH,RH) are still preserved. If the time-reversal symmetry \(\psi\) is applied to the bounding gait of the model with \(l_{b,H}=0.47\)\([l_{o}]\), we can find new periodic motions. Notice that the new gaits from the model with \(l_{b,H}=0.47\)\([l_{o}]\) moving backward are identical to the gaits from the model moving forward with \(l_{b,H}=0.53\)\([l_{o}]\) under the time-reversal symmetry \(\psi\) (shown in Fig. 5(j) and (k)). However, in these gaits, during the apex transitions, the pitching angles are not zero and the magnitudes of the leg angles are no longer the same. This means the time-reversal symmetry \(\psi\) is no longer retained by the model with \(|l_{b,i}|\neq 0.5\)\([l_{o}]\). Similar cases for \(l_{b,H}=0.53\)\([l_{o}]\) and \(l_{b,H}=0.47\)\([l_{o}]\) that use bounding with extended suspension are also shown in Fig. 5(l) and (m). ## IV Conclusions In this work, we systematically investigated the symmetries in quadrupedal legged locomotion using group theory. To do this, we defined gaits as periodic solutions of a hybrid system. 
Three types of symmetries observed in legged animals and robots were discussed. The first one is the torso rigid motion symmetry \(\xi\), which is the translational and rotational symmetry of the torso under linear transformations. All gaits from the hybrid models are endowed with this symmetry. The leg permutation symmetry \(\sigma\) is the most commonly observed symmetry in quadrupedal gaits, which generalizes Hildebrand's definitions of symmetrical and asymmetrical gaits. According to Definition 4, one can clearly count the elements in this subgroup and even compare the number of symmetries in all quadrupedal gaits. In this sense, the pronking gait is the most symmetrical gait according to Table II. The last one is the time-reversal symmetry \(\psi\), which suggests that for every gait identified from the system with a forward motion, there may exist such a gait moving backward by simply reversing the speeds. Theoretically, this symmetry only exists in conservative systems without collision losses. As we have shown in the quadrupedal SLIP model, if the legs are assumed to be massless and the swing feet can match the ground speed at the moment of touchdown [24], this symmetry can be perfectly reproduced in simulations. Additionally, we showed that when the front and rear ends of the system are identical, moving forward and moving backward are identical motions under a rotational transformation. This is referred to as self-time-reversible in Definition 5. Last but not least, we showed that all three types of symmetries form a group \(G\), as shown in the Cayley table in Table I. We also developed a SLIP model and conducted numerical bifurcations to illustrate how breaking symmetries can lead to changes in the existence of gaits in a legged system. Fig. 5: This figure shows bounding gait branches for models with three \(l_{b,H}\) values. Transparent blue and red curves represent pronking and bounding gaits with the COM located at the center of the torso, \(l_{b}=0.5\)\([l_{o}]\) (same as Fig. 3). In the bottom, four bounding gaits from two models are shown: (j) bounding gathered with \(l_{b,H}=0.47\)\([l_{o}]\), (k) bounding gathered with \(l_{b,H}=0.53\)\([l_{o}]\), (l) bounding extended with \(l_{b,H}=0.53\)\([l_{o}]\), and (m) bounding extended with \(l_{b,H}=0.47\)\([l_{o}]\). It has been shown that desynchronization of the motions of two leg pairs results in two bounding gaits with gathered/extended suspensions from the pronking gait (Fig. 3). Additionally, if only one leg pair starts moving out of phase, it simultaneously breaks the symmetries of leg permutation and time-reversal, resulting in four half-bounding gaits (Fig. 4). Moreover, we demonstrated that there is more than one way to break the symmetries in gaits. The change of geometry of the torso, for instance, can also cause the system to lose time-reversal symmetry, ultimately affecting viable gaits at a given speed (Fig. 5). Analysis of the symmetries in legged locomotion can potentially provide insight into why animals of different morphologies prefer certain gaits and how they can transition between gaits at given speeds. It has also been shown that symmetries are important for locomotion controller design in stabilizing legged systems [18, 25]. Furthermore, studies on the symmetries may shed light on which gaits may be appropriate for a particular robot design, or they can provide guidance for roboticists in designing more agile and energy-efficient legged robots by using certain gaits.
2307.07671
Saudi Arabian Perspective of Security, Privacy, and Attitude of Using Facial Recognition Technology
Facial Recognition Technology (FRT) is a pioneering field of mass surveillance that sparks privacy concerns and is considered a growing threat in the modern world. FRT has been widely adopted in the Kingdom of Saudi Arabia to improve public services and surveillance. Accordingly, the following study aims to understand the privacy and security concerns, trust, and acceptance of FRT in Saudi Arabia. Validated Privacy Concerns (IUIPC-8), Security Attitudes (SA-6), and Security Behavior (SeBIS) scales are used along with replicate studies from Pew Research Center trust questions and government trust questions. In addition, we examine potential differences between Saudis and Americans. To gain insights into these concerns, we conducted an online survey involving 53 Saudi Arabia citizens who are residing in the USA. We have collected data in the US instead of Saudi Arabia to avoid the regulatory challenges of the Saudi Data & Artificial Intelligence Authority (SDAIA). Responses from closed-ended questions revealed that Saudis score much lower than Americans when it comes to security attitudes, whereas they score lower when it comes to privacy concerns. We found no significant difference between Saudis' and Americans' acceptance of the use of FRT in different scenarios, but we found that Saudis trust advertisers more than Americans. Additionally, Saudis are more likely than Americans to agree that the government should strictly limit the use of FRT.
Amani Mohammed Alqarni, Daniel Timko, Muhammad Lutfor Rahman
2023-07-15T00:42:30Z
http://arxiv.org/abs/2307.07671v1
# Saudi Arabian Perspective of Security, Privacy, and Attitude of Using Facial Recognition Technology ###### Abstract Facial Recognition Technology (FRT) is a pioneering field of mass surveillance that sparks privacy concerns and is considered a growing threat in the modern world. FRT has been widely adopted in the Kingdom of Saudi Arabia to improve public services and surveillance. Accordingly, the following study aims to understand the privacy and security concerns, trust, and acceptance of FRT in Saudi Arabia. Validated Privacy Concerns (IUIPC-8), Security Attitudes (SA-6), and Security Behavior (SeBIS) scales are used along with replicate studies from Pew Research Center trust questions and government trust questions. In addition, we examine potential differences between Saudis and Americans. To gain insights into these concerns, we conducted an online survey involving 53 Saudi Arabia citizens who are residing in the USA. We have collected data in the US instead of Saudi Arabia to avoid the regulatory challenges of the Saudi Data & Artificial Intelligence Authority (SDAIA). Responses from closed-ended questions revealed that Saudis score much lower than Americans when it comes to security attitudes, whereas they score lower when it comes to privacy concerns. We found no significant difference between Saudis' and Americans' acceptance of the use of FRT in different scenarios, but we found that Saudis trust advertisers more than Americans. Additionally, Saudis are more likely than Americans to agree that the government should strictly limit the use of FRT. Privacy Concerns, Security Attitudes, Security Behaviors, FRT, Saudi Arabia. ## I Introduction Facial Recognition Technology is rapidly becoming widespread in the global context. It is estimated that by 2021 over 1 billion security cameras had been installed for both private and public purposes [5]. China has the largest video surveillance network in the world and now has nearly one billion surveillance cameras [39]. The East Asia/Pacific and the Middle East/North Africa regions are active adopters of facial recognition and other identity tools. South and Central Asia and the Americas also demonstrate sizable adoption of AI surveillance instruments. As FRT usage has grown rapidly, personal information disclosure and data collection have increased; however, providing societal safety while balancing individual privacy rights has been challenging [27]. FRT raises more anxiety than other identity methods, such as fingerprint and iris recognition, because it can be employed anytime without users noticing. Therefore, it is unsurprising that there are many concerns associated with it. It is slowly making an appearance within different applications like education, retail, access control to certain internet of things (IoT) devices, transportation, hospitality, and banking. As of now, private and public FRT are creating an unprecedented dilemma. If we connect private and public FRT to a network, where data can be easily and continuously obtained and combined with databases of public information, that could enable automated identification and tracking of people. A very early version of FRT was tested at the 2002 Super Bowl when law enforcement officials scanned people in the crowd without their permission and found several minor criminals. However, the experiment was subject to a high number of false positives. Following that test, FRT blossomed in the 2010s due to rapid developments in artificial intelligence [4]. 
The increasing use of facial recognition systems has yielded ample research opportunities as well as many potential uses and benefits in the public and private sectors. Nonetheless, there are still significant risk perceptions that accompany the adoption of this technology [40, 51]. To address the substantial security and privacy concerns, it is important to know whether there are meaningful differences in privacy and security preferences, beliefs, and attitudes between people of different nations. For example, a study has been done to compare security and convenience across four nations [22]. Attitude studies on specific applications that use FRT have also been done [28, 41, 50]. Recent studies have been done on adults in the United States to discover their trust level toward law enforcement's use of FRT [46]. Results revealed that more than 50% of U.S. participants trust law enforcement to use FRT. In contrast, when used by advertisers or technology companies, Americans are less accepting of facial recognition software. Most of the participants in the study by Smith et al. [47] believe that FRT can effectively and easily identify individual people and classify them by gender and race. With Vision 2030, Saudi Arabia is opening to the world with a unique blueprint for economic and social reform [14]. This includes one of the most vital components of its transformation program: digital transformation. By expediting the implementation of primary and digital infrastructure projects, the program aims to increase operational excellence in government, improve economic enablers, and enhance living standards. Artificial intelligence-based technology has been deployed more widely due to digital transformation. In the Fall of 2022, the Kingdom of Saudi Arabia approved the use of security surveillance cameras in the country [43]. As a result, approximately 22 places, including schools, universities, hospitals, clinics, medical cities, and private health facilities, were required to install surveillance cameras. To address security and privacy concerns, numerous studies have been conducted on the use of FRT. However, most research conducted on privacy concerns regarding FRT has focused on Western cultures [21, 23, 41, 47, 49, 51] and very little has been conducted in the Middle East. The behaviors people engage in regarding their privacy are firmly rooted in various cultural beliefs and values [26]. In Muslim countries, women dress differently in public [37]. As a Muslim-majority country, Saudi Arabia has a unique culture and norms. The majority of women cover their faces in Saudi Arabia. Consequently, it is more likely that men's identity information will be collected publicly than women's in a place like Saudi Arabia. Given the unique context of Saudi Arabians, our study focuses on the perspectives of Saudi Arabian citizens residing in the United States regarding the use of FRT for surveillance purposes. **In the following paper, we explore privacy and security concerns for Saudi Arabia by addressing the following research questions:** **RQ1:** _Are there meaningful differences in security attitudes, privacy concerns, and security behaviors?_ **RQ2:** _Are there meaningful differences in public acceptance of the use of FRT?_ **RQ3:** _Are there meaningful differences in opinions between locals of Saudi Arabia and the US in regard to FRT? And to what extent do gender and age impact security behaviors and privacy concerns for Saudis?_ 
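Differences of the kind asked about in RQ1–RQ3 are typically assessed by comparing the two groups' scale scores with a statistical test. The snippet below is purely illustrative: it applies a nonparametric Mann-Whitney U test (via scipy) to made-up SA-6-style scores for two hypothetical groups, and it is not the study's actual analysis, data, or choice of test.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical SA-6 scores (Likert averages on a 1-5 scale) for two
# illustrative groups; sample sizes and values are made up.
group_a = rng.normal(loc=3.2, scale=0.6, size=53).clip(1, 5)
group_b = rng.normal(loc=3.6, scale=0.6, size=60).clip(1, 5)

# Two-sided Mann-Whitney U test: are the score distributions different?
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```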
To address these questions, we administered an online survey to 53 Saudi Arabian citizens residing in the United States. These participants were recruited through word-of-mouth referrals and from online messaging channels for English-speaking Saudi groups in America. We recorded participants' responses to a seven-part questionnaire covering participant demographics, behavioral attitudes and privacy concerns, and public awareness and potential misuse of FRT. Additionally, we reused questions from Pew Research Center and Center for Data Innovation surveys to compare our responses to those of the general American public. Our research shows differences in security and privacy concerns related to FRT use in several surveillance scenarios.

**Our Contributions:** To understand Saudis' concerns, we analyze their opinions, awareness, and beliefs, and how they differ from those of the US population. **Our work makes the following contributions:**

1. To the best of our knowledge, this is the first quantitative study that measures the Saudi Arabian perspective on security and privacy attitudes and concerns regarding FRT.
2. We compared the security perspectives of Saudi Arabians with those of Americans to determine similarities and differences between both groups.

**Summary of Key Results:** This research provides the following key insights:

1. We confirm that the accuracy of FRT has an impact on the Saudi Arabian population's support for the technology. When FRT use is proposed for personal identification with varying degrees of accuracy, both Saudi and US approval consistently increased with higher accuracy. The difference between 80% accuracy and 100% accuracy in identifying suspects using FRT resulted in a 17% increase in approval of the use of FRT by police.
2. Our results show that Saudi Arabian participants have a higher propensity towards government limitation of surveillance. Across all propositions, we saw a consistently higher rate of agreement with the proposition that FRT use should be limited by the police and government.
3. Our analysis of security behaviors shows that the Saudi population has a low average acceptance level of FRT in our tested scenarios. Additionally, we found that Saudi participants did not believe FRT is accurate in effectively identifying someone's race.

In the following section, we present work related to FRT vulnerabilities, such as biometric data challenges. We then describe the methods and results of our survey. Finally, we discuss our findings and synthesize conclusions.

## II Related Work

FRT has been a focal point of research over the last few decades. In this section, we discuss works related to three key areas: Facial Recognition Technology vulnerabilities and biometric data challenges, privacy and security concerns of biometric recognition, and public acceptance of FRT. The implications of FRT increase with increasing self-disclosure on the internet [1]. Even where the advantages of FRT surpass the disadvantages, privacy challenges are the most significant implication of this technology, and much research has been done to reduce privacy concerns and increase awareness [17, 34].

**FRT Vulnerabilities & Biometric Data Challenges** Facial recognition can be described as a computer taking an image and calculating the distances between major structural features such as the nose and eyes. The facial recognition system uses a template to analyze images of people's identities [15].
Once a computer recognizes a face, it searches its existing template database to see if it can locate a matching code. Due to the COVID-19 pandemic, new challenges have emerged that complicate facial recognition systems. Face masks obstruct a significant part of the face, leading to low recognition performance. This holds not only for opaque masks but also for transparent face covers, because reflections produce variation that is non-trivial for the model. However, these difficulties can be overcome by focusing on parts of the face that remain uncovered, such as the iris and the wider periocular region of the face [13]. Many countries have collaborated directly with private companies to develop digital solutions based on their requirements, without the supervision of legislative institutions and in the absence of public discussion. This has revealed how such systems fail to meet even the most basic thresholds of legality, proportionality, accountability, necessity, legitimacy, or safeguarding.

Previous studies have discussed the challenges of different biometric technologies. A significant challenge in using biometric recognition systems is building a robust and appropriate sensor that minimizes recognition error [19]. A comparison between different biometric methods has shown that facial recognition biometrics have low accuracy, high cost, a large template size, low long-term stability, and a low security level [42]. One study analyzing how biometric authentication creates new challenges for security and privacy discussed the danger of the frequent biometric authentication that companies use to identify persons and evaluate their buying decisions. Access to people's identity information can lead to illegal spying by different agencies. As a result, security and privacy concerns will increase over time. However, signal processing plays an important role in providing solutions that decrease security and privacy concerns [31]. Another challenge is facial expression bias, which impacts facial recognition systems since nearly all popular FRT databases exhibit massive facial recognition biases [36]. The authors addressed FRT challenges including pose variations, variations in lighting conditions, illumination problems, and occlusions such as hats and eyeglasses [35, 9, 18, 44].

**Biometric Recognition: Privacy and Security Concerns** Since biometric features are not secret, it is possible to obtain a person's face biometric without their knowledge. This permits covert recognition of previously recorded people. Consequently, those who wish to remain anonymous could be denied that anonymity [38]. In one study, a machine learning methodology was presented to efficiently recognize masked faces, inspired by state-of-the-art algorithms; the proposed method achieved 99% accuracy with a classifier built on a masked-faces dataset. Due to COVID-19, masked faces have created a considerable challenge for facial recognition. However, this simple yet innovative approach effectively solves the problem and addresses security and social concerns [33]. However, there are growing concerns that when COVID-19 ends, data gleaned from these digital systems could be misused. The lack of adequate regulation does not guarantee that governments will restrict their measures, particularly where there is no specific legislation establishing rules concerning the processing, storing, or discarding of the collected data.
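The distance-based template matching described at the start of this subsection can be illustrated with a minimal sketch. This is not code from any of the surveyed systems; the embeddings, gallery, and threshold below are hypothetical stand-ins for the output of a real face-embedding model, and the example only shows the nearest-template comparison step.

```python
import numpy as np

def match_face(probe_embedding, gallery, threshold=1.0):
    """Return the gallery identity whose template is closest to the probe,
    or None if no template lies within the distance threshold.

    `gallery` maps identity -> template embedding (1-D numpy array). In a real
    system these vectors would come from a trained face-recognition model.
    """
    best_id, best_dist = None, np.inf
    for identity, template in gallery.items():
        dist = np.linalg.norm(probe_embedding - template)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None

# Toy usage with random 128-dimensional "embeddings".
rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + 0.01 * rng.normal(size=128)  # near-duplicate probe
print(match_face(probe, gallery))  # -> "person_a"
```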
**Public Acceptance and Discrimination Toward FRT** Using biometric systems for remote detection has raised social, cultural, and legal concerns [8]. In healthcare, FRT poses new challenges regarding privacy [30]. In schools, FRT alters the nature of schools and schooling along divisive, authoritarian, and oppressive lines [2]. One study compared privacy concerns regarding FRT within and outside of the United States justice system; additionally, the author presented the ethical and legal concerns associated with FRT [34]. Similarly, another study discussed privacy concerns across multiple deployment scenarios of facial recognition and strategies for its deployment; specifically, the author focused on ensuring that people have the same level of transparency and control [51]. A high level of general awareness about FRT has been reported, and FRT usage rates for automobile security and on smartphones are higher in China than in the US [22]. Another study analyzes the interplay between technical and social issues involved in the widespread application of video surveillance for person identification [6]. Public attitudes, trust, and familiarity with FRT scenarios have been analyzed [24]. Insights regarding the public's attitudes in China, the UK, Germany, and the US have been gathered, and the results provide input for policymakers and legal regulation [49]. In several studies, gender has revealed distinctions. More specifically, one author explores whether men and women think differently about privacy and finds that men are more likely to approve of the use of cameras with FRT in the workplace [48]. If an individual identifies as a white man, the software is right 99% of the time, while the accuracy drops to 35% for women with darker skin [29]. In the workplace, women are more concerned and less likely than men to accept the use of a camera that has FRT [48]. Gender bias is one of the consequences of using FRT, and related studies evaluate this potential bias [3, 25].

## III Methodology

The following is a description of the questionnaire used in the study, the procedure for recruiting participants, and the analyses we used to answer our research questions. Our study protocol was approved by the Institutional Review Board (IRB) of our university. To gather responses to our research questions, participants were administered the survey in English using Qualtrics. The survey contained quantitative response items to obtain empirically grounded insights into privacy and security concerns. It took approximately 26 minutes to complete the survey.

### _Questionnaire_

We administered the seven-part questionnaire after obtaining consent. Part 1 contains demographic questions; Part 2, general awareness and victim-related questions; and Part 3, Security Attitudes metric questions. We also reused Pew Research Center questions in Part 4, and Privacy Concerns and Security Behavior Intentions scale questions in Parts 5 and 6, respectively. Finally, Part 7 contains government trust questions.

**Part 1: Demographic questions** Participant demographics were determined by asking standard questions about their age, gender, education, occupation, and income.

**Part 2: Awareness and Victim related questions** Participants were asked questions about their awareness and perception of FRT. These questions were newly created with response options on a 5-point Likert scale.
These questions covered participants' awareness of the potential misuse of FRT and whether they have been a victim of the technology.

**Part 3: Security Attitudes (SA-6)** To measure Saudis' security attitudes towards FRT, we administered the previously validated Security Attitudes scale (SA-6) [11]. Here, we used a five-point scale with "Strongly agree", "Agree", "Neutral", "Disagree", and "Strongly disagree", along with a "Prefer not to answer" option. Participants who selected "Prefer not to answer" for these questions were not counted in the analysis.

**Part 4: Pew Research Center Questions** In this part, the Pew questions [47] were administered to participants. Participants provided answers about trust, efficiency, and acceptance of using FRT in different applications. Response options for this part are multiple-choice. Participants who selected "Prefer not to answer" for these questions were not counted in the analysis.

**Part 5: Privacy Concerns (IUIPC)** To measure Saudis' privacy concerns towards FRT, we administered the previously validated Internet Users' Information Privacy Concerns scale (IUIPC-10) [16]. Here, we used a five-point scale with "Strongly agree", "Agree", "Neutral", "Disagree", and "Strongly disagree", along with a "Prefer not to answer" option. Participants who selected "Prefer not to answer" for these questions were not counted in the analysis.

**Part 6: Security Behavior Intentions Scale (SeBIS)** To measure Saudis' security intentions towards FRT, we administered the previously validated Security Behavior Intentions Scale [10]. Here, we used a five-point scale with "Never", "Rarely", "Sometimes", "Often", and "Always", along with a "Prefer not to answer" option. Participants who selected "Prefer not to answer" for these questions were removed from the sample.

**Part 7: Government Trust Questions** Participants provided answers to replicated study questions [7] evaluating internet users' opinions on FRT by age and gender, internet users' opinions on the use of FRT by police, and internet users' opinions on regulating surveillance cameras and FRT. Response options for this part included a five-point Likert-type scale ranging from strongly agree to strongly disagree. Participants who selected "Neither agree nor disagree" for these questions were excluded from the sample. Strongly agree and agree options were considered as one group of agreement, and strongly disagree and disagree options were considered as one group of disagreement for our analysis.

### _Recruitment_

To avoid the regulatory challenges of the Saudi Data & Artificial Intelligence Authority (SDAIA) [32], and as per the recommendation of our IRB, we collected data from the Saudi population residing in America. The participants were recruited through word-of-mouth referrals and online channels. We posted our study advertisement in a WhatsApp group comprising Saudi Arabian students who are studying in the US. Data were collected through a survey instrument targeted at Saudi people above the age of 18 who identified themselves as Saudis based on nationality. As a token of appreciation for participation, 1 random participant out of 100 received a 50-dollar Amazon gift card.

### _Analysis_

**RQ1 analysis:** We present the score for each scale that we asked our participants about: Privacy Concerns (IUIPC-8), Security Attitudes (SA-6), and Security Behaviors (SeBIS-30). Then, we used linear regression to analyze the three scales.
We set FRT familiarity, gender, age, income, education, and occupation as independent variables for each scale analysis. Every dependent variable was fitted with a linear regression using all covariates. For each categorical variable, we selected the majority category as the baseline. All independent variables are listed in Table I.

**RQ2 analysis:** We present the descriptive statistics for each item in Part 4. We compare the response frequencies for the Likert scale questions for both populations. We used the non-parametric Mann Whitney U test (MWU) since we do not have to assume that the variances and sample sizes are equal for both populations.

**Data cleaning:** A number of respondents did not declare they were from Saudi Arabia and did not provide their name. This is because one of the authors posted an advertisement on LinkedIn, and some US people filled out the survey without reading the participation requirements. Hence, we removed 22 participants from those categories. As a result, we were left with a total of 94 participants. We then selected surveys that had a 100% completion rate. Thus, we removed 41 entries from people who completed the survey only partially and missed some crucial parts of the survey. Overall, we removed 63 participants in total. We then produced a master file of 53 participants for further analysis. As our survey questions were optional, participants had the freedom not to answer them.

## IV Results

In this section, we first describe our participants' responses to the survey. Then, we present the regression analyses of Security Attitudes, Privacy Concerns, and Security Behaviors. Finally, we analyze Saudis' responses to the Pew Research Center questions and government trust questions toward FRT, and we compare Saudis and Americans. Results are presented based on an online survey that measures overall familiarity, trust, public acceptance, opinions, and perceived efficiency of FRT. Responses to privacy and security questions were expected to show significant differences between genders, but given the limited sample size, we only found a couple of significant differences. We also noticed that the Saudi sample did not differ significantly from the American sample for the replicated studies.

### _Demographics_

From May 2022 until September 2022, we collected 53 valid responses. Of the 53 participants recruited for the study, 17% were between 18-24 years old, 69% were between 25-34 years old, and 13% were between 35-44 years old. The identified gender distribution for the study was 56.6% men and 43.4% women. More than 74% of the participants had a Bachelor's or Master's degree. 34% of the participants are graduate students, whereas the rest are distributed among other occupations. Finally, for income, 14% of Saudis preferred not to reveal their income. Participant characteristics and additional demographic information can be found in Table II.

### _Public Awareness & Technology Victims_

**Public Awareness** Our analysis showed that 71.7% of Saudis have heard about FRT, while 28.3% have never heard about it. Furthermore, 71.7% of Saudis agree that they are very aware of the privacy risks concerning FRT, while 11.3% disagree. Additionally, 68% of Saudis agree that they are aware that FRT scans can be captured easily and remotely, while 13.2% of them disagree.

**Technology Victims** Our participants scored an average of 2.87 (\(\sigma\) = 0.56, min=1.5, max=4), indicating that they rarely have been a victim of FRT.
73.6% of Saudis have never been a victim of somebody accessing their facial information to extort them for money, while only 3.8% of them have been a victim. 73.6% of Saudis have never fallen victim to somebody using their facial information under their name, while 5.7% of them have been a victim. 64.2% of Saudis have never had their facial information misused for any purpose, such as identity theft, while 5.7% of them have been a victim. 56.6% of Saudis have never been a victim of FRT flaws such as false positive identification, while 1.9% of them have faced the risk of error due to flaws in the technology.

### _Security Attitudes (SA-6)_

We analyzed the responses to the SA-6 security attitude scale questions in the survey. Potential scores on this scale range from 6-30, with higher numbers indicating a more positive attitude toward security behaviors. Overall, participants scored an average of 21.3 (\(\sigma\) = 3.95, min=11, max=29), meaning that Saudis scored much lower than the average U.S. population sample [11, 12]. As a result, Saudis score in a lower range compared to Americans for security attitudes on this scale.

**Regression Model Analysis (SA-6)** Our regression model (Table III) includes Saudis who are familiar with FRT (Saudi familiar mean = 21.9, Saudi unfamiliar mean = 20.8), but this factor is not significant. We also discovered one significant relationship between security attitudes and the independent variable income (covariate). **Income covariate:** Saudis whose income falls within the "$100,000 to $149,000" range were associated with a 1.21-point increase in positive attitude toward security (_p_ = 0.015).

### _Pew Research Center Question Analysis_

#### IV-D1 Saudi Public Opinion on Automated FRT

In this part of the survey, we replicate a study that was conducted on the US population by the Pew Research Center [47]. We provided our participants with the same questions and compared the answers between the two populations to analyze the extent of the differences. The broad questions can be found in Table V, which provides a brief description of the FACE topics taken from the Pew American Trends Panel [45]. FACE1 references question 1 from the Pew Center Survey and covers familiarity. FACE2 references questions 2-4 in the Pew Center Survey and covers efficiency. FACE3 references questions 5-7 in the Pew Center Survey and covers trust in different scenarios. FACE4 references questions 8-11 in the Pew Center Survey and covers the acceptance of FRT in different scenarios.

#### IV-D2 Saudi Public Perception of Automated FRT

In this section, we provided our participants with the same questions from the Pew Research Center study conducted on Americans [47], and we compare the results between Saudis and Americans. We present the descriptive statistics for each section and use the Mann Whitney U test to determine whether there are any significant differences between the two populations.

**Public Familiarity** 89% of Saudis have heard something about automated FRT. 21% said that they have heard a lot about automated FRT, while only 11% of Saudis have not heard anything at all about facial recognition (Table IV).

**Public Beliefs on FRT Efficiency** In general, our participants score an average of 2.31 (\(\sigma\) = 0.79, min=1, max=4), indicating that they think the use of FRT is not effective when it comes to accurately identifying someone's race.
As shown in Table IV, 22.6% of Saudis consider FRT effective at accurately identifying individual people, and 18.9% consider it effective at accurately identifying someone's race. In addition, 9.4% of Saudis think that FRT is not effective at all at accurately identifying someone's race. We found no significant difference between Saudis' and Americans' opinions on the efficiency of FRT in different scenarios.

**Public Trust in Different Scenarios** Overall, our participants scored an average of 2.65 (\(\sigma\) = 0.72, min=1, max=4), indicating that they do not trust law enforcement agencies, technology companies, or advertisers a great deal to use FRT. Among the three scenarios, Saudis' trust is highest for the "Technology Companies" scenario and lowest for the "Advertisers" scenario. As depicted in Table IV, only 5.7% of Saudis trust advertisers a great deal to use FRT in the advertisement scenario, while 41.5% of them do not trust advertisers at all. A significant difference between Saudis' and Americans' trust of advertisers using FRT was found (MWU, \(p\) = 0.01).

**Public Acceptance in Different Scenarios** In general, our participants score an average of 1.88 (\(\sigma\) = 0.48, min=1, max=3), indicating that they are more likely to refuse than accept the use of FRT. 47.2% of Saudis accept the use of FRT in the scenario of "Law enforcement assessing security threats in public spaces," while 17% accept its use in the scenario "Advertisers seeing how people respond to public ad displays." From Table IV, we can see that 60% of Saudis do not accept the use of FRT in the advertisement scenario, while only 30% of them do not accept its use in the law enforcement scenario. In addition, we found no significant difference between Saudis' and Americans' acceptance of the use of FRT in different scenarios.

### _Privacy Concerns (IUIPC)_

We now consider responses to the IUIPC-10, a well-known scale that measures privacy concerns. This scale consists of 10 items divided into three dimensions: three control items (ctrl1, ctrl2, ctrl3), three awareness items (awa1, awa2, awa3), and four collection items (coll1, coll2, coll3, coll4). Since IUIPC-8 yielded a statistically significantly better fit under the Vuong test than IUIPC-10, we trim ctrl3 and awa3 [16]. We converted our 5-point scale to a 7-point scale using a conversion scale mapping [20]. Potential scores for IUIPC-8 range from 8-56, with higher scores indicating higher levels of privacy concern. Our participants scored an average of 45.91 (\(\sigma\) = 8.11, min=22, max=56), indicating that they tend to be more privacy sensitive than not. Our regression model (Table VI) includes Saudis who are familiar with FRT (Saudi familiar mean = 47.44, Saudi unfamiliar mean = 45.68); we found that several variables were negatively correlated with privacy concerns. **Education covariate:** Saudis who preferred not to reveal their education degree were associated with a 2.89-point decrease in privacy concerns (\(p\) = 0.002).

Table IV (excerpt), FRT familiarity (FACE1):

| Group | N | A lot | A little | Nothing at all |
|---|---|---|---|---|
| Saudi group | 53 | 20.8% | 67.9% | 11.3% |
| U.S. group | 4200 | 24.3% | 63.0% | 12.6% |
**Income covariate:** Saudis whose income falls under $25,000 were associated with a 0.8-point decrease in privacy concerns (_p_ = 0.029), and Saudis whose income is in the "$150,000 or above" range were associated with a 2.7-point decrease in privacy concerns (_p_ = 0.008). **Occupation covariate:** Saudis who have an occupation in skilled labor were associated with a 2.38-point decrease in privacy concerns (_p_ = 0.024), and Saudis who are college students were associated with a 1.25-point decrease in privacy concerns (_p_ = 0.027).

### _Security Behavior (SeBIS-30)_

We analyzed the responses to the Security Behavior Intention Scale (SeBIS-30) questions in the survey, where higher numbers indicate a more positive attitude toward security behaviors. The SeBIS-30 questionnaire examines whether cultural differences influence end-users' security behavior. Overall, participants scored an average of 95.67 (\(\sigma\) = 14.66, min=72, max=159).

**Regression Model Analysis (SeBIS-30)** Our regression model (Table VIII) includes Saudis who are familiar with FRT (Saudi familiar mean = 96, Saudi unfamiliar mean = 95.4), but this factor is also not significant. We found two significant relationships between security behaviors and the independent variables income and occupation (covariates). **Income covariate:** Saudis whose income is in the "$150,000 or above" range were associated with a 2.15-point increase in positive behaviors toward security (_p_ < 0.001). **Occupation covariate:** Saudis who are college students were associated with a 1.25-point decrease in positive behaviors toward security (_p_ = 0.027).

### _Government Trust Questions Analysis_

**Saudis' and Americans' Opinions on FRT** We found that 44.0% of Saudis think that the government should strictly limit the use of FRT, while 26.2% of Americans support limiting the use of FRT. Likewise, 43.1% of Saudis want the government to limit the use of FRT even if it would prevent stores from using this technology to stop shoplifting, while only 23.8% of Americans would agree to such a tradeoff. As highlighted in Table IX, 37.3% of Saudis want the government to limit the use of FRT even if it means that airports can't use FRT to speed up security lines, while 20% of Americans would agree to such a tradeoff. Across the four propositions, we found that the Saudi sample had a higher average agreement that the government should strictly limit the use of FRT. In conclusion, there is a difference in perspectives between the two populations regarding the limiting of FRT.

**Saudis' and Americans' Opinions on FRT by Age and Gender** Overall, we conclude that there is no significant difference between male and female Saudis who agree or disagree with the strict limitation of FRT use by the government. For Saudis, disagreement is prevalent among the 18-34 age group regarding the limitation of FRT use in different scenarios. As a result, both populations share a similar opinion on this proposition. 35.7% of Saudi men agree with limiting the use of FRT even at the expense of public safety, compared to 54.5% of Saudi women. Likewise, 37.9% of Saudi men agree with limiting the use of FRT when it comes to reducing shoplifting and speeding up security lines, compared to 50.0% of Saudi women.
For Americans, there were differences in these opinions based on age, with older Americans being less likely to disagree with government limitations on the use of FRT. For example, 52% of American 18- to 34-year-olds disagree with limitations that come at the expense of public safety, compared to 54.5% of Saudi respondents. We noticed that Saudi women were much more likely to support limiting the use of FRT than US women in all the scenarios in Table VII.

**Saudis' and Americans' Opinions on the Use of FRT by Police Departments** After asking Saudi participants whether police departments should be allowed to use FRT to help find suspects, we found that the number of Saudis who agree with and support using this technology increased with its accuracy. For Saudis, if the software is accurate 80% of the time, 54.9% agree with using it, whereas 25% disagree. If the software is accurate 90% of the time, 62.0% of the respondents agree with using it, and 22.0% disagree. If the software is accurate 100% of the time, 74.0% of the participants agree with using it, while 12.0% disagree. When U.S. participants were asked whether police departments should be allowed to use FRT to help find suspects, the share of Americans who agree with and support using this technology also increased with its accuracy. For Americans, if the software is accurate 80% of the time, 39% agree with using it, whereas 32% disagree. If the software is accurate 90% of the time, 47% of the respondents agree with using it, and 25% disagree. If the software is accurate 100% of the time, 59% of the participants agree with using it, and 16% disagree (Table IX). Even at an FRT accuracy of 100%, 12% of Saudis and 16% of Americans do not believe that FRT should be used by the police to find suspects.

**Saudis' Opinions on Regulating Surveillance Cameras and FRT** When participants were asked whether the government should limit the use of surveillance cameras, Saudis were more likely to support such limitations. When it comes to public safety, 23.5% of Saudis agree to limit the use of surveillance cameras, and 32.7% would agree to limit the use of facial recognition. Americans were more likely to support limiting the use of surveillance cameras (36.2%) than FRT (26.2%). The support for limits on surveillance cameras, even if it is going to reduce shoplifting, drops from 44.2% to just 23.5% for the Saudi group, while the support for limiting the use of facial recognition drops only slightly, from 44.0% to 43.1%. Alternatively, when it comes to public safety, 17.9% of Americans agree to limit the use of surveillance cameras, and 18.3% would agree to limit the use of FRT. These findings are outlined in Table IX.

#### V-B1 Findings from the comparison between the two populations

We compare our participants to the participants from the Center for Data Innovation study [7]. Since we do not have to assume that the variances and sample sizes are equal for the two populations, we chose to apply the Mann Whitney U test to find whether there are any significant differences between Saudis and Americans. We found a couple of significant differences between the two populations in opinions on limiting the use of FRT (MWU, \(p\) = 0.019). This indicates that Saudis are more in agreement on the limitation of the use of FRT than Americans in some scenarios.
In other words, Saudis tend to be more in agreement than Americans when it comes to limiting the use of FRT, even when it is used to reduce shoplifting (MWU, \(p\) = 0.005) or if it means that airports can't use it to speed up security lines (MWU, \(p\) = 0.012). Regarding opinions on surveillance cameras, Saudis tend to be less in agreement than Americans when it comes to limiting the use of surveillance cameras when they are used to reduce shoplifting (MWU, \(p\) = 0.006). For the remaining questions, the two populations shared similar opinions on the use of FRT by the government.

**Sample Size and Statistical Power** After our initial experiment, we performed a post hoc power analysis to determine whether our non-significant results were due to the modest sample size (N=53) of our Saudi population. With power (1-\(\beta\)) set at 0.95 and \(\alpha\) = 0.05 using a two-tailed comparison of means, we found a between-group detectable effect size of 0.5 for the sample size used in this study. Thus, with our current population we are able to detect large and medium effect sizes with 95% power when applying the Mann Whitney U test.

## V Discussion

In our research, we aimed to understand the perceptions and attitudes of people with respect to FRT. The results of our study have some implications, and we acknowledge that a larger data set may be necessary to draw conclusions about the broader Saudi population. Due to the skewness of the data collected, the results might be biased toward Saudis who are younger and more educated. Accordingly, the conclusions we draw are not representative of all Saudis. Therefore, our findings cannot be assumed to represent public perceptions in the Kingdom of Saudi Arabia.

To answer RQ1, we found that there were some differences in security and privacy concerns and behaviors. One of our findings is that Saudis score much lower than Americans when it comes to security attitudes. While we do not claim that this finding would hold for a representative sample of Saudis, it does illustrate a potential nuance in the cultural differences between the two groups. Based on the results, we also found a similar change in opinions on FRT usage by police depending on its efficacy.

In regard to RQ2, another finding showed that Saudis are more likely than Americans to agree that the government should strictly limit the use of FRT. As mentioned earlier, this finding cannot be claimed to represent the public in Saudi Arabia. Both populations share the same ordering of acceptance and trust in this technology across four different scenarios, whereby they trust the law enforcement scenario the most and the advertisers' scenario the least. This could plausibly hold for a representative sample of Saudis and other populations as well.

For RQ3, we asked to what extent gender and age impact security behaviors and privacy concerns, but we did not find any impact on either security behaviors or privacy concerns. We cannot guarantee that this result would be the same for women living in the Kingdom. In several studies, gender has revealed differences. In addition, most women living in the Kingdom of Saudi Arabia wear the hijab, and a large number of them cover their faces. Women residing in the United States might not cover their faces, and their responses regarding privacy concerns might not be the same when they return to the Kingdom.
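To make the statistical procedures reported above concrete, the following minimal sketch shows a two-sided Mann-Whitney U comparison between hypothetical Saudi and U.S. Likert responses, together with a post hoc sensitivity calculation. The response arrays are illustrative placeholders, and the power calculation uses an independent-samples t-test power routine as a stand-in approximation for the Mann-Whitney test, not the exact procedure used in the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.power import TTestIndPower

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for one survey item; the real study compares 53 Saudi responses against a much
# larger U.S. sample from the replicated surveys.
rng = np.random.default_rng(42)
saudi = rng.integers(1, 6, size=53)
us = rng.integers(1, 6, size=200)

# Two-sided Mann-Whitney U test, as used for the Pew and government-trust items.
stat, p_value = mannwhitneyu(saudi, us, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")

# Post hoc sensitivity: smallest standardized effect detectable with 95% power
# at alpha = 0.05 for these group sizes (t-test approximation of the MWU test).
analysis = TTestIndPower()
detectable = analysis.solve_power(effect_size=None, nobs1=53, alpha=0.05,
                                  power=0.95, ratio=200 / 53,
                                  alternative="two-sided")
print(f"Detectable effect size (Cohen's d) ~ {detectable:.2f}")
```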
**Recommendations** Regarding the acceptance rates of FRT, we found that opinions improved along with the accuracy of the technology. This bolsters the case made by prior research that false identification by FRT in law enforcement is a significant concern [51]. As a result, more research could be done to improve the perception of FRT as a reliable technology. Regarding the regulation of surveillance cameras, we found that the Saudi population was more likely to support limitations on the use of FRT. Although this is not a topic explored in depth in this study, future work could pose follow-up questions to explore this difference. Additionally, our results show that public belief in the efficiency of FRT is low across multiple use scenarios. This is an issue that should be addressed by both public and private institutions detailing how they use FRT. More should be done to make information detailing the accuracy of FRT in different scenarios available.

**Future Work** While our work includes information on the differences between Saudi and American opinions on FRT in several contexts, there are many more left to explore. FRT is expanding into many fields such as hospitality services and the financial sector. Understanding the differences in the perceptions of FRT in different cultural contexts could provide important insights into how these concerns should be addressed in the future.

**Limitations** Our study includes certain limitations that are common for this type of research. First, since we collected only 53 responses, and due to the regulatory challenges of the Saudi Data & Artificial Intelligence Authority (SDAIA), we obtained results from a modest sample that does not statistically represent the entire Saudi population. Instead, our study focused on the Saudi population residing in America. Additionally, while our survey respondents were residing within the US, we did not consider how long they have been residing in the United States as a factor in our analysis. Differences in the length of their stay in the United States may affect their responses to these topics. However, our sample still provides valuable insights into perceptions of FRT. Second, most of our participants who completed the survey were students and might be more educated than the average person in Saudi Arabia, and we compared the results for the Pew scales to a more representative sample of the U.S. population. Thus, our study outcomes cannot be generalized to Saudis in Saudi Arabia. Third, survey responses were only collected from Saudis who currently reside in the United States to avoid confounds related to availability and popularity, as well as cultural differences. Nevertheless, a more representative sample of Saudis is recommended and could be more valuable for further research. Lastly, we noticed an inconsistency in responses between a Pew Research Center question and a general question regarding FRT in our survey. This may be due to the Pew Research Center question being double-barreled. In particular, 71.7% of participants reported having heard about FRT in the general questions, while 89.9% indicated they had heard or read about the development of automated FRT in the Pew Research Center question.
The Pew Research Center question being asked later in the survey, the difference in the complexity of the question, or the extra information about FRT embedded in the question itself giving additional context may have caused participants to indicate they had more understanding of the technology than in the earlier response.

## VI Conclusion

FRT is a growing area of mass surveillance that raises privacy issues and is seen as an increasing menace in today's society. Although there are several studies on the security and privacy concerns of FRT in the context of Western cultures, there have been no prior studies for Muslim-majority countries such as Saudi Arabia, which have unique cultures and norms. To gain insights into Saudi Arabians' security concerns and attitudes toward FRT, we designed and conducted an online survey with 53 participants. Our study sheds light on whether Saudis and Americans have meaningful differences in behavioral attitudes and concerns regarding the privacy and security of FRT that they could realistically encounter as part of their everyday activities. We used previously validated metrics to answer our research questions and found a couple of differences between Saudis and Americans. In terms of security attitudes and privacy concerns, Saudis are more likely to score lower than Americans. When we compared our sample to a U.S. representative sample, we found that the acceptance of FRT by Saudis and Americans in various scenarios does not vary significantly. In addition, more Saudis than Americans believe FRT should be strictly limited by the government. Moreover, future studies on FRT can be guided by our findings, and efforts should be made to ensure the privacy of individuals as FRT advances. Our study findings have implications for policymakers, researchers, and the public regarding the use of FRT in Saudi Arabia.

## VII Acknowledgement

The authors are grateful to all the participants who took part in our research. We appreciate the PST'23 anonymous reviewers for their thoughtful feedback and comments, which helped us improve the final version of the paper. We also want to thank Soha Khoso for proofreading the draft version of the paper.
2303.15782
CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects
We present CARTO, a novel approach for reconstructing multiple articulated objects from a single stereo RGB observation. We use implicit object-centric representations and learn a single geometry and articulation decoder for multiple object categories. Despite training on multiple categories, our decoder achieves a comparable reconstruction accuracy to methods that train bespoke decoders separately for each category. Combined with our stereo image encoder we infer the 3D shape, 6D pose, size, joint type, and the joint state of multiple unknown objects in a single forward pass. Our method achieves a 20.4% absolute improvement in mAP 3D IOU50 for novel instances when compared to a two-stage pipeline. Inference time is fast and can run on a NVIDIA TITAN XP GPU at 1 HZ for eight or less objects present. While only trained on simulated data, CARTO transfers to real-world object instances. Code and evaluation data is available at: http://carto.cs.uni-freiburg.de
Nick Heppert, Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu, Rares Andrei Ambrus, Jeannette Bohg, Abhinav Valada, Thomas Kollar
2023-03-28T07:52:15Z
http://arxiv.org/abs/2303.15782v1
# CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects ###### Abstract We present CARTO, a novel approach for reconstructing multiple articulated objects from a single stereo RGB observation. We use implicit object-centric representations and learn a single geometry and articulation decoder for multiple object categories. Despite training on multiple categories, our decoder achieves a comparable reconstruction accuracy to methods that train bespoke decoders separately for each category. Combined with our stereo image encoder we infer the 3D shape, 6D pose, size, joint type, and the joint state of multiple unknown objects in a single forward pass. Our method achieves a 20.4% absolute improvement in mAP 3D IOU50 for novel instances when compared to a two-stage pipeline. Inference time is fast and can run on a NVIDIA TITAN XP GPU at 1 HZ for eight or less objects present. While only trained on simulated data, CARTO transfers to real-world object instances. Code and evaluation data is available at: carto.cs.uni-freiburg.de ## 1 Introduction Reconstructing 3D shapes and inferring the 6D pose and sizes of objects from partially observed input observations remains a fundamental problem in computer vision with applications in Robotics [10, 11, 13, 20] and AR/VR [8, 44]. This object-centric 3D scene understanding problem is challenging and under-constrained since inferring 6D pose and shape can be ambiguous without prior knowledge about the object of interest. Previous work has shown that it is possible to perform category-level 3D shape reconstruction and 6D pose estimation in real-time [7], enabling the reconstruction of complete, fine-grained 3D shapes and textures. However, there are a wide variety of real-world objects that do not have a constant shape but can be articulated according to the object's underlying kinematics. There has been great progress in articulated object tracking [5, 37, 32, 9] and reconstruction [10, 26] from a sequence of observations. However, a sequence of observations is cumbersome since it often requires prior interaction with the environment. In contrast, object reconstruction from a single stereo image through inferring latent information about an object a priori enables both grasping and manipulation of previously unknown articulated objects. Additionally, estimates from a single image can also serve as a good initial guess for object tracking approaches [37]. Previous approaches to articulated object reconstruction from a single observation use a two-stage approach [16] where first objects are detected using, e.g., Mask-RCNN [4]. Then, based on the detection output, object properties, e.g. part-poses and NOCS maps [35], are predicted and the object is reconstructed using backward optimization [24]. Such an approach is complex, error prone, does not scale across many categories, and does not run in real-time. To mitigate the aforementioned challenges, building on ideas from [7] - a single-shot approach to output complete 3D information (3D shape, 6D pose, and sizes of multiple objects) on a per-pixel manner - we present "Category and Joint Agnostic Reconstruction of ARTiculated Objects" (CARTO). First, extending [24], we train a robust category- and joint-agnostic 3D decoder [28, 3, 24] by learning disentangled latent shape and joint codes. 
The shape code encodes the canonical shape of the object while the joint code captures the articulation state of the object consisting of the type of articulation (e.g., prismatic or revolute) and the amount of articulation (i.e. joint state). To disentangle both codes we impose structure among our learned joint codes by proposing a physically grounded regularization term. Second, in combination with our stereo RGB image encoder we can do inference in a _single-shot manner_ to detect the objects' spatial centers, 6D poses and sizes as well as shape and joint codes. The latter two can then be used as input to our category- and joint-agnostic decoder to directly reconstruct all detected objects. To evaluate CARTO, we first evaluate the reconstruction and articulation state prediction quality of our category- and joint-agnostic decoder and compare against decoders trained on a single category. In an ablation study we show the necessity of our proposed joint code regularization over naively training the joint codes. We then quantitatively compare our full pipeline to a two-stage approach on synthetic data and show qualitative results on a new real-world dataset. Our main contributions can be summarized as follows: * An approach for learning a shape and joint decoder jointly in a category- and joint-agnostic manner. * A single shot method, which in addition to predicting 3D shapes and 6D pose, also predicts the articulation amount and type (prismatic or revolute) for each object. * Large-scale synthetic and annotated real-world evaluation data for a set of articulated objects across 7 categories. * Training and evaluation code for our method. ## 2 Related Work Related work to CARTO includes neural fields for reconstruction, implicit reconstructions of non-rigid objects and articulated object detection, pose estimation, and reconstruction. **Neural Fields for Reconstruction**: Neural fields, i.e. coordinate-based multi-layer perceptrons [39], have become a popular method for reconstruction in recent years. These methods encode continuous functions that model various scene properties, such as Signed Distance [28], radiance [23], and occupancy [21]. Variations of these include hybrid discrete-continuous representations that employ an external data structure, i.e. a grid or an octree, to partition the implicit function [29, 43]. The encoded shape can then be extracted via sphere tracing [17] after querying the implicit function repeatedly. Recent advances in differential rendering have enabled learning of shapes, as well as other scene properties such as appearance, only from images and without the need for 3D supervision [23]. Our approach falls into the paradigm of using neural fields for articulated object reconstruction and further learns a complete system for detection, pose estimation, and articulated shape reconstruction from a single observation. **Implicit Reconstruction of Non-Rigid Objects**: Going beyond static scenes with rigid objects, [1] handle dynamic scenes while [27, 33] focus on reconstructing humans by leveraging their strong shape and kinematic prior as well as the amount of readily available datasets. [42] propose a general reconstruction framework to reconstruct any non-rigid entity (i.e. humans, animals, or objects) given only an RGB-video without requiring a category-specific shape template, while [14] focus on point cloud input data and split the prediction into a canonical shape and a deformation field. 
One downside of general reconstruction methods is that they do not leverage the rigidity and kinematic constraints of articulated objects. To explicitly use such priors, [26] propose a method that processes a multi-view sequence of a moving object and discovers its parts as well as their respective reconstruction and kinematic structure in an unsupervised way. Going one step further, [36] learn a shape and appearance prior for each category which allows them to model accurate reconstructions of articulated objects with only 6 given views. Similarly, [34] propose a Neural Radiance Field (NeRF) based method that can reconstruct objects on a category level given some images of an object. Also leveraging a learned shape prior over objects of the same category, [24] reconstructs articulated objects using only a single observation by optimizing for a latent shape code; similarly to [34], their method is only tested on revolute objects. Our approach also represents objects through low-dimensional codes. However, by disentangling the shape from the articulation state in our code, our method becomes category- and joint-agnostic. Other multi-category models, such as [10], reconstruct objects given a point cloud in two different articulation states, which limits the approach to objects with a single joint; [25] uses many observations before and after articulation to reconstruct an object. Most similar to our disentangled representation, [40] learns a latent space in which part-pairs are close if one part-pair can be transformed into another through a valid joint transformation. **Articulated Object Detection, Pose Estimation and Reconstruction**: Work in articulated object detection and pose estimation typically first requires the detection of individual parts and their respective poses from a sequence of images demonstrating the articulation [5, 9, 10, 37, 32] or from a single image [15, 16, 22]. [15] combines this part-level view of articulated objects with a holistic object-centric view as done for rigid objects [35]. Similarly, our work predicts the poses for articulated objects in the scene in a single pass from a single stereo or RGB-D image, without the need to detect individual parts first. Most similar to us, [7, 8] also detect the pose, shape and scale of multiple objects from an RGB-D observation via a single-stage approach. Both methods perform category-agnostic detection of unseen object instances at test time, however, they are limited to rigid objects, with [7] using a point cloud decoder for shape reconstruction, while [8] employs a latent shape and appearance prior. Our method also performs category-agnostic object detection and reconstruction, and we extend [7, 8] to handle articulated objects of multiple types in a single network forward pass, thus enabling fast and accurate articulate shape, pose and size estimation from a single stereo image. ## 3 Technical Approach In this section, we detail our proposed single-shot detector for articulated objects. Our method consists of two individually learned components: an encoder that predicts latent object codes as well as poses in the camera frame and a decoder that reconstructs objects in a canonical frame that can be transformed into the camera frame through the predicted pose. An overview of our approach is shown in Fig. 2. ### Encoder Our encoder builds upon CenterSnap [7] and SimNet [13]. 
For each pixel in our input stereo image \(I^{W\times H\times 6}\) we predict an importance scalar \(\psi\), where a higher value indicates closeness to a 2D spatial center in the image of the object. The full output map of \(\psi\) represents a heatmap over objects. Additionally, we predict a dense pixel map of canonical 6D poses for each articulated object independent of their articulation state. Further, we extend [7], which predicted a shape code \(\mathbf{z}_{\text{s}}\in\mathbb{R}^{D_{s}}\), to also predict a joint code \(\mathbf{z}_{\text{j}}\in\mathbb{R}^{D_{j}}\) for each pixel. These codes can be used to predict the articulation state of the object. Additionally, while not needed for our full pipeline, to guide the network towards important geometric object features, we also predict a semantic segmentation mask as well as 3D bounding boxes, again on a pixel-level. We use these predictions for constructing our baseline as described in Sec. 4.3. Last, we also predict a depth map \(D^{W\times H}\). The full network architecture is given in Sec. S.1.1. During inference of our full pipeline, given the predicted heatmap of importance values, we use non-maximum suppression to extract peaks in the image. At each peak, we then query the feature map to get the pose, shape, and joint code. We convert our 13-dimensional pose vector to a scale value \(\in\mathbb{R}\) of the canonical object frame, a position \(\in\mathbb{R}^{3}\) and using [2] to an orientation \(\in\mathbb{R}^{3\times 3}\) in the camera frame. We then use the shape and joint code to reconstruct each object in its canonical object frame using our decoder. After reconstruction, we use the predicted pose to place the object in the camera frame as shown in Fig. 2. ### Decoder Given a latent code, the decoder reconstructs object geometry, classifies the discrete joint type (i.e., as prismatic or revolute), and predicts the continuous joint state. To disentangle the shape of the object from its articulation state, we split the latent code in two separate codes: a shape and a joint code. We assign the same unique shape code \(\mathbf{z}_{\text{s}}\in\mathbb{R}^{D_{s}}\) to an object instance in different articulation states, where an articulation state is expressed through its own joint code variable \(\mathbf{z}_{\text{j}}\in\mathbb{R}^{D_{j}}\). We structure our decoder as two sub-decoders, one for reconstructing the geometry (Sec. 3.2.1) and the other for predicting the joint type _jt_ and state \(q\) (Sec. 3.2.2). See Sec. S.1.2 for a full architecture description. #### 3.2.1 Geometry Decoder The geometry decoder \(\phi_{\text{geom}}\) reconstructs objects based on a shape code \(\mathbf{z}_{\text{s}}\) and joint code \(\mathbf{z}_{\text{j}}\). In principle, the approach is agnostic to the specific decoder architecture as long as it is differentiable with respect to the input latent codes. While there are many potential options such as occupancy maps [21] as adopted in [10], we use signed distance functions (SDFs) [28] due to the proven performance in [24, 28]. Specifically, in the case when using SDFs as our geometry decoder, the model takes as input a point in 3D space \(\mathbf{x}\) as well as a shape \(\mathbf{z}_{\text{s}}\) and joint code \(\mathbf{z}_{\text{j}}\) \[\phi_{\text{geom}}(\mathbf{z}_{\text{s}},\mathbf{z}_{\text{j}},\mathbf{x})=\hat{s}_{\mathbf{x}} \tag{1}\] and predicts a value \(\hat{s}_{\mathbf{x}}\) that indicates the distance to the surface of the object. 
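To make the decoder interface of Eq. (1) concrete, the following is a minimal PyTorch-style sketch of a network with the signature \(\phi_{\text{geom}}(\mathbf{z}_{\text{s}},\mathbf{z}_{\text{j}},\mathbf{x})=\hat{s}_{\mathbf{x}}\). The actual CARTO architecture and layer sizes are given in the paper's supplementary material (Sec. S.1.2), so the widths, depth, and activations below are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GeometryDecoder(nn.Module):
    """SDF decoder phi_geom(z_s, z_j, x) -> signed distance (Eq. 1).

    The concatenated shape code, joint code, and 3-D query point are mapped to a
    scalar signed distance. Layer widths are placeholders, not the paper's.
    """

    def __init__(self, d_shape=32, d_joint=16, hidden=512, n_layers=6):
        super().__init__()
        dims = [d_shape + d_joint + 3] + [hidden] * n_layers + [1]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(nn.ReLU())
        self.mlp = nn.Sequential(*layers)

    def forward(self, z_s, z_j, x):
        # z_s: (B, d_shape), z_j: (B, d_joint), x: (B, N, 3) query points.
        B, N, _ = x.shape
        codes = torch.cat([z_s, z_j], dim=-1).unsqueeze(1).expand(B, N, -1)
        return self.mlp(torch.cat([codes, x], dim=-1)).squeeze(-1)  # (B, N)

# Querying the decoder; surface points are those with |sdf| < eps.
decoder = GeometryDecoder()
z_s, z_j = torch.randn(2, 32), torch.randn(2, 16)
points = torch.rand(2, 1024, 3) * 2 - 1   # query grid in [-1, 1]^3
sdf = decoder(z_s, z_j, points)           # predicted signed distances
# Normals (Eq. 2) can be obtained by differentiating `sdf` w.r.t. `points`
# via autograd and normalizing the resulting gradients.
```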
For faster inference, we implement a multi-level refinement procedure [8]. We first sample query points on a coarse grid and refine them around points that have a predicted distance within half of the boundary to the next point. This step can be repeated multiple times to refine the object prediction up to a level \(l\). Eventually, we extract the surface of the objects by selecting all query points \(\mathbf{x}\) for which \(|\hat{s}_{\mathbf{x}}|<\epsilon\) holds. By taking the derivative \[\mathbf{n}_{\mathbf{x}}=\frac{\partial\phi_{\text{geom}}(\mathbf{z}_{\text{s}},\mathbf{z}_{\text{j}},\mathbf{x})}{\partial\mathbf{x}} \tag{2}\] and normalizing it we get the normal \(\hat{\mathbf{n}}_{\mathbf{x}}\) at each point \(\mathbf{x}\), which can then be used to project the points onto the surface of the object with \(\hat{\mathbf{x}}=\mathbf{x}-\hat{s}_{\mathbf{x}}\hat{\mathbf{n}}_{\mathbf{x}}\).

#### 3.2.2 Joint Decoder

As we represent the articulation state of the object implicitly through a joint code \(\mathbf{z}_{\text{j}}\), we additionally introduce an articulation state decoder \(\phi_{\text{joint}}\) to regress a discrete joint type \(\hat{jt}\in\{\text{prismatic},\text{revolute}\}\) and a continuous joint state \(\hat{q}\) from the joint code \[\phi_{\text{joint}}(\mathbf{z}_{\text{j}})=(\hat{jt},\hat{q}). \tag{3}\]

### 3.3 Training

#### 3.3.1 Decoder

We train our decoder in an auto-decoder fashion [28]: the latent shape and joint codes are treated as free variables that are optimized jointly with the decoder weights, i.e., we backpropagate the gradient all the way to the codes themselves and thus, the embedding spaces rearrange them accordingly. Similar to [24], during training we regularize the codes through minimizing the L2-norm \[\mathcal{L}_{\text{reg}}(\mathbf{z})=\|\mathbf{z}\|, \tag{4}\] where \(\mathbf{z}\) is either \(\mathbf{z}_{\text{s}}\) or \(\mathbf{z}_{\text{j}}\).

**The geometry decoder** described in Sec. 3.2.1 is trained on a set of query points \(\mathbf{x}\) close to the object surface sampled as in [28]. We define our reconstruction loss \(\mathcal{L}_{\text{rec}}\) as in [28] but use a leaky clamping function \[\text{clamp}_{l}(s|\delta,\alpha)=\begin{cases}s&|s|\leq\delta,\\ \alpha s&|s|>\delta\end{cases} \tag{5}\] which is conceptually similar to a leaky ReLU: instead of hard clamping values above a threshold \(\delta\), we multiply them by a small factor \(\alpha\). Initial testing revealed that this leads to more stable training. Our reconstruction loss at one query point \(\mathbf{x}\) is now given by \[\mathcal{L}_{\text{rec}}\left(\mathbf{z}_{\text{s}},\mathbf{z}_{\text{j}},\mathbf{x},s_{\mathbf{x}}\right)=\left|\text{clamp}_{l}(\phi_{\text{geom}}(\mathbf{z}_{\text{s}},\mathbf{z}_{\text{j}},\mathbf{x})|\delta,\alpha)-\text{clamp}_{l}(s_{\mathbf{x}}|\delta,\alpha)\right|, \tag{6}\] where \(s_{\mathbf{x}}\) is the ground truth distance to the surface.

**The joint decoder** introduced in Sec. 3.2.2 is trained jointly with the aforementioned geometry decoder. For the joint type loss \(\mathcal{L}_{jt}\), we use the cross entropy between the predicted joint type \(\hat{jt}\) and the ground truth \(jt\), and for the joint state loss \(\mathcal{L}_{q}\) the L2-norm between the predicted joint state \(\hat{q}\) and the ground truth \(q\).

**Joint Space Regularization**: One core contribution of our approach is how we impose structure on our joint code space during decoder training. Here, we enforce the same similarity between latent codes as their corresponding articulation states have.
A visualization of the underlying idea is shown in Fig. 3. Formally, given the joint codes \(\mathbf{z}_{\text{j}}^{k}\) and \(\mathbf{z}_{\text{j}}^{l}\) encoding two different articulation states \(k,l\in 1,\dots,N\), we define the similarity between them in latent space as \[\text{sim}_{\text{latent}}\left(\mathbf{z}_{\text{j}}^{k},\mathbf{z}_{\text{j}}^{l}\right)=\exp\left(-\frac{\|\mathbf{z}_{\text{j}}^{k}-\mathbf{z}_{\text{j}}^{l}\|}{\sigma}\right). \tag{7}\] Similarly, we define the respective similarity in real joint space, considering the joint types \(jt^{k}\) and \(jt^{l}\) and the joint states \(q^{k}\) and \(q^{l}\), through \[\text{sim}_{\text{real}}\left(\left(jt^{k},q^{k}\right),\left(jt^{l},q^{l}\right)\right)=\begin{cases}\exp\left(-\left(\frac{q^{k}-q^{l}}{\sigma_{jt}}\right)^{2}\right)&jt^{k}=jt^{l}\\ 0&jt^{k}\neq jt^{l},\end{cases} \tag{8}\] where \(\sigma_{jt}\) is a joint-type-specific scaling. By minimizing the L1-norm between both similarity measurements \[\mathcal{L}_{\text{sim}}\left(\mathbf{z}_{\text{j}}^{k},\mathbf{z}_{\text{j}}^{l}\right)=\left|\text{sim}_{\text{latent}}\left(\mathbf{z}_{\text{j}}^{k},\mathbf{z}_{\text{j}}^{l}\right)-\text{sim}_{\text{real}}\left(\left(jt^{k},q^{k}\right),\left(jt^{l},q^{l}\right)\right)\right| \tag{9}\] we enforce that the latent similarities are scaled like their real counterparts. We scale this formulation to all articulation states in the training set as described below. Calculating \(\text{sim}_{\text{real}}\) can be done once in a pre-processing step for all articulation state pairs \(k,l\in 1,\dots,N\), resulting in a matrix \(\mathbf{S}_{\text{real}}\in\mathbb{R}^{N\times N}\). Similarly, calculating all \(\text{sim}_{\text{latent}}\)-pairs can be efficiently implemented as a vector-vector product. We denote the resulting matrix as \(\mathbf{S}_{\text{latent}}\in\mathbb{R}^{N\times N}\). Eq. (9) now simplifies to \[\mathcal{L}_{\text{sim}}=\frac{|\mathbf{S}_{\text{latent}}-\mathbf{S}_{\text{real}}|}{N^{2}}. \tag{10}\] Through this efficient calculation, optimizing this loss term comes with almost no overhead during training. This concept of similarity can be extended to arbitrary kinematic graphs. **Pre-Training**: Before we start training our full decoder, we only optimize our joint codes \(\mathbf{z}_{\text{j}}^{n}\,\forall\,n\in 1,\dots,N\). The pre-training helps with learning the full decoder as our joint codes are already more structured and thus it is easier to learn the shape and joint code disentanglement. In the pre-training, we minimize \[\mathcal{L}_{\text{pre}}=\delta_{\text{reg},\mathbf{z}_{\text{j}},\text{pre}}\mathcal{L}_{\text{reg},\mathbf{z}_{\text{j}}}+\delta_{\text{sim},\text{pre}}\mathcal{L}_{\text{sim}}, \tag{11}\] where \(\mathcal{L}_{\text{reg},\mathbf{z}_{\text{j}}}\) is the default norm regularization from Eq. (4) and \(\mathcal{L}_{\text{sim}}\) was introduced in Eq. (10).
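To make the loss terms above concrete, the sketch below implements the leaky clamping of Eqs. (5)-(6) and the batched similarity regularizer of Eqs. (7)-(10) in PyTorch. The values of \(\delta\), \(\alpha\), \(\sigma\), and the per-type scalings are placeholders, not the settings of Tab. S.1.

```python
# Sketch of the leaky-clamped reconstruction loss (Eqs. 5-6) and the
# joint-space similarity regularizer (Eqs. 7-10). All scalars are
# illustrative placeholders, not the values reported in Tab. S.1.
import torch

def clamp_leaky(s, delta=0.1, alpha=0.01):
    # Eq. (5): pass values inside [-delta, delta], scale the rest by alpha.
    return torch.where(s.abs() <= delta, s, alpha * s)

def recon_loss(pred_sdf, gt_sdf, delta=0.1, alpha=0.01):
    # Eq. (6), averaged over a batch of query points.
    return (clamp_leaky(pred_sdf, delta, alpha)
            - clamp_leaky(gt_sdf, delta, alpha)).abs().mean()

def sim_loss(Z_j, jt, q, sigma_jt, sigma=1.0):
    # Z_j: (N, D_j) joint codes, jt: (N,) integer joint types,
    # q: (N,) joint states, sigma_jt: per-type scaling indexed by joint type.
    diff = Z_j[:, None, :] - Z_j[None, :, :]
    dist = torch.sqrt((diff ** 2).sum(-1) + 1e-12)     # pairwise code distances
    S_latent = torch.exp(-dist / sigma)                # Eq. (7)
    same_type = jt[:, None] == jt[None, :]
    dq = (q[:, None] - q[None, :]) / sigma_jt[jt][:, None]
    S_real = torch.where(same_type, torch.exp(-dq ** 2),
                         torch.zeros_like(dq))         # Eq. (8)
    return (S_latent - S_real).abs().mean()            # Eq. (10)

# Example: 6 articulation states, 2 joint types (0 = revolute, 1 = prismatic)
Z_j = torch.randn(6, 16, requires_grad=True)
jt = torch.tensor([0, 0, 0, 1, 1, 1])
q = torch.tensor([0.0, 45.0, 90.0, 0.0, 0.2, 0.4])
sigma_jt = torch.tensor([90.0, 0.5])    # placeholder per-type scalings
loss = sim_loss(Z_j, jt, q, sigma_jt)
loss.backward()                         # gradients flow into the joint codes
```

Because both similarity matrices are computed with a single batched operation, the regularizer adds essentially no training overhead, as noted above.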
**Loss Function**: Given an object \(x_{m,n}\), we express our full decoder loss as \[\mathcal{L}=\delta_{\text{reg},\mathbf{z}_{\text{s}}}\mathcal{L}_{\text{reg},\mathbf{z}_{\text{s}}}+\delta_{\text{reg},\mathbf{z}_{\text{j}}}\mathcal{L}_{\text{reg},\mathbf{z}_{\text{j}}}+\delta_{\text{rec}}\mathcal{L}_{\text{rec}}+\delta_{jt}\mathcal{L}_{jt}+\delta_{q}\mathcal{L}_{q}, \tag{12}\] where \(\mathcal{L}_{\text{reg},\mathbf{z}_{\text{s}}}\) and \(\mathcal{L}_{\text{reg},\mathbf{z}_{\text{j}}}\) are the shape and joint code regularization from Eq. (4), \(\mathcal{L}_{\text{rec}}\) is the reconstruction loss introduced in Eq. (6), and \(\mathcal{L}_{jt}\) and \(\mathcal{L}_{q}\) are the joint type and state loss. We jointly optimize \(\mathcal{L}\) for the latent shape and joint codes as well as the network parameters of the geometry decoder and joint decoder using ADAM [12] for 5000 epochs. Our new joint code regularizer loss \(\mathcal{L}_{\text{sim}}\) introduced in Eq. (10) is minimized at the end of each epoch, separately scaled by \(\delta_{\text{sim}}\). All \(\delta\) variables are scalars to balance the different loss terms and are reported in Tab. S.1. #### 3.3.2 Encoder Using [13] we generate a large-scale dataset in which we annotate each pixel with its respective ground truth value from the simulation as described in Sec. 3.1. For annotating the shape codes we directly use the results of our previous decoder training, whereas for the joint codes we use our inverse mapping explained in Sec. 3.3.3 to retrieve joint codes for arbitrarily sampled articulation states. #### 3.3.3 Inverse Joint Decoder To solve the inverse problem, given an articulation state for which we want to retrieve a joint code, we fit polynomial functions in the learnt joint code space. With the help of this mapping, we can retrieve arbitrary joint codes which can then be combined with a shape code to reconstruct objects in novel articulation states which have not been seen during the decoder training. Additionally, the mapping provides joint code training labels for the encoder. We describe the full mapping as a function \(\xi_{\text{code}}(jt,q)=\mathbf{z}_{\text{j}}\) that takes a joint type \(jt\) and joint state \(q\) as input and outputs a joint code \(\mathbf{z}_{\text{j}}\). We leverage the fact that after decoder training, we learned a joint code \(\mathbf{z}_{\text{j}}^{n}\) for each known training articulation state. We now define individual mappings for each joint type \(jt\) the following way. We treat each latent dimension \(d\) separately. For each dimension \(d\), we fit a polynomial function \(\xi_{\text{code}}^{jt,d}(q)\) of varying degree \(p\) through all point tuples \((q^{n},\mathbf{z}_{\text{j}}^{n}(d))\,\forall\,n\in 1,\dots,N\). The final function \[\xi_{\text{code}}(jt,q)=\begin{bmatrix}\xi_{\text{code}}^{jt,1}(q)\\ \vdots\\ \xi_{\text{code}}^{jt,D_{j}}(q)\end{bmatrix} \tag{13}\] is then given by evaluating the polynomials individually and stacking the results into a vector. The exact choice of \(p\) is not critical as long as the number of joint codes to fit is much larger than the polynomial degree, i.e., \(p\ll N\). Thus, we fix \(p=5\) for all of our experiments. A visualization of our learned latent joint space and the fitted polynomials is given in Fig. S.3a. ## 4 Experiments We conduct two main experiments, an object-centric canonical reconstruction task and a full scene reconstruction task.
The first experiment is to evaluate the performance of our newly introduced decoder while the second experiment highlights the advantages of our single-forward pass method compared to a two-stage approach. ### Object Set For both experiments, we use 3D models from the PartNet-Mobility [38] object set to generate a training and test data set. **Categories**: While PartNet-Mobility provides more than 2000 objects from 46 categories, as done in previous work [5, 9, 15, 24, 41] we only select a subset of all categories. From this subset of categories, we select objects with one fixed base part and one significant moving part (e.g. we filter out knobs, buttons etc.). To later create realistic room layouts, we further differentiate between three placement types for objects, stand-alone (SA), counter (C) and table-top (TT) objects. In Tab. 1 we list the number of objects per category we selected as well as the context in which they can be used. **Object Canonicalization**: When tackling the task of reconstructing objects in a canonical object frame, usually, objects are canonicalized such that they either fit into the unit cube or unit sphere. This helps with the stability of learning and simplifies hyperparameter tuning. This approach fails for articulated objects as their outer dimensions change depending on the joint state. Blindly rescaling an articulated object such that it fits inside the unit cube or unit sphere results in an inconsistent part scaling across different joint states. To mitigate this problem, [15] proposed the NAOCS-space. Following their approach, first, we bring an object in its closed state (i.e. the lower joint limit) and put it in a canonical orientation such that for all objects, Z is pointing upwards and X back, Y is given through a right-hand coordinate frame system. Different from [15], we rescale and translate the closed object such that it fits in the unit cube and then backwards apply that same rescaling and translation to all objects of the same instance independent of the joint state of the object. It is important to note that rescaling an articulated object has no impact on revolute joint states (in \(\deg\)), but prismatic joint states (in \(\mathrm{m}\)) which have to be rescaled accordingly. ### Canonical Reconstruction Task In our first experiment, we evaluate how well our decoders reconstruct the object's geometry and the articulation state. Thus, the task is not to reconstruct the object in the camera frame, but simply in its canonical frame. As described in Sec. 3.2.3, we will optimize the shape and joint code for each input SDF with ADAM [12] first jointly for 400 steps, \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Category} & Joint & \multirow{2}{*}{SA} & \multirow{2}{*}{C} & \multirow{2}{*}{TT} & \multirow{2}{*}{Train} & \multirow{2}{*}{Test} \\ & Type & & & & & \\ \hline Dishwasher & Rev. & ✗ & ✓ & ✗ & 18 & 5 \\ Laptop & Rev. & ✗ & ✗ & ✓ & 20 & 5 \\ Microwave & Rev. & ✓ & ✗ & ✓ & 10 & 5 \\ Oven & Rev. & ✓ & ✓ & ✗ & 7 & 3 \\ Refrigerator & Rev. & ✓ & ✓ & ✗ & 10 & 2 \\ Table & Pris. & ✓ & ✗ & ✗ & 19 & 5 \\ WashingMachine & Rev. & ✓ & ✓ & ✗ & 8 & 2 \\ \hline \multicolumn{6}{c}{\(SA=\textit{stand-alone}\), \(C=\textit{counter}\), \(\textit{TT}=\textit{table-top}\)} \\ \multicolumn{6}{c}{_Rev. = Revolute, \(\textit{Pris.}=\textit{Prismatic}\)_} \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of Selected Objects. 
We select a subset of the PartNet-Mobility [38] object set and report the amount of instances we selected per category for our training and test set. Our final object set has 92 objects instances for training and 25 for testing. then only the shape code for 100 steps and finally, only the joint code for 100 steps. **Dataset**: To generate our dataset for the canonical reconstruction task, we first apply the aforementioned canonicalization to each object from our object set described in Sec. 4.1. Here, the placement type does not matter. Then, we sample each object in 50 joint configurations uniformly spaced within the joint limits, make the meshes watertight using [6] and follow [28] to generate 100k ground truth SDF values. Lastly, we rescale the generated data by the largest extents across all meshes to a unit cube. As mentioned in Sec. 4.1 we also have to rescale prismatic joint states accordingly. While we do not consider this as a new dataset contribution, we make our generation code available to allow other researchers to adjust the selected object instances and categories and generate their own data. **Baselines and Ablations**: Throughout this experiment, we will compare against the state-of-the-art for category-level object reconstruction method _A-SDF_[24]. As _A-SDF_ is designed to work on a single category, first, to show that learning an implicit joint code does not have a negative impact rather than using the real joint state directly as input, we compare against _A-SDF_ directly by training _CARTO_ also only on a single category. Second, we will jointly train on all categories to highlight that _CARTO_ is able to generalize to a wide variety of categories and articulations using one model. Third, we additionally perform an ablation study to understand the importance of our similarity regularization introduced in Sec. 3.3.1. In this ablation study, we remove the pre-training step and the post-epoch step. We call this model _CARTO-No-Enf_. And fourth, we extend _A-SDF_ to also take the joint type as input which allows us to train it jointly on all categories. Please note that we neglect _A-SDF_s proposed test-time adaption (TTA) technique as in real applications it would not be practical to keep network weights of all different instances encountered. Results using TTA are reported in Sec. S.5.1. **Metrics**: To measure reconstruction quality, we report the bi-directional L2-Chamfer distance multiplied by 1000 between the ground truth points and the extracted points using the model's respective generation method. To quantify the articulation state prediction, we will report the joint type prediction accuracy as well as the joint state error measured in \(\mathrm{deg}\) or \(\mathrm{m}\) depending on the joint type for all correctly classified joint types. **Results**: The results for the canonical reconstruction task are shown in Tab. 2. While no method clearly outperforms the other, it is notable though that CARTO and _A-SDF_ trained on all categories are performing slightly better on average across all categories compared to our baselines. This shows that having disentangled joint and shape codes in CARTO can make the reconstruction category-agnostic. ### Full Pipeline Task In our second experiment, the full pipeline task, we want to investigate the advantages CARTO has over a two-stage approach. To that end, we set up two experiments, one on simulated data and one on real-world data. For the full pipeline experiment, we use the trained decoders from the previous experiment. 
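For reference, the reconstruction metric used in Sec. 4.2, the bi-directional L2-Chamfer distance scaled by 1000, can be sketched in a few lines of NumPy. The sketch below uses one common convention (mean squared nearest-neighbour distance in both directions); the exact convention of the evaluation code (squared vs. unsquared distances, sum vs. mean) and the point counts are assumptions for illustration.

```python
# Sketch of a bi-directional L2-Chamfer distance, reported as CD * 1000.
# Convention (squared distances, mean over both directions) is assumed here.
import numpy as np

def chamfer_l2(points_a, points_b):
    # points_a: (N, 3), points_b: (M, 3) surface samples
    d2 = ((points_a[:, None, :] - points_b[None, :, :]) ** 2).sum(-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

gt = np.random.rand(1024, 3)                    # placeholder ground-truth samples
pred = gt + 0.01 * np.random.randn(1024, 3)     # placeholder reconstruction
print(1000.0 * chamfer_l2(pred, gt))            # value reported as CD
```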
**Datasets**: We evaluate on two datasets, quantitatively on a synthetic dataset that aligns with our synthetic training dataset and qualitatively on a newly collected real-world dataset. _Synthetic Data_: For training our encoder, we use SimNet [13] to generate a large-scale dataset of indoor kitchen environments. We use the same articulated object instances from Tab. 1 that we also used to train our decoders. Unlike the previous experiment, the placement type of the object matters here. For each randomly sampled articulated object in the scene, we randomly sample a joint state within its joint limits as well as a scale from a pre-defined scale for each category. To get ground truth joint codes for sampled articulation states we use the method proposed in Sec. 3.3.3. After sampling a scene, we generate noisy stereo images as well as non-noisy depth images (only used for evaluation of the baseline). To generate our synthetic test dataset we follow the same procedure with the only exception that we use the defined test instances. _Real Data_: Additionally, we evaluate the performance on a real-world test dataset we collected. To this end, we select two real object instances from each of the following categories: knives, laptops, refrigerators, staplers, ovens, dishwashers, microwaves, as well as one storage furniture and one washing machine instance. We place these instances in common household environments. For each object, four different viewpoints were collected for each of four articulation states. We measure the real joint state and annotate the data using [30] with oriented 3D bounding boxes. In total, we collected 263 images. For collection we used a ZED 2 stereo camera. To get depth images we use state-of-the-art, off-the-shelf learned stereo depth methods to produce highly accurate depth images [31]. \begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{CD (\(\downarrow\))} & Joint State & Joint Type \\ & & Error (\(\downarrow\)) & Accuracy (\(\uparrow\)) \\ \hline A-SDF [24]\(\dagger\) & 1.437 & 11.337\({}^{\circ}\) (0.094m) & N/A \\ CARTO \(\dagger\) & 1.190 & 12.474\({}^{\circ}\) (0.081m) & N/A \\ A-SDF [24] & **0.934** & 16.139\({}^{\circ}\) (0.235m) & **0.962** \\ CARTO-No-Enf & 2.246 & 35.892\({}^{\circ}\) (0.104m) & 0.646 \\ CARTO & 1.192 & **11.512\({}^{\circ}\)** (0.141m) & 0.908 \\ \hline \hline \end{tabular} \end{table} Table 2: Decoder Optimization Results. Each object is sampled in 50 different joint states for training as well as for testing. We average over all instances from our seven categories here. For a category-level comparison see Tab. S.3. \(\dagger\)_means the model is trained only on one category, thus the joint type prediction is not applicable (NA). The joint state error mean is only reported across the revolute categories, as there is only one prismatic category (reported in brackets)._ _Comparison to other Datasets_: To the best of our knowledge, the closest works to CARTO's dataset are the RBO [19] and BMVC [22] datasets. Neither dataset provides large-scale synthetic stereo-RGB or RGB-D images, and they cover only half of the categories with readily available 3D furniture models from PartNet-Mobility. For a full comparison see Tab. S.2. **Baselines**: We set up two baselines using A-SDF [24] and follow their proposed method to reconstruct objects in the camera frame.
Since A-SDF assumes a given segmentation of the object as well as a pose that transforms the object from the camera frame to the object-centric frame, we compare against two versions of A-SDF: one where we use ground truth segmentation masks and poses, which we call _A-SDF-GT_, and one where we use our model to predict segmentation masks whose center we then use to query our pixel-level pose map. We call this variant simply _A-SDF_. In both cases we approximate normals using the depth image, use the segmentation masks to extract the corresponding object point cloud from the depth image, transform the point clouds into the canonical object frame using the predicted pose, create SDF values, optimize for the SDF values as done in Sec. 4.2, and eventually reproject the reconstruction into the camera frame using the same transformation. **Metrics**: We compare our reconstructions in the camera frame using two different metrics typically used for object pose prediction [35]. First, we compare the absolute error of the position and orientation by reporting the respective percentage below \(10^{\circ}10\mathrm{cm}\) and \(20^{\circ}30\mathrm{cm}\) combined. Second, we evaluate the average precision for various IOU-overlap thresholds (**IOU25** and **IOU50**) between the reconstructed bounding box and the ground truth bounding box. Both metrics serve as a proxy for articulation state and reconstruction quality. For evaluation results on these more fine-grained metrics, we refer to Sec. S.5.2. **Results**: In Tab. 3 we report results using the aforementioned metrics as well as a speed comparison of CARTO against the baselines. _Reconstruction_: As visible in Tab. 3a, _CARTO_ shows superior performance over both variants of _A-SDF_ for our full reconstruction task. Overall, the performance of all methods is lower compared to similar experiments on category-level rigid object detection [35]. This can be attributed to the fact that in our kitchen scenarios we deal with heavy occlusions due to many objects being placed under the counter. Taking the occlusions into consideration, it becomes clear that for _A-SDF_ it is very difficult to estimate the exact extents given only a front-showing partial point cloud of the object. Compared to that, _CARTO_ benefits from its single-shot encoding step as the whole image is taken into consideration. We show qualitative results on our synthetic and real-world data in Fig. 1 as well as in Sec. S.5.3, where we also compare against an RGB-D version of CARTO. _Detection Speed_: Aside from a lower pose and bounding box error, _CARTO_ processes frames faster than _A-SDF_. Tab. 3b shows a more than 60-fold reduction in inference time while still preserving the same level of detail. ## 5 Conclusion We presented a novel method to reconstruct multiple articulated objects in a scene in a category- and joint-agnostic manner from a single stereo image. For reconstruction we learn an SDF-based decoder and show the necessity of regularization to achieve good performance. Our full single-shot pipeline improves over current two-stage approaches in terms of 3D IoU and inference speed. **Limitations**: While CARTO is able to generalize to unseen instances, it still relies on a learned shape prior. Using test-time adaptation techniques as done in [24] helps mitigate this issue, but is not sufficient to deal with categorically different objects.
Additionally, while the single forward pass is fast, jointly optimizing for pose, scale, and codes as done in [8, 18] could further improve results at the cost of added execution time. Currently, CARTO is only trained on objects with a single joint. To extend CARTO to objects with an arbitrary number of joints, we must be able to calculate pairwise similarity between two object states. While not explored in this paper, CARTO introduces a framework for future research to pursue this question. A potential solution could leverage Eq. (8) and Hungarian matching of the cross-product of articulation states to obtain similarity measurements between arbitrary kinematic structures. Table 3: Full Scene Reconstruction Results. **Acknowledgements**: This work was partially funded by the Carl Zeiss Foundation with the ReScaLe project.
2306.07558
Some Properties Of Proximal Homotopy Theory
Nearness theory comes into play in homotopy theory because the notion of closeness between points is essential in determining whether two spaces are homotopy equivalent. While nearness theory and homotopy theory have different focuses and tools, they are intimately connected through the concept of a metric space and the notion of proximity between points, which plays a central role in both areas of mathematics. This manuscript investigates some concepts of homotopy theory in proximity spaces. Moreover, these concepts are taken into account in descriptive proximity spaces.
Melih Is, Ismet Karaca
2023-06-13T06:15:12Z
http://arxiv.org/abs/2306.07558v1
# Some properties of proximal homotopy theory ###### Abstract. Nearness theory comes into play in homotopy theory because the notion of closeness between points is essential in determining whether two spaces are homotopy equivalent. While nearness theory and homotopy theory have different focuses and tools, they are intimately connected through the concept of a metric space and the notion of proximity between points, which plays a central role in both areas of mathematics. This manuscript investigates some concepts of homotopy theory in proximity spaces. Moreover, these concepts are taken into account in descriptive proximity spaces. Key words and phrases:Proximity, descriptive proximity, homotopy, fibration, cofibration 2010 Mathematics Subject Classification: 54E05, 54E17, 14D06, 55P05 ## 1. Introduction Topological perspective first appears in the scientific works of Riemann and Poincare in the 19th century [18, 19]. The concept reveals that the definitions of topological space emerge either through Kuratowski's closure operator [2] or through the use of open sets. Given the Kuratowski's closure operator, there are many strategies and approaches that seem useful in different situations and are worth developing, as in nearness theory. Proximity spaces are created by reflection of the concept of being near/far on sets. Given a nonempty set \(X\), and any subsets \(E\), \(F\subset X\), we say that \(E\) is near \(F\) if \(E\cap F\neq\emptyset\). A method based on the idea of near sets is first proposed by Riesz, is revived by Wallace, and is axiomatically elaborated it by Efremovic [1, 20, 22]. Let \(X\) be a nonempty set. A proximity is a binary relation (actually a nearness relation) defined on subsets of \(X\) and generally denoted by \(\delta\). One can construct a topology on \(X\) induced by the pair \((X,\delta)\) using the closure operator (named a proximity space). Indeed, for any point \(x\in X\), if \(\{x\}\) is near \(E\), then \(x\in\overline{E}\). In symbols, if \(\{x\}\delta E\), then \(x\delta\overline{E}\). It should be noted that the notation \(x\delta E\) is sometimes used instead of \(\{x\}\delta E\) for abbreviation (in particular in a metric space). It appears that several proximities may correspond in this way to the same topology on \(X\). Moreover, several topological conclusions can be inferred from claims made about proximity spaces. The near set theory is reasonably improved by Smirnov's compactification, Leader's non-symmetric proximity, and Lodato's symmetric proximity [3, 4, 21]. Peters also contributes to the theory of nearness by introducing the concept of spatial nearness and descriptive nearness [9, 10]. In addition, the strong structure of proximity spaces stands out in the variety of application areas: In [6], it is possible to see the construction of proximity spaces in numerous areas such as cell biology, the topology of digital images, visual marketing, and so on. In a broader context, the application areas of near spaces are listed along with the history of the subject in [11]. According to this, some near set theory-related topics are certain engineering problems, image analysis, and human perception. The main subject of this article, algebraic topology approaches in proximity spaces, is a work in progress in the literature. Mapping spaces, one of the fundamental concepts in homotopy theory, is examined in proximity spaces in [8]. The proximal setting of the notion fibration is first defined in [17]. 
Peters and Vergili have recently published interesting research on descriptive proximal homotopy, homotopy cycles, path cycles, and Lusternik-Schnirelmann theory of proximity spaces [14, 15, 16, 17]. This paper is primarily concerned with the theory of proximal homotopy and is organized as follows. In Section 2, we discuss the general properties of proximity and descriptive proximity spaces. Section 3 covered 4 main topics in proximity spaces: Mapping spaces, covering spaces, fibrations, and cofibrations. They provide different types of examples and frequently used algebraic topology results in proximal homotopy cases. Next, the descriptive proximal homotopy theory, which is handled in Section 5, discusses the ideas from Section 3.3 and illustrates them with examples by using feature vectors as color scales. Finally, the last section establishes the direction for future works by clearly emphasizing the application areas of homotopy theory. ## 2. Preliminaries Before proceeding with the main results, it is critical to remember the fundamental characteristics of proximity and descriptive proximity spaces. ### On Proximity Spaces Consider a pseudo-metric space \((X,d)\). A binary relation \(\delta\) defined by \[{}^{\prime\prime}E\delta F \Leftrightarrow D(E,F)=0^{\prime\prime}\] satisfies \[\begin{array}{ccccc}\textbf{(a)}&E\delta F&\Rightarrow&F\delta E,\\ \textbf{(b)}&(E\cup F)\delta G&\Leftrightarrow&E\delta G\ \vee\ F\delta G, \\ \textbf{(c)}&E\delta F&\Rightarrow&E\neq\emptyset\ \wedge\ F\neq\emptyset, \\ \textbf{(d)}&E\underline{\delta}F&\Rightarrow&\exists G\subset X:E\underline{ \delta}G\ \wedge\ (X-G)\underline{\delta}F,\\ \textbf{(e)}&E\cap F\neq\emptyset&\Rightarrow&E\delta F\end{array}\] for \(D(E,F)=\inf\{d(x_{1},x_{2}):x_{1}\in E,x_{2}\in F\}\)[1, 5, 21]. \(\delta\) is a nearness relation and \(E\delta F\) is read as "\(E\)_is near \(F\)_". Otherwise, the notation \(E\underline{\delta}F\) means that "\(E\) is _far from \(F\)_". **Definition 2.1**.: [1, 5, 21] The nearness relation \(\delta\) for the subsets of \(X\) is said to be an _Efremovic proximity_ (simply denoted by _EF-proximity_ or _proximity_) provided that \(\delta\) satisfies **(a)**-**(e)**. \((X,\delta)\) is said to be an _EF-proximity_ (or _proximity_) _space_. As an example of a proximity space, the _discrete proximity_\(\delta\) on a (nonempty) set \(X\) is defined by "\(E\delta F\ \Leftrightarrow\ E\cap F\neq\emptyset\)" for \(E\), \(F\subset X\). Also, the _indiscrete proximity_\(\delta^{{}^{\prime}}\) on a (nonempty) set \(X\) is given by \(E\delta^{{}^{\prime}}F\) for any nonempty subsets \(E\) and \(F\) in \(X\). A subset \(E\) of \(X\) with a proximity \(\delta\) is a closed set if "\(x\delta E\ \Rightarrow\ x\in E\)". The converse is also valid. Therefore, given a proximity \(\delta\) on \(X\), a topology \(\tau(\delta)\) is defined by the family of complements of all closed sets via Kuratowski closure operator [5]. **Theorem 2.2**.: _[_5_]_ _For a proximity \(\delta\) and a topology \(\tau(\delta)\) on a set \(X\), we have that the closure \(\overline{E}\) coincides with \(\{x:x\delta E\}\)._ Given any proximities \(\delta\) and \(\delta^{{}^{\prime}}\) on respective sets \(X\) and \(X^{{}^{\prime}}\), a map \(h\) from \(X\) to \(X^{{}^{\prime}}\) is called _proximally continuous_ if "\(E\delta F\ \Rightarrow\ h(E)\delta^{{}^{\prime}}h(F)\)" for \(E\), \(F\subset X\)[1, 21]. We denote a proximally continuous map by "pc-map". 
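Since the pseudo-metric proximity \(D(E,F)=0\) is the model example used throughout, a small numerical illustration may be helpful. The Python sketch below checks nearness for finite subsets of \([0,1]\); the finite samples and the numerical tolerance are simplifications for illustration only, not part of the theory.

```python
# Illustration of the metric proximity: E delta F  <=>  D(E, F) = 0,
# checked here for finite subsets of [0, 1] up to a numerical tolerance.
import numpy as np

def gap(E, F):
    # D(E, F) = inf{ d(x, y) : x in E, y in F } for finite subsets of R
    E, F = np.asarray(E, float), np.asarray(F, float)
    return np.abs(E[:, None] - F[None, :]).min()

def near(E, F, tol=1e-9):
    return gap(E, F) <= tol        # E delta F

E = np.linspace(0.0, 0.5, 6)       # samples of [0, 0.5]
F = np.linspace(0.5, 1.0, 6)       # samples of [0.5, 1]
G = np.linspace(0.6, 1.0, 5)       # samples of [0.6, 1]
print(near(E, F))                  # True:  D(E, F) = 0 (they share 0.5)
print(near(E, G))                  # False: D(E, G) = 0.1, so E is far from G

h = lambda A: np.asarray(A) ** 2   # h(x) = x^2 preserves nearness here,
print(near(h(E), h(F)))            # consistent with h being a pc-map: True
```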
Given a proximity \(\delta\) on \(X\) and a subset \(E\subset X\), a _subspace proximity_\(\delta_{E}\) is defined on the subsets of \(E\) as follows [5]: "\(E_{1}\delta E_{2}\ \Leftrightarrow\ E_{1}\delta_{E}E_{2}\)" for \(E_{1}\), \(E_{2}\subset E\). Let \((X,\delta)\) be a proximity space and \((E,\delta_{E})\) a subspace proximity. A pc-map \(k:(X,\delta)\rightarrow(E,\delta_{E})\) is a _proximal retraction_ provided that \(k\circ j\) is an identity map on \(1_{E}\), where \(j:(E,\delta_{E})\rightarrow(X,\delta)\) is an inclusion map. **Lemma 2.3**.: _[_14_]__(Gluing Lemma) Assume that \(f_{1}:(X^{{}^{\prime}},\delta^{{}^{\prime}}_{1})\rightarrow(Y^{{}^{\prime}}, \delta^{{}^{\prime}}_{2})\) and \(f_{2}:(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{1})\rightarrow(Y^{{} ^{\prime}},\delta^{{}^{\prime}}_{2})\) are pc-maps with the property that they agree on the intersection of \(X\) and \(X^{{}^{\prime\prime}}\). Then the map \(f_{1}\cup f_{2}:(X^{{}^{\prime}}\cup X^{{}^{\prime\prime}},\delta)\rightarrow(Y ^{{}^{\prime}},\delta^{{}^{\prime}}_{2})\), defined by \(f_{1}\cup f_{2}(s)=\begin{cases}f_{1}(s),&s\in X^{{}^{\prime}}\\ f_{2}(s),&s\in X^{{}^{\prime\prime}}\end{cases}\) for any \(s\in X^{{}^{\prime}}\cup X^{{}^{\prime\prime}}\), is a pc-map._ We say that \(h\) is a _proximity isomorphism_ provided that \(h\) is a bijection and each of \(h\) and \(h^{-1}\) is pc-map [5]. According to this, \((X,\delta)\) and \((X^{{}^{\prime}},\delta^{{}^{\prime}})\) are said to be _proximally isomorphic spaces_. Another important proximity relation is given on the subsets of the cartesian product of two proximity spaces as follows [3]: Let \(\delta\) and \(\delta^{{}^{\prime}}\) be any proximities on respective sets \(X\) and \(X^{{}^{\prime}}\). For any subsets \(E_{1}\times E_{2}\) and \(F_{1}\times F_{2}\) of \(X\times X^{{}^{\prime}}\), \(E_{1}\times E_{2}\)_is near_\(F_{1}\times F_{2}\) if \(E_{1}\delta F_{1}\) and \(E_{2}\delta^{{}^{\prime}}F_{2}\). **Definition 2.4**.: _[_14_]_ _Given two pc-maps \(h_{1}\) and \(h_{2}\) from \(X\) to \(X^{{}^{\prime}}\), if there is a pc-map \(F\) from \(X\times I\) to \(X^{{}^{\prime}}\) with the properties \(F(x,0)=h_{1}(x)\) and \(F(x,1)=h_{2}(x)\), then \(h_{1}\) and \(h_{2}\) are called proximally homotopic maps._ The map \(F\) in Definition 2.4 is said to be a _proximal homotopy between \(h\) and \(h^{{}^{\prime}}\)_. We simply denote a proximal homotopy by "prox-hom". Similar to topological spaces, prox-hom is an equivalence relation on proximity spaces. Let \(\delta\) be a proximity on \(X\) and \(E\subset X\). \(E\) is called a \(\delta-\)_neighborhood of \(F\)_, denoted by \(F\ll_{\delta}E\), provided that \(F\underline{\delta}(X-E)\)[5]. The proximal continuity of any function \(h:(X,\delta)\rightarrow(X^{{}^{\prime}},\delta^{{}^{\prime}})\) can also be expressed as \[{}^{\prime\prime}E\ll_{\delta^{{}^{\prime}}}F\ \Rightarrow\ h^{-1}(E)\ll_{ \delta}h^{-1}(F)^{\prime\prime}\] for any \(E\), \(F\subset X^{\prime}\). **Theorem 2.5**.: _[_5_]_ _Let \(E_{k}\ll_{\delta}F_{k}\) for \(k=1,\cdots,r\). Then_ \[\bigcap_{k=1}^{r}E_{k}\ll_{\delta}\bigcap_{k=1}^{r}F_{k}\quad\text{and}\quad \bigcup_{k=1}^{r}E_{k}\ll_{\delta}\bigcup_{k=1}^{r}F_{k}.\] **Definition 2.6**.: [14] For any two elements \(x_{1}\) and \(x_{2}\) in \(X\) with a proximity \(\delta\), a _proximal path from \(x_{1}\) to \(x_{2}\)_ in \(X\) is a pc-map \(h\) from \(I=[0,1]\) to \(X\) for which \(h(0)=x_{1}\) and \(h(1)=x_{2}\). 
The proximal continuity of the proximal path \(h:I\to X\) in Definition 2.6 means that "\(D(E,F)=0\ \Rightarrow\ h(E)\delta h(F)\)" for \(E\), \(F\in I\). Recall that \(X\) is a _connected proximity space_ if and only if for all nonempty \(E\), \(F\in\mathcal{P}(X)\), \(E\cup F=X\) implies that \(E\delta F\)[7]. Let \(\delta\) be a proximity on \(X\). Then \(X\) is called a _path-connected proximity space_ if, for any points \(x_{1}\) and \(x_{2}\) in \(X\), there exists a proximal path from \(x_{1}\) to \(x_{2}\) in \(X\). **Lemma 2.7**.: _Proximal path-connectedness implies proximal connectedness as in the same as topological spaces._ Proof.: Let \(\delta\) be a path-connected proximity on \(X\). Suppose that \((X,\delta)\) is not proximally connected. Then there exists two nonempty subsets \(E\), \(F\) in \(X\) such that \(E\cup F=X\) and \(E\underline{\delta}F\). Since \(X\) is proximally path-connected, there is a pc-map \(h:[0,1]\to X\) with \(h(0)=E\) and \(h(1)=F\). Consider the subsets \(h^{-1}(E)\) and \(h^{-1}(F)\in I\). They are nonempty sets because \(0\in h^{-1}(E)\) and \(1\in h^{-1}(F)\). Their union is \([0,1]\), and by the proximal continuity of \(h\), \(h^{-1}(E)\underline{\delta}h^{-1}(F)\). This contradicts with the fact that \([0,1]\) is proximally connected. Finally, \(X\) is proximally connected. **Theorem 2.8**.: _Proximal path-connectedness coincides with proximal connectedness._ Proof.: Given a proximity \(\delta\) on \(X\), by Lemma 2.7, it is enough to prove that any connected proximity space is a path-connected proximity space. Suppose that \(X\) is not a path-connected proximity space. Then any map \(h:([0,1],\delta^{{}^{\prime}})\to(X,\delta)\) with \(h(0)=x\) and \(h(1)=y\) is not proximally continuous, i.e., if \(E\delta^{{}^{\prime}}F\) for all \(E\), \(F\in[0,1]\), then \(h(E)\underline{\delta}h(F)\). Take \(E=\{0\}\subset I\) and \(F=(0,1]\subset I\). Since \(D(E,F)=\inf\{d(0,z):z\in F\}=0\), we have that \(E\delta F\). It follows that \(h(E)=\{x\}\) is not near \(h(F)=X\setminus\{x\}\). On the other hand, \[h(E)\cup h(F)=\{x\}\cup X\setminus\{x\}=X.\] Thus, \(X\) is not proximally connected and this is a contradiction. ### On Descriptive Proximity Spaces Assume that \(X\) is a nonempty set and \(x\in X\). Consider the set \(\Phi=\{\phi_{1},\cdots,\phi_{m}\}\) of maps (generally named as probe functions) \(\phi_{j}:X\to\mathbb{R}\), \(j=1,\cdots,m\), such that \(\phi_{j}(x)\) denotes a feature value of \(x\). Let \(E\subset X\). Then the set of descriptions of a point \(e\) in \(E\), denoted by \(\mathcal{Q}(E)\), is given by the set \(\{\Phi(e):e\in E\}\), where \(\Phi(e)\) (generally called a feature vector for \(e\)) equals \((\phi_{1}(e),\cdots,\phi_{m}(e))\). For \(E\), \(F\subset X\), the binary relation \(\delta_{\Phi}\) is defined by \[{}^{\prime\prime}E\delta_{\Phi}F\ \Leftrightarrow\ \mathcal{Q}(E)\cap\mathcal{Q}(F) \neq\emptyset^{\prime\prime}, \tag{1}\] and \(E\delta_{\Phi}F\) is read as "\(E\) is _descritively near_\(F\)" [9, 10, 12]. Also, \(E\delta_{\Phi}F\) is often used to state "\(E\) is _descritively far from_\(F\)". The _descriptive intersection of \(E\) and \(F\) and the _descriptive union of \(E\) and \(F\)_ are defined by \[E\bigcap_{\Phi}F=\{x\in E\cup F:\Phi(x)\in\mathcal{Q}(E)\ \wedge\ \Phi(x)\in \mathcal{Q}(F)\},\] and \[E\bigcup_{\Phi}F=\{x\in E\cup F:\Phi(x)\in\mathcal{Q}(E)\ \vee\ \Phi(x)\in \mathcal{Q}(F)\},\] respectively [12]. 
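The definitions above are easy to test on a toy example. The Python sketch below assumes a single made-up probe function (a colour label per point, in the spirit of the colour-scale feature vectors used later in the paper); it checks descriptive nearness via \(\mathcal{Q}(E)\cap\mathcal{Q}(F)\neq\emptyset\) and computes the descriptive intersection.

```python
# Illustration of descriptive nearness: E delta_Phi F iff the description
# sets Q(E) and Q(F) intersect. The probe function "colour" is a made-up
# example; any tuple of probe functions Phi = (phi_1, ..., phi_m) works alike.
colour = {
    'a': 'red', 'b': 'red', 'c': 'blue',
    'd': 'blue', 'e': 'green', 'f': 'red',
}

def Q(E):
    # set of descriptions (feature vectors) of the points of E
    return {(colour[x],) for x in E}

def descriptively_near(E, F):
    return len(Q(E) & Q(F)) > 0

def descriptive_intersection(E, F):
    qe, qf = Q(E), Q(F)
    return {x for x in set(E) | set(F)
            if (colour[x],) in qe and (colour[x],) in qf}

E, F = {'a', 'c'}, {'e', 'f'}
print(descriptively_near(E, F))        # True: 'a' and 'f' share the description ('red',)
print(descriptive_intersection(E, F))  # {'a', 'f'}
```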
A binary relation \(\delta_{\Phi}\) defined by (1) [6] satisfies \[\textbf{(f)}\qquad\quad E\delta_{\Phi}F \Rightarrow E\neq\emptyset\ \wedge F\neq\emptyset,\] \[\textbf{(g)}\qquad\quad E\bigcap_{\Phi}F\neq\emptyset \Rightarrow E\delta_{\Phi}F,\] \[\textbf{(h)}\qquad\quad E\bigcap_{\Phi}F\neq\emptyset \Rightarrow F\bigcap_{\Phi}E,\] \[\textbf{(i)}\qquad\quad E\delta_{\Phi}(F\cup G) \Leftrightarrow E\delta_{\Phi}F\ \vee\ E\delta_{\Phi}G,\] \[\textbf{(k)}\qquad\quad E\underline{\delta_{\Phi}}F \Rightarrow \exists G\subset X:E\underline{\delta_{\Phi}}G\ \wedge\ (X-G)\underline{\delta_{\Phi}}F.\] \(\delta_{\Phi}\) is a descriptive nearness relation. **Definition 2.9**.: [6] The nearness relation \(\delta_{\Phi}\) for the subsets of \(X\) is said to be an _descriptive Efremovic proximity_ (simply denoted by _descriptive EF-proximity_ or _descriptive proximity_) if \(\delta_{\Phi}\) satisfies **(f)**-**(k)**. \((X,\delta_{\Phi})\) is said to be a _descriptive EF-proximity_ (or _descriptive proximity_) _space_. A map \(h:(X,\delta_{\Phi})\rightarrow(X,\delta^{{}^{\prime}}_{\Phi})\) is called _descriptive proximally continuous_ provided that "\(E\delta_{\Phi}F\ \Rightarrow\ h(E)\delta^{{}^{\prime}}_{\Phi}h(F)\)" for \(E\), \(F\subset X\)[13, 14]. We denote a descriptive proximally continuous map by "dpc-map". Let \(\delta_{\Phi}\) be a descriptive proximity on \(X\), and \(E\subset X\) a subset. Then a _descriptive subspace proximity_\(\delta^{E}_{\Phi}\) is defined on the subsets of \(E\) as follows: \[{}^{\prime\prime}E_{1}\delta_{\Phi}E_{2}\ \Leftrightarrow\ E_{1}\delta^{E}_{ \Phi}E^{\prime\prime}_{2}\] for \(E_{1}\), \(E_{2}\subset E\). Given a descriptive proximity \(\delta_{\Phi}\) on \(X\), a descriptive subspace proximity \((E,\delta^{E}_{\Phi})\), and the inclusion \(j:(E,\delta^{E}_{\Phi})\rightarrow(X,\delta_{\Phi})\), a dpc-map \(k:(X,\delta_{\Phi})\rightarrow(E,\delta^{E}_{\Phi})\) is called a _descriptive proximal retraction_ if \(k\circ j=1_{E}\). **Lemma 2.10**.: _[_14_]__(Gluing Lemma) Assume that \(f_{1}:(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi_{1}})\rightarrow(Y^{{}^{ \prime}},\delta^{{}^{\prime}}_{\Phi_{2}})\) and \(f_{2}:(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi_{1}})\rightarrow(Y ^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi_{2}})\) are two dpc-maps with the property that they agree on the intersection of \(X^{{}^{\prime}}\) and \(X^{{}^{\prime\prime}}\). Then the map \(f_{1}\cup f_{2}\) from \((X^{{}^{\prime}}\cup X^{{}^{\prime\prime}},\delta_{\Phi})\) to \((Y^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi_{2}})\), defined by \(f_{1}\cup f_{2}(s)=\begin{cases}f_{1}(s),&s\in X^{{}^{\prime}}\\ f_{2}(s),&s\in X^{{}^{\prime\prime}}\end{cases}\) for any \(s\in X^{{}^{\prime}}\cup X^{{}^{\prime\prime}}\), is a dpc-map._ \(h\) is a _descriptive proximity isomorphism_ if \(h\) is a bijection and each of \(h\) and \(h^{-1}\) is dpc-map [5]. Hence, \((X,\delta_{\Phi})\) and \((X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\) are called _descriptive proximally isomorphic spaces_. A descriptive proximity relation on the cartesian product of descriptive proximity spaces is defined as follows [3]: Assume that \(\delta_{\Phi}\) and \(\delta^{{}^{\prime}}_{\Phi}\) are any descriptive proximities on \(X\) and \(X^{{}^{\prime}}\), respectively. 
\(E\delta_{\Phi}F\) and \(E^{{}^{\prime}}\delta^{{}^{\prime}}_{\Phi}F^{{}^{\prime}}\) implies that \(E\times E^{{}^{\prime}}\) is _descriptively near_\(F\times F^{{}^{\prime}}\), where \(E\times E^{{}^{\prime}}\) and \(F\times F^{{}^{\prime}}\) are any subsets of \(X\times X^{{}^{\prime}}\). **Definition 2.11**.: [14] Let \(h_{1}\), \(h_{2}:(X,\delta_{\Phi})\rightarrow(X^{{}^{\prime}},\delta_{\Phi}^{{}^{\prime}})\) be any map. Then \(h_{1}\) and \(h_{2}\) are said to be _descriptive proximally homotopic maps_ provided that there exists a dpc-map \(G:X\times I\to X^{{}^{\prime}}\) with \(G(x,0)=h_{1}(x)\) and \(G(x,1)=h_{2}(x)\). In Definition 2.11, \(G\) is a _descriptive proximal homotopy between \(h_{1}\) and \(h_{2}\)_. We simply denote a descriptive proximal homotopy by "dprox-hom". Given a descriptive proximity \(\delta_{\Phi}\) on \(X\) and a subset \(F\subset X\), \(F\) is said to be a \(\delta_{\Phi}-\)_neighborhood of \(E\)_, denoted by \(E\ll_{\delta_{\Phi}}F\), if \(E\underline{\delta_{\Phi}}(X-F)\)[13]. **Theorem 2.12**.: _[_5_]_ _Let \(E_{j}\ll_{\delta_{\Phi}}F_{j}\) for \(j=1,\cdots,m\). Then_ \[\bigcap_{j=1}^{m}E_{j}\ll_{\delta_{\Phi}}\bigcap_{j=1}^{m}F_{j}\quad\text{and }\quad\bigcup_{j=1}^{m}E_{j}\ll_{\delta_{\Phi}}\bigcup_{j=1}^{m}F_{j}.\] **Definition 2.13**.: [14] Let \(x_{1}\) and \(x_{2}\) be any two elements in \(X\) with a descriptive proximity \(\delta_{\Phi}\). Then a _descriptive proximal path from \(x_{1}\) to \(x_{2}\)_ in \(X\) is a dpc-map \(h\) from \(I=[0,1]\) to \(X\) for which \(h(0)=x_{1}\) and \(h(1)=x_{2}\). In Definition 2.13, the fact \(h:I\to X\) is descriptive proximally continuous means that "\(D(E,F)=0\ \Rightarrow\ h(E)\delta_{\Phi}h(F)\)" for \(E\), \(F\in I\). A descriptive proximity space \((X,\delta_{\Phi})\) is _connected_ if and only if for all nonempty \(E\), \(F\in\mathcal{P}(X)\), \(E\cup F=X\) implies that \(E\delta_{\Phi}F\)[7]. A descriptive proximity space \((X,\delta_{\Phi})\) is _path-connected_ if, for any points \(x_{1}\) and \(x_{2}\) in \(X\), there exists a descriptive proximal path from \(x_{1}\) to \(x_{2}\) in \(X\). **Theorem 2.14**.: _In a descriptive proximity space, path-connectedness coincides with connectedness._ Proof.: Follow the method in the proof of Theorem 2.8. ## 3. Homotopy Theory on Proximity Spaces This section, one of the main parts (Section 3 and Section 4) of the paper, examines the projection of the homotopy theory elements in parallel with the proximity spaces. First, we start with the notion of proximal mapping spaces. Then we have proximal covering spaces. The last two parts are related to proximal fibrations and its dual notion of proximal cofibrations. Results on these four topics that we believe will be relevant to future proximity space research are presented. ### Proximal Mapping Spaces The work of mapping spaces in nearness theory starts with [8] and is still open to improvement. Note that the study of discrete invariants of function spaces is essentially homotopy theory in algebraic topology, and recall that depending on the nature of the spaces, it may be useful to attempt to impose a topology on the space of continuous functions from one topological space to another. One of the best-known examples of this is the compact-open topology. **Definition 3.1**.: Let \(\delta_{1}\) and \(\delta_{2}\) be two proximities on \(X\) and \(Y\), respectively. 
The proximal mapping space \(Y^{X}\) is defined as \(\{\alpha:X\to Y\ |\ \alpha\) is a pc-map\(\}\) having the following proximity relation \(\delta\) on itself: Let \(E\), \(F\subset X\) and \(\{\alpha_{i}\}_{i\in I}\) and \(\{\beta_{j}\}_{j\in J}\) be any subsets of pc-maps in \(Y^{X}\). We say that \(\{\alpha_{i}\}_{i\in I}\delta\{\beta_{j}\}_{j\in J}\) if the fact \(E\delta_{1}F\) implies that \(\alpha_{i}(E)\delta_{2}\beta_{j}(F)\). **Example 3.2**.: Consider the set \(X=\{a,b,c,d,e,f,g,h\}\) of cells in Figure 3.1 with the proximity \(\delta\). Define three proximal paths \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\in X^{I}\) by \[\alpha_{1}:a \mapsto b\mapsto c\mapsto d\mapsto e\mapsto f\mapsto g\mapsto h,\] \[\alpha_{2}:h \mapsto a\mapsto b\mapsto c\mapsto d\mapsto e\mapsto f\mapsto g,\] \[\alpha_{3}:a \mapsto h\mapsto g\mapsto f\mapsto e\mapsto d\mapsto c\mapsto b.\] For all \(t\in I\), \(\alpha_{1}(t)\delta\alpha_{2}(t)\). This means that \(\alpha_{1}\) is near \(\alpha_{2}\). On the other hand, for \(t\in[2/8,3/8]\), we have that \(\alpha_{1}(t)=c\) and \(\alpha_{3}(t)=g\), that is, \(\alpha_{1}\) and \(\alpha_{3}\) are not near in \(X\). **Definition 3.3**.: For the proximal continuity of a map \(H:(X,\delta_{1})\to(Z^{Y},\delta^{{}^{\prime}})\), we say that the fact \(E\delta_{1}F\) implies that \(H(E)\delta^{{}^{\prime}}H(F)\) for any subsets \(E\), \(F\subset X\). **Proposition 3.4**.: _Let \(\delta_{1}\), \(\delta_{2}\), and \(\delta_{3}\) be any proximities on \(X\), \(Y\), and \(Z\), respectively. Then the map \(G:(X\times Y,\delta^{{}^{\prime\prime}})\to(Z,\delta_{3})\) is pc-map if and only if the map \(H:(X,\delta_{1})\to(Z^{Y},\delta^{{}^{\prime}})\) defined by \(H(E)(F):=G(E\times F)\) is pc-map for \(E\subset X\) and \(F\subset Y\)._ Proof.: Assume that \(E_{1}\delta_{1}F_{1}\) for \(E_{1}\), \(F_{1}\subset X\). If \(E_{2}\delta_{2}F_{2}\) for \(E_{2}\), \(F_{2}\subset Y\), then we find \((E_{1}\times E_{2})\delta^{{}^{\prime\prime}}(F_{1}\times F_{2})\). Since \(G\) is a pc-map, we get \(G(E_{1}\times E_{2})\delta_{3}G(F_{1}\times F_{2})\). It follows that \(H(E_{1})(E_{2})\delta_{3}H(F_{1})(F_{2})\). This shows that \(H(E_{1})\delta^{{}^{\prime}}H(F_{1})\), i.e., \(H\) is a pc-map. Conversely, assume that \((E_{1}\times E_{2})\delta^{{}^{\prime\prime}}(F_{1}\times F_{2})\). Then we get \(E_{1}\delta_{1}F_{1}\) in \(X\) and \(E_{2}\delta_{2}F_{2}\) in \(Y\). Since \(H\) is a pc-map, we get \(H(E_{1})\delta^{{}^{\prime}}H(F_{1})\). So, we have that \(H(E_{1})(E_{2})\delta_{3}H(F_{1})(F_{2})\). This leads to the fact that \(G(E_{1}\times E_{2})\delta^{{}^{\prime\prime}}G(F_{1}\times F_{2})\), namely that, \(G\) is a pc-map. **Theorem 3.5**.: _Let \(\delta_{1}\), \(\delta_{2}\), and \(\delta_{3}\) be any proximities on \(X\), \(Y\), and \(Z\), respectively. Then \((Z^{X\times Y},\delta_{4})\) and \(((Z^{Y})^{X},\delta_{5})\) are proximally isomorphic spaces._ Figure 3.1. The picture represented by \(X=\{a,b,c,d,e,f,g,h\}\). Proof.: Define a bijective map \(f:Z^{X\times Y}\to(Z^{Y})^{X}\) by \(f(G)=H\). For any pc-maps \(G\), \(G^{{}^{\prime}}\subset Z^{X\times Y}\) such that \(G\delta_{4}G^{{}^{\prime}}\), we have that \(f(G)\delta_{5}f(G^{{}^{\prime}})\). Indeed, for \(E_{1}\times E_{2}\), \(F_{1}\times F_{2}\subset X\times Y\), we have that \(G(E_{1}\times E_{2})\delta_{3}G(F_{1}\times F_{2})\). This means that \(H(E_{1})(E_{2})\delta_{3}H(F_{1})(F_{2})\). Another saying, we find \(H\delta_{5}H^{{}^{\prime}}\). Therefore, \(f\) is a pc-map. 
For the proximal continuity of \(f^{-1}\), assume that \(H\delta_{5}H^{{}^{\prime}}\). Then we have that \(H(E_{1})\) and \(H^{{}^{\prime}}(F_{1})\) are near in \(Z^{Y}\) for \(E_{1}\), \(F_{1}\subset X\). If \(E_{2}\delta_{2}F_{2}\) in \(Y\), then we have that \(H(E_{1})(E_{2})\delta_{3}H^{{}^{\prime}}(F_{1})(F_{2})\). It follows that \(G(E_{1}\times E_{2})\delta_{3}G^{{}^{\prime}}(F_{1}\times F_{2})\). Thus, we obtain that \(G\delta_{4}G^{{}^{\prime}}\), which means that \(f^{-1}(H)\delta_{4}f^{-1}(H^{{}^{\prime}})\). Finally, \(f\) is a proximity isomorphism. **Theorem 3.6**.: _Let \(\delta_{1}\), \(\delta_{2}\), and \(\delta_{3}\) be any proximities on \(X\), \(Y\), and \(Z\), respectively. Then \(((Y\times Z)^{X},\delta_{4})\) and \((Y^{X}\times Z^{X},\delta_{5})\) are proximally isomorphic spaces._ Proof.: The proximal isomorphism is given by the map \[f:((Y\times Z)^{X},\delta_{4})\to(Y^{X}\times Z^{X},\delta_{5})\] with \(f(\alpha)=(\pi_{1}\circ\alpha,\pi_{2}\circ\alpha)\), where \(\pi_{1}\) and \(\pi_{2}\) are the projection maps from \(Y\times Z\) to the respective spaces. For any \(\{\alpha_{i}\}_{i\in I}\), \(\{\beta_{j}\}_{j\in J}\subset(Y\times Z)^{X}\) such that \(\{\alpha_{i}\}_{i\in I}\) is near \(\{\beta_{j}\}_{j\in J}\), we obtain that \(\pi_{k}\circ\{\alpha_{i}\}_{i\in I}\) is near \(\pi_{k}\circ\{\beta_{i}\}_{j\in J}\) for each \(k\in\{1,2\}\). Therefore, we have that \((\pi_{1}\circ\{\alpha_{i}\}_{i\in I},\pi_{2}\circ\{\alpha_{i}\}_{i\in I})\) is near \((\pi_{1}\circ\{\beta_{j}\}_{j\in J},\pi_{2}\circ\{\beta_{j}\}_{j\in J})\). Thus, \(f(\{\alpha_{i}\}_{i\in I})\) is near \(f(\{\beta_{j}\}_{j\in J})\), i.e., \(f\) is a pc-map. For the pc-map \[g:(Y^{X}\times Z^{X},\delta_{5})\to((Y\times Z)^{X},\delta_{4})\] with \(g(\beta,\gamma)=(\beta\times\gamma)\circ\Delta_{X}\), where \(\Delta_{X}:(X,\delta_{1})\to(X^{2},\delta_{1}^{{}^{\prime}})\) is a diagonal map of proximity spaces on \(X\), we have that \(g\circ f\) and \(f\circ g\) are identity maps on respective proximity spaces \((Y\times Z)^{X}\) and \(Y^{X}\times Z^{X}\). Consequently, \(((Y\times Z)^{X},\delta_{4})\) and \((Y^{X}\times Z^{X},\delta_{5})\) are proximally isomorphic spaces. **Definition 3.7**.: Let \(\delta_{1}\) and \(\delta_{2}\) be any proximities on \(X\) and \(Y\), respectively. Then the proximal evaluation map \[e_{X,Y}:(Y^{X}\times X,\delta)\to(Y,\delta_{2})\] is defined by \(e(\alpha,x)=\alpha(x)\). To show that the evaluation map \(e_{X,Y}\) is a pc-map, we first assume that \((\{\alpha_{i}\}_{i\in I}\times E)\delta(\{\beta_{j}\}_{j\in J}\times F)\) in \(Y^{X}\times X\). This means that \(\{\alpha_{i}\}_{i\in I}\delta^{{}^{\prime}}\{\beta_{j}\}_{j\in J}\) for a proximity relation \(\delta^{{}^{\prime}}\) on \(Y^{X}\) and \(E\delta_{1}F\) in \(X\). It follows that \(\alpha_{i}(E)\delta_{2}\beta_{j}(F)\) in \(Y\) for any \(i\in I\) and \(j\in J\). Finally, we conclude that \[e_{X,Y}(\{\alpha_{i}\}_{i\in I}\times E)\delta_{2}e_{X,Y}(\{\beta_{j}\}_{j\in J }\times F).\] **Example 3.8**.: Consider the proximal evaluation map \(e_{I,X}:(X^{I}\times I,\delta)\to(X,\delta_{1})\). Since \(X^{I}\times\{0\}\) is proximally isomorphic to \(X^{I}\) by the map \((\alpha,0)\mapsto\alpha(0)\), the restriction \[e_{I,X}^{0}=e_{I,X}|_{(X^{I}\times\{0\})}:(X^{I},\delta^{{}^{\prime}})\to(X, \delta_{1}),\] defined by \(e_{I,X}^{0}(\alpha)=\alpha(0)\), is a pc-map. **Example 3.9**.: Let \(e_{I,X\times X}:((X\times X)^{I}\times I,\delta)\rightarrow(X,\delta_{1})\) be the proximal evaluation map. 
By Theorem 3.6, the restriction \[e_{I,X\times X}^{0}=e_{I,X\times X}|_{(X^{I}\times\{0\})}:(X^{I},\delta^{{}^{ \prime}})\rightarrow(X\times X,\delta^{{}^{\prime}}),\] defined by \(e_{I,X\times X}^{0}(\alpha)=(\alpha(0),\alpha(1))\), is a pc-map. Note that, in topological spaces, the map \(X^{I}\to X\times X\), \(\alpha\mapsto(\alpha(0),\alpha(1))\), is the path fibration. Similarly, the map \(X^{I}\to X\), \(\alpha\mapsto\alpha(0)\), is the path fibration with a fixed initial point at \(t=0\). ### Proximal Covering Spaces A covering space of a topological space and the fundamental group are tightly related. One can categorize all the covering spaces of a topological space using the subgroups of its fundamental group. Covering spaces are not only useful in algebraic topology, but also in complex dynamics, geometric group theory, and the theory of Lie groups. **Definition 3.10**.: A surjective and pc-map \(p:(X,\delta)\rightarrow(X^{{}^{\prime}},\delta^{{}^{\prime}})\) is a proximal covering map if the following hold: * Let \(\{x^{{}^{\prime}}\}\subseteq X^{{}^{\prime}}\) be any subset with \(\{x^{{}^{\prime}}\}\ll_{\delta^{{}^{\prime}}}Y^{{}^{\prime}}\). Then there is an index set \(I\) satisfying that \[p^{-1}(Y^{{}^{\prime}})=\bigcup_{i\in I}Y_{i}\] with \(V_{i}\ll_{\delta}Y_{i}\), where \(V_{i}\in p^{-1}(\{x^{{}^{\prime}}\})\) for each \(i\in I\). * \(Y_{i}\neq Y_{j}\) when \(i\neq j\) for \(i\), \(j\in I\). * \(p|_{Y_{i}}:Y_{i}\to Y^{{}^{\prime}}\) is a proximal isomorphism for every \(i\in I\). In Definition 3.10, \((X,\delta)\) is called a proximal covering space of \((X^{{}^{\prime}},\delta^{{}^{\prime}})\). For \(i\in I\), \(Y_{i}\) is said to be a proximal sheet. For any \(x^{{}^{\prime}}\in X^{{}^{\prime}}\), \(p^{-1}(\{x^{{}^{\prime}}\})\) is called a proximal fiber of \(x^{{}^{\prime}}\). The map \(p|_{Y_{i}}:Y_{i}\to Y^{{}^{\prime}}\) is a proximal isomorphism if the map \(p:(X,\delta)\rightarrow(X^{{}^{\prime}},\delta^{{}^{\prime}})\) is a proximal isomorphism. However, the converse is not generally true. Given any proximity \(\delta\) on \(X\), it is obvious that the identity map on \(X\) is always a proximal covering map. **Example 3.11**.: Assume that \(X=\{a_{1},a_{2},a_{3},a_{4}\}\cup\{b_{1},b_{2},b_{3},b_{4}\}\cup\{c_{1},c_{2},c_{3},c_{4}\}\) and \(X^{{}^{\prime}}=\{d_{1},d_{2},d_{3},d_{4}\}\) are two proximity spaces such that \(p:(X,\delta)\rightarrow(X^{{}^{\prime}},\delta^{{}^{\prime}})\) is a surjective and pc-map defined by \(p(a_{i})=p(b_{i})=p(c_{i})=d_{i}\) for each \(i=1,2,3,4\) (see Figure 3.2). Let \(\{d_{1}\}\subset X^{{}^{\prime}}\) and \(Y^{{}^{\prime}}=\{d_{1},d_{2},d_{4}\}\) a proximal \(\delta^{{}^{\prime}}-\)neighborhood of \(\{d_{1}\}\). For \(V_{1}=\{a_{1}\}\), \(V_{2}=\{b_{1}\}\), and \(V_{3}=\{c_{1}\}\), we have \(p^{-1}(Y^{{}^{\prime}})=\bigcup_{i=1}^{3}Y_{i}\), where \(Y_{1}=\{a_{1},a_{2},a_{4}\}\), \(V_{2}=\{b_{1},b_{2},b_{4}\}\), and \(V_{3}=\{c_{1},c_{2},c_{4}\}\). Note that \(V_{1}\underline{\delta}(X-Y_{1})\), \(V_{2}\underline{\delta}(X-Y_{2})\), and \(V_{3}\underline{\delta}(X-Y_{3})\), i.e., \(Y_{i}\) is a proximal \(\delta-\)neighborhood of \(V_{i}\) for each \(i\in\{1,2,3\}\). Moreover, for \(i\), \(j\in\{1,2,3\}\) with \(i\neq j\), we have that \(Y_{i}\) is not \(Y_{j}\), and \(p|_{Y_{1}}:\{a_{1},a_{2},a_{4}\}\rightarrow\{d_{1},d_{2},d_{4}\}\), \(p|_{Y_{2}}:\{b_{1},b_{2},b_{4}\}\rightarrow\{d_{1},d_{2},d_{4}\}\), and \(p|_{Y_{3}}:\{c_{1},c_{2},c_{4}\}\rightarrow\{d_{1},d_{2},d_{4}\}\) are proximal isomorphisms. 
For other points \(d_{2}\), \(d_{3}\), and \(d_{4}\), a similar process is done. This shows that \(p\) is a proximal covering map. **Example 3.12**.: Given a proximity \(\delta\) on \(X\), consider the surjective and pc-map \(p:(X\times\{0,1,2\cdots\},\delta^{{}^{\prime}})\to(X,\delta)\) with \(p(x,t)=x\). For a proximal \(\delta-\)neighborhood \(Y\) of any subset \(\{x\}\subset X\), we have that \[p^{-1}(Y)=Y\times\mathbb{Z}^{+}\subset X\times\{0,1,2\cdots\}\] for a proximal \(\delta^{{}^{\prime}}-\)neighborhood \(Y^{{}^{\prime}}\) of \(Y\times\mathbb{Z}^{+}\). Moreover, \(p|_{Y\times\mathbb{Z}^{+}}:Y\times\mathbb{Z}^{+}\to Y\), \(p|_{Y\times\mathbb{Z}^{+}}(x,t)=x\), is a proximal isomorphism. Thus, \(p\) is a proximal covering map. **Proposition 3.13**.: _Any proximal isomorphism is a proximal covering map._ Proof.: Let \(p:(X,\delta)\to(X^{{}^{\prime}},\delta^{{}^{\prime}})\) be a proximal isomorphism. Then \(p\) is a pc-map. Lemma 4.1 of [6] says that \(p\) is continuous with respect to compatible topologies. Therefore, we get \(p^{-1}(Y^{{}^{\prime}})=Y\subset X\) for an open neighborhood \(Y^{{}^{\prime}}\) of any subset \(\{x^{{}^{\prime}}\}\subseteq X^{{}^{\prime}}\). Combining with the fact that a proximal neighborhood of a set is also a neighborhood, we conclude that \(p^{-1}(Y^{{}^{\prime}})=Y\) for a proximal \(\delta^{{}^{\prime}}-\)neighborhood \(Y^{{}^{\prime}}\) of \(\{x^{{}^{\prime}}\}\) in \(X^{{}^{\prime}}\) and a proximal \(\delta-\)neighborhood \(Y\) of \(V\), where \(V\in p^{-1}(\{x^{{}^{\prime}}\})\) in \(X\). Furthermore, \(p|_{Y}:Y\to Y^{{}^{\prime}}\) is an isomorphism of proximity spaces because \(p\) is an isomorphism of proximity spaces. Finally, \(p\) is a proximal covering map. The following diagram illustrates two ways to prove that any proximal isomorphism \(p:(X,\delta)\to(X^{{}^{\prime}},\delta^{{}^{\prime}})\) is a covering map between respective compatible topologies on both \((X,\delta)\) and \((X^{{}^{\prime}},\delta^{{}^{\prime}})\): \[\begin{CD}\text{proximal isomorphism}@>{}>{}>\text{proximal covering map}\\ @V{}V{}V\\ \text{homeomorphism}@>{}>{}>\text{covering map}.\end{CD}\] **Theorem 3.14**.: _The cartesian product of two proximal covering maps is a proximal covering map._ Proof.: Let \(p:(X,\delta_{1})\to(X^{{}^{\prime}},\delta_{1}^{{}^{\prime}})\) and \(q:(Y,\delta_{2})\to(Y^{{}^{\prime}},\delta_{2}^{{}^{\prime}})\) be two proximal covering maps. Then for a proximal \(\delta_{1}^{{}^{\prime}}-\)neighborhood \(M_{1}^{{}^{\prime}}\) of \(\{x_{1}^{{}^{\prime}}\}\subset X^{{}^{\prime}}\), we have that \[p^{-1}(M_{1}^{{}^{\prime}})=\bigcup_{i\in I}M_{i}\] for a proximal \(\delta_{1}-\)neighborhood \(M_{1}\) of \(V_{i}\), where \(V_{i}\in p^{-1}(\{x_{1}^{{}^{\prime}}\})\). We also have that \(M_{i}\neq M_{k}\) with any \(k\in I\) when \(i\neq k\). Similarly, for a proximal \(\delta_{2}^{{}^{\prime}}-\)neighborhood \(N_{2}^{{}^{\prime}}\) of \(\{x_{2}^{{}^{\prime}}\}\subset Y^{{}^{\prime}}\), we have that \[q^{-1}(N_{2}^{{}^{\prime}})=\bigcup_{j\in J}N_{j}\] for a proximal \(\delta_{2}-\)neighborhood \(N_{j}\) of \(W_{j}\), where \(W_{j}\in q^{-1}(\{x_{2}^{{}^{\prime}}\})\). Also, we have that \(N_{i}\neq N_{l}\) with any \(l\in J\) when \(j\neq l\). 
For a proximal neighborhood \(M_{1}^{{}^{\prime}}\times N_{2}^{{}^{\prime}}\) of \(\{x_{1}^{{}^{\prime}}\}\times\{x_{2}^{{}^{\prime}}\}\subset X^{{}^{\prime}} \times Y^{{}^{\prime}}\), we get \[(p\times q)^{-1}(M_{1}^{{}^{\prime}}\times N_{2}^{{}^{\prime}})=p^{-1}(M_{1}^{ {}^{\prime}})\times q^{-1}(N_{2}^{{}^{\prime}})=\bigcup_{i\in I}M_{i}\times \bigcup_{j\in J}N_{j}=\bigcup_{\begin{subarray}{c}i\in I\\ j\in J\end{subarray}}(M_{i}\times N_{j}).\] It is clear that \(M_{i}\times N_{j}\neq M_{k}\times N_{l}\) when \((i,j)\neq(k,l)\) for any \(i\), \(k\in I\) and \(j\), \(l\in J\). Moreover, since \(p|_{M_{i}}:M_{i}\to M_{1}^{{}^{\prime}}\) and \(q|_{N_{j}}:N_{j}\to N_{2}^{{}^{\prime}}\) are proximal isomorphisms, the map \((p\times q)|_{M_{i}\times N_{j}}:M_{i}\times N_{j}\to M_{1}^{{}^{\prime}}\times N _{2}^{{}^{\prime}}\) is a proximal isomorphism. Consequently, \(p\times q:X\times Y\to X^{{}^{\prime}}\times Y^{{}^{\prime}}\) is a proximal covering map. ### Proximal Fibrations Some topological problems can be conceptualized as lifting or extension problems. In the homotopy-theoretic viewpoint, fibrations and cofibrations deal with them, respectively (see Section 3.4 for the detail of cofibrations). Postnikov systems, spectral sequences, and obstruction theory, which are important tools constructed on homotopy theory, involve fibrations. On the other hand, the notion of proximal fibration of proximity spaces is first mentioned in [17], and we extend this with useful properties in proximity cases. **Definition 3.15**.: A pc-map \(p:(X,\delta)\to(X^{{}^{\prime}},\delta^{{}^{\prime}})\) is said to have the proximal homotopy lifting property (PHLP) with respect to a proximity space \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) if for an inclusion map \(i_{0}:(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\to(X^{{}^{\prime \prime}}\times I,\delta_{1})\), \(i_{0}(x^{{}^{\prime\prime}})=(x^{{}^{\prime\prime}},0)\), for every pc-map \(k:(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\to(X,\delta)\), and prox-hom \(G:(X^{{}^{\prime\prime}}\times I,\delta_{1})\to(X^{{}^{\prime}},\delta^{{}^{ \prime}})\) with \(p\circ k=G\circ i_{0}\), then there exists a prox-hom \(G^{{}^{\prime}}:(X^{{}^{\prime\prime}}\times I,\delta_{1})\to(X,\delta)\) for which \(G^{{}^{\prime}}(x^{{}^{\prime\prime}},0)=k(x^{{}^{\prime\prime}})\) and \(p\circ G^{{}^{\prime}}(x^{{}^{\prime\prime}},t)=G(x^{{}^{\prime\prime}},t)\). **Definition 3.16**.: A pc-map \(p:(X,\delta)\to(X^{{}^{\prime}},\delta^{{}^{\prime}})\) is said to be a proximal fibration if it has the PHLP for any proximity space \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\). **Example 3.17**.: For any proximity spaces \((X,\delta)\) and \((X^{{}^{\prime}},\delta^{{}^{\prime}})\), we shall show that the projection map \(\pi_{1}:(X\times X^{{}^{\prime}},\delta_{2})\to(X,\delta)\) onto the first factor is a proximal fibration. Consider the diagram with \(\pi_{1}\circ(k_{X},k_{X^{{}^{\prime}}})=G\circ i_{0}\). Then there is a map \(G^{{}^{\prime}}:(X^{{}^{\prime\prime}}\times I,\delta_{1})\to(X\times X^{{}^{ \prime}},\delta_{2})\) defined by \(G^{{}^{\prime}}=(G,F)\), where \(F:(X^{{}^{\prime\prime}}\times I,\delta_{1})\to(X^{{}^{\prime}},\delta^{{}^{ \prime}})\) is the composition of the first projection map \((X^{{}^{\prime\prime}}\times I,\delta_{1})\to(X^{{}^{\prime\prime}},\delta^{{} ^{\prime\prime}})\) and \(k_{X^{{}^{\prime}}}\). Since \(k_{X^{{}^{\prime}}}\) and the first projection map are pc-maps, it follows that \(F\) is a pc-map. 
Moreover, we get \(F(x'',0)=F(x'',1)=k_{X'}(x'')\), which means that \(F\) is a (constant) prox-hom. Combining this result with the fact that \(G\) is a prox-hom, we have that \(G'\) is a prox-hom. Moreover, we get \[G'\circ i_{0}(x'')=G'(x'',0)=(G(x'',0),F(x'',0))=(k_{X}(x''),k_{X'}(x''))=(k_{X},k_{X'})(x''),\] and \[\pi_{1}\circ G'(x'',t)=\pi_{1}(G(x'',t),F(x'',t))=G(x'',t).\] This shows that \(\pi_{1}\) is a proximal fibration.

**Example 3.18**.: Let \(c:(X,\delta)\to(\{x_{0}\},\delta_{0})\) be the constant map of proximity spaces. Consider a pc-map \(k:(X'',\delta'')\to(X,\delta)\) and a prox-hom \(G:(X''\times I,\delta_{1})\to(\{x_{0}\},\delta_{0})\) with the condition \(c\circ k(x'')=G\circ i_{0}(x'')=\{x_{0}\}\). Then there exists a (constant) prox-hom \(G':(X''\times I,\delta_{1})\to(X,\delta)\) defined by \(G'(x'',t)=k(x'')\) satisfying that \[c\circ G'(x'',t)=c(G'(x'',t))=\{x_{0}\}=G(x'',t),\] \[G'\circ i_{0}(x'')=G'(x'',0)=k(x'').\] This proves that \(c\) is a proximal fibration.

**Proposition 3.19**.: _i) The composition of two proximal fibrations is also a proximal fibration._ _ii) The cartesian product of two proximal fibrations is also a proximal fibration._

Proof.: **i)** Let \(p_{1}:(X_{1},\delta_{1})\to(Y_{1},\delta_{1}')\) and \(p_{2}:(Y_{1},\delta_{1}')\to(Y_{2},\delta_{2}')\) be any proximal fibrations. Then for the inclusion map \(i_{0}:(X'',\delta'')\to(X''\times I,\delta_{3})\), pc-maps \(k_{1}:(X'',\delta'')\to(X_{1},\delta_{1})\), \(k_{2}:(X'',\delta'')\to(Y_{1},\delta_{1}')\), and proximal homotopies \(G_{1}:(X''\times I,\delta_{3})\to(Y_{1},\delta_{1}')\), \(G_{2}:(X''\times I,\delta_{3})\to(Y_{2},\delta_{2}')\) with the property \(p_{1}\circ k_{1}=G_{1}\circ i_{0}\) and \(p_{2}\circ k_{2}=G_{2}\circ i_{0}\), there exist two proximal homotopies \(G_{1}':(X''\times I,\delta_{3})\to(X_{1},\delta_{1})\) and \(G_{2}':(X''\times I,\delta_{3})\to(Y_{1},\delta_{1}')\) satisfying that \[G_{1}'\circ i_{0}=k_{1},\quad p_{1}\circ G_{1}'=G_{1},\] \[G_{2}'\circ i_{0}=k_{2},\quad p_{2}\circ G_{2}'=G_{2}.\] If we take \(G_{2}'=G_{1}\), then the corresponding diagram commutes. Thus, we get \[G_{1}'\circ i_{0}=k_{1},\] \[(p_{2}\circ p_{1})\circ G_{1}'=G_{2}.\] This shows that the composition \(p_{2}\circ p_{1}\) is a proximal fibration.

**ii)** Let \(p_{1}:(X_{1},\delta_{1})\to(Y_{1},\delta_{1}')\) and \(p_{2}:(X_{2},\delta_{2})\to(Y_{2},\delta_{2}')\) be any proximal fibrations.
Then for the inclusion map \(i_{0}:(X'',\delta'')\to(X''\times I,\delta_{3})\), pc-maps \(k_{1}:(X'',\delta'')\to(X_{1},\delta_{1})\), \(k_{2}:(X'',\delta'')\to(X_{2},\delta_{2})\), and proximal homotopies \(G_{1}:(X''\times I,\delta_{3})\to(Y_{1},\delta_{1}')\), \(G_{2}:(X''\times I,\delta_{3})\to(Y_{2},\delta_{2}')\) with the property \(p_{1}\circ k_{1}=G_{1}\circ i_{0}\) and \(p_{2}\circ k_{2}=G_{2}\circ i_{0}\), there exist two proximal homotopies \(G_{1}':(X''\times I,\delta_{3})\to(X_{1},\delta_{1})\) and \(G_{2}':(X''\times I,\delta_{3})\to(X_{2},\delta_{2})\) satisfying that \[G_{1}'\circ i_{0}=k_{1},\quad p_{1}\circ G_{1}'=G_{1},\] \[G_{2}'\circ i_{0}=k_{2},\quad p_{2}\circ G_{2}'=G_{2}.\] Consider the map \(G_{3}'=(G_{1}',G_{2}')\). Then \(G_{3}'\) is clearly a prox-hom and the corresponding diagram commutes. Thus, we get \[G_{3}'\circ i_{0}=(k_{1},k_{2}),\] \[(p_{1}\times p_{2})\circ G_{3}'=(G_{1},G_{2}).\] This proves that the cartesian product \(p_{1}\times p_{2}\) is a proximal fibration.

Let \(f:(X,\delta_{1})\to(Y,\delta_{2})\) be a pc-map. Then for any pc-map \(g:(Z,\delta_{3})\to(Y,\delta_{2})\), a proximal lifting of \(f\) is a pc-map \(h:(X,\delta_{1})\to(Z,\delta_{3})\) satisfying that \(f=g\circ h\).

**Proposition 3.20**.: _Let \(p:(X,\delta_{1})\to(Y,\delta_{2})\) be a proximal fibration. Then_ _i) The pullback \(g^{*}p:(P,\delta)\to(Y',\delta_{2}')\) is a proximal fibration for any pc-map \(g:(Y',\delta_{2}')\to(Y,\delta_{2})\)._ _ii) For any proximity space \((Z,\delta_{3})\), the map \(p_{*}:(X^{Z},\delta_{3}')\to(Y^{Z},\delta_{3}'')\) is a proximal fibration._

Proof.: **i)** Let \[P=\{(x,y')\ |\ g(y')=p(x)\}\subseteq X\times Y'\] be a proximity space with the proximity \(\delta_{0}\) on itself. Since \(p\) is a proximal fibration, for an inclusion map \(i_{0}:(X'',\delta'')\to(X''\times I,\delta_{3})\), for any pc-map \(k_{1}\) from \((X'',\delta'')\) to \((X,\delta_{1})\), and prox-hom \(G_{1}:(X''\times I,\delta_{3})\to(Y,\delta_{2})\) with \(p\circ k_{1}=G_{1}\circ i_{0}\), there exists a prox-hom \[G_{1}':(X''\times I,\delta_{3})\to(X,\delta_{1})\] for which \(G_{1}'(x'',0)=k_{1}(x'')\) and \(p\circ G_{1}'(x'',t)=G_{1}(x'',t)\). Assume that a map \(k_{2}\) from \((X'',\delta'')\) to \((P,\delta_{0})\) is a pc-map and \(G_{2}:(X''\times I,\delta_{3})\to(Y',\delta_{2}')\) is a prox-hom with \(g^{*}p\circ k_{2}=G_{2}\circ i_{0}\). If we define \(G_{2}':(X''\times I,\delta_{3})\to(P,\delta_{0})\) by \(G_{2}'=(G_{1}',G_{2})\), then we observe that \[G_{2}'\circ i_{0}=k_{2},\] \[g^{*}p\circ G_{2}'=G_{2}.\] This gives the desired result.
**ii)** Consider the lifting problems for \(p\) and for \(p_{*}\). Since \(p\) is a proximal fibration, we have a prox-hom \(G_{1}':(Z\times X''\times I,\delta'')\to(X,\delta_{1})\) solving the first problem. \(Z\times X''\times I\) is proximally isomorphic to \(X''\times I\times Z\), and we can think of \(G_{1}'\) as a prox-hom \((X''\times I\times Z,\delta'')\to(X,\delta_{1})\). By Proposition 3.4, we obtain the prox-hom \(G_{2}':(X''\times I,\delta')\to(X^{Z},\delta_{3}')\) solving the second problem. This map satisfies the desired conditions, and thus, we conclude that \(p_{*}\) is a proximal fibration.

### Proximal Cofibrations

Similar to the proximal fibration, we now deal with the notion of proximal cofibration of proximity spaces. We first study the extension problem in homotopy theory, and then present the definition of proximal cofibration with its basic results.

**Definition 3.21**.: Given two proximity spaces \((X,\delta)\) and \((X',\delta')\), a pc-map \(h:X\to X'\) is said to have a proximal homotopy extension property (PHEP) with respect to a proximity space \((X'',\delta'')\) if, for the inclusion maps \(i_{0}^{X}:(X,\delta)\to(X\times I,\delta_{1})\) and \(i_{0}^{X'}:(X',\delta')\to(X'\times I,\delta_{1}')\), for every pc-map \(k:(X',\delta')\to(X'',\delta'')\), and every prox-hom \(F:(X\times I,\delta_{1})\to(X'',\delta'')\) with \(k\circ(h\times 1_{0})=F\circ i_{0}^{X}\), there exists a prox-hom \(F':(X'\times I,\delta_{1}')\to(X'',\delta'')\) satisfying \(F'\circ i_{0}^{X'}=k\) and \(F'\circ(h\times 1_{I})=F\).

**Example 3.22**.: Let \(X''=\{a,b,c,d\}\) be a set with the proximity \(\delta''\) on itself as in Figure 3.3. Let \(\gamma_{1}\) and \(\gamma_{2}\) be proximal paths on \(X''\) such that \(\gamma_{1}(0)=b\), \(\gamma_{1}(1)=a\), \(\gamma_{2}(0)=b\), and \(\gamma_{2}(1)=c\). Consider the extension problem for the inclusion map \(h:(\{0\},\delta)\to(I,\delta')\), where \(k:(I,\delta')\to(X'',\delta'')\) is the map \(\gamma_{2}\) and \(F:(\{0\}\times I,\delta_{1})\to(X'',\delta'')\) is defined by \(F(0,t)=\gamma_{1}(t)\) for all \(t\in I\); i.e., the equality \(k\circ(h\times 1_{0})=F\circ i_{0}^{\{0\}}\) holds. Then, by the Gluing Lemma, there exists a prox-hom \(F':(I\times I,\delta_{1}')\to(X'',\delta'')\) defined by \(F'(0,t_{1})=F(0,t_{1})\) and \(F'(t_{2},0)=k(t_{2})\) for all \((t_{1},t_{2})\in I\times I\) which satisfies \[F'\circ(h\times 1_{I})=F,\] \[F'\circ i_{0}^{I}=k.\]

**Definition 3.23**.: A pc-map \(h:(X,\delta)\to(X',\delta')\) is said to be a proximal cofibration if it has the PHEP with respect to any proximity space \((X'',\delta'')\).
**Example 3.24**.: Let \(h:(X,\delta)\hookrightarrow(X^{{}^{\prime}},\delta^{{}^{\prime}})\) be an inclusion map such that \(X\subset X^{{}^{\prime}}\). Then \(h\) is a natural proximal cofibration since there exists a prox-hom \[F^{{}^{\prime}}=F|_{X^{{}^{\prime}}}:(X^{{}^{\prime}}\times I,\delta_{1}^{{} ^{\prime}})\to(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\] satisfying the conditions of PHEP with respect to any proximity space \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\). **Proposition 3.25**.: _i) Let \(h:(X,\delta)\to(X^{{}^{\prime}},\delta^{{}^{\prime}})\) and \(h^{{}^{\prime}}:(Y,\delta_{1})\to(Y^{{}^{\prime}},\delta_{1}^{{}^{\prime}})\) be two maps such that \(X\) and \(X^{{}^{\prime}}\) are proximally isomorphic to \(Y\) and \(Y^{{}^{\prime}}\), respectively, and the following diagram commutes:_ _Then \(h\) is a proximal cofibration if and only if \(h^{{}^{\prime}}\) is a proximal cofibration._ _ii) The composition of two proximal cofibrations is also a proximal cofibration._ _iii) The coproduct of two proximal cofibrations is also a proximal cofibration._ _iv) Let \(h:(X,\delta)\to(X^{{}^{\prime}},\delta^{{}^{\prime}})\) be a proximal cofibration and the following is a pushout diagram._ _Then \(h^{{}^{\prime}}\) is a proximal cofibration._ Proof.: **i)** Let \(h\) be a proximal cofibration. By Definition 3.21, there is a prox-hom \(F^{{}^{\prime}}:(X^{{}^{\prime}}\times I,\delta^{{}^{\prime}}_{2})\to(X^{{}^{ \prime\prime}},\delta^{{}^{\prime\prime}})\) such that \[F^{{}^{\prime}}\circ i_{0}^{X^{{}^{\prime}}}=k\ \ \text{and}\ \ F^{{}^{\prime}} \circ(h\times 1_{I})=F\] for any pc-map \(k:(X^{{}^{\prime}},\delta^{{}^{\prime}})\to(X^{{}^{\prime\prime}},\delta^{{} ^{\prime\prime}})\) and prox-hom \(F\) from \((X\times I,\delta_{2})\) to \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) with \(k\circ(h\times 1_{0})=F\circ i_{0}^{X}\). Assume that \(\beta_{1}:X\to Y\) and \(\beta_{2}:X^{{}^{\prime}}\to Y^{{}^{\prime}}\) are two proximal isomorphisms. Since the diagram commutes, we know that \[h^{{}^{\prime}}\circ\beta_{1}=\beta_{2}\circ h.\] Let \(i_{0}^{Y}:(Y,\delta_{1})\to(Y\times I,\delta_{3})\) and \(i_{0}^{Y^{{}^{\prime}}}:(Y^{{}^{\prime}},\delta^{{}^{\prime}}_{1})\to(Y^{{}^{ \prime}}\times I,\delta^{{}^{\prime}}_{3})\) be two inclusion maps, \(k^{{}^{\prime}}:=k\circ(\beta_{2})^{-1}:(Y^{{}^{\prime}},\delta^{{}^{\prime}} _{1})\to(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) a pc-map, and \(F^{{}^{\prime\prime}}:=F^{{}^{\prime}}\circ(\beta_{1}^{-1}\times 1_{I})\) from \((Y\times I,\delta_{3})\) to \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) a prox-hom for which \[k^{{}^{\prime}}\circ(h^{{}^{\prime}}\times 1_{0})=F^{{}^{\prime}}\circ i_{0} ^{Y}.\] Then there exists a prox-hom \[F^{{}^{\prime\prime\prime}}:=F^{{}^{\prime}}\circ((\beta_{2})^{-1}\times 1_ {I}):(Y^{{}^{\prime}}\times I,\delta^{{}^{\prime}}_{3})\to(X^{{}^{\prime\prime }},\delta^{{}^{\prime\prime}})\] such that \(F^{{}^{\prime\prime}}\circ i_{0}^{Y^{{}^{\prime}}}=k^{{}^{\prime}}\) and \(F^{{}^{\prime\prime}}\circ(h^{{}^{\prime}}\times 1_{I})=F^{{}^{\prime}}\). Conversely, assume that \(h^{{}^{\prime}}\) is a proximal cofibration. 
Similarly, for a prox-hom \(F^{{}^{\prime}}\) from \((Y^{{}^{\prime}}\times I,\delta^{{}^{\prime}}_{3})\) to \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) that makes \(h^{{}^{\prime}}\) a cofibration, there exists a prox-hom \[F^{{}^{\prime\prime}}:=F^{{}^{\prime}}\circ(f^{{}^{\prime}}\times 1_{I}):(X^{{}^{ \prime}}\times I,\delta^{{}^{\prime}}_{2})\to(X^{{}^{\prime\prime}},\delta^{{ }^{\prime\prime}})\] that makes \(h\) a proximal cofibration. **ii)** Let \(h:(X,\delta)\to(X^{{}^{\prime}},\delta^{{}^{\prime}})\) and \(h^{{}^{\prime}}:(X^{{}^{\prime}},\delta^{{}^{\prime}})\to(Y,\delta_{1})\) be two proximal cofibrations. Then for any pc-map \(k:(X^{{}^{\prime}},\delta^{{}^{\prime}})\to(X^{{}^{\prime\prime}},\delta^{{} ^{\prime\prime}})\) and prox-hom \(F\) from \((X\times I,\delta_{2})\) to \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) with \[k\circ(h\times 1_{0})=F\circ i_{0}^{X},\] there is a prox-hom \(F^{{}^{\prime}}:(X^{{}^{\prime}}\times I,\delta^{{}^{\prime}}_{2})\to(X^{{}^{ \prime\prime}},\delta^{{}^{\prime\prime}})\) such that \(F^{{}^{\prime}}\circ i_{0}^{X^{{}^{\prime}}}=k\) and \(F^{{}^{\prime}}\circ(h\times 1_{I})=F\), similarly, for any pc-map \(k^{{}^{\prime}}:(Y,\delta_{1})\to(X^{{}^{\prime\prime}},\delta^{{}^{\prime \prime}})\) and prox-hom \(G^{{}^{\prime}}:(X^{{}^{\prime}}\times I,\delta^{{}^{\prime}}_{2})\to(X^{{}^{ \prime\prime}},\delta^{{}^{\prime\prime}})\) with \[k^{{}^{\prime}}\circ(h^{{}^{\prime}}\times 1_{0})=G^{{}^{\prime}}\circ i_{0}^{X^{{}^{ \prime}}},\] there is a prox-hom \(F^{{}^{\prime\prime}}:(Y\times I,\delta^{{}^{\prime}}_{3})\to(X^{{}^{\prime \prime}},\delta^{{}^{\prime\prime}})\) such that \(F^{{}^{\prime\prime}}\circ i^{Y}_{0}=k^{{}^{\prime}}\) and \(F^{{}^{\prime\prime}}\circ(h^{{}^{\prime}}\times 1_{I})=G^{{}^{\prime}}\). Combining these results with the fact \(F^{{}^{\prime}}=G^{{}^{\prime}}\), we have the following: For a pc-map \(k^{{}^{\prime\prime}}:=k^{{}^{\prime}}\) and prox-hom \(G^{{}^{\prime\prime}}:=F\) with \[k^{{}^{\prime\prime}}\circ((h^{{}^{\prime}}\circ h)\times 1_{0})=G^{{}^{ \prime\prime}}\circ i^{X}_{0},\] there is a prox-hom \(F^{{}^{\prime\prime\prime}}:=F^{{}^{\prime\prime}}\) such that \[F^{{}^{\prime\prime\prime}}\circ i^{Y}_{0}=k^{{}^{\prime\prime}}\ \ \text{and}\ \ F^{{}^{\prime\prime\prime}}\circ((h^{{}^{\prime}}\circ h)\times 1 _{I})=G^{{}^{\prime\prime}}.\] This proves that \(h^{{}^{\prime}}\circ h\) is a proximal cofibration. **iii)** Let \(h_{j}:(X_{j},\delta_{j})\to(X^{{}^{\prime}}_{j},\delta^{{}^{\prime}}_{j})\) be a family of cofibrations for all \(j\in J\). Then we shall show that \(\sqcup_{j}h_{j}:(\sqcup_{j}X_{j},\delta)\to(\sqcup_{j}X^{{}^{\prime}}_{j}, \delta^{{}^{\prime}})\) is a cofibration. Since for all \(j\in J\), \(h_{j}:(X_{j},\delta_{j})\to(X^{{}^{\prime}}_{j},\delta^{{}^{\prime}}_{j})\) is cofibration, we have that for any pc-map \(k_{j}\) from \((X^{{}^{\prime}}_{j},\delta^{{}^{\prime}})\) to \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) and prox-hom \(F_{j}:(X_{j}\times I,\delta_{2})\to(X^{{}^{\prime\prime\prime}},\delta^{{}^{ \prime\prime\prime}})\) with \[k_{j}\circ(h_{j}\times 1_{0})=F_{j}\circ i^{X_{j}}_{0},\] there is a prox-hom \(F^{{}^{\prime}}_{j}:(X^{{}^{\prime}}_{j}\times I,\delta^{{}^{\prime}}_{2})\to (X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}})\) such that \(F^{{}^{\prime}}_{j}\circ i^{X^{{}^{\prime}}_{0}}=k_{j}\) and \(F^{{}^{\prime}}_{j}\circ(h_{j}\times 1_{I})=F_{j}\). 
Now consider a pc-map \(\sqcup_{j}k_{j}:(\sqcup_{j}X_{j}',\delta')\to(X'',\delta'')\) and a prox-hom \(\sqcup_{j}F_{j}:(\sqcup_{j}X_{j}\times I,\delta_{4})\to(X'',\delta'')\) with \[\sqcup_{j}k_{j}\circ(\sqcup_{j}h_{j}\times 1_{0})=\sqcup_{j}F_{j}\circ i_{0}^{\sqcup_{j}X_{j}}.\] Then there exists a map \(\sqcup_{j}F_{j}':(\sqcup_{j}X_{j}'\times I,\delta_{5})\to(X'',\delta'')\) such that \(F_{j}'=\sqcup_{j}F_{j}'\circ i_{j}\) for a map \(i_{j}:(X_{j}'\times I,\delta_{2}')\to(\sqcup_{j}X_{j}'\times I,\delta_{5})\). If we define \(i_{j}'\) as \(i_{j}\circ i_{0}^{X_{j}'}\), then we find that \(i_{0}^{\sqcup_{j}X_{j}'}=\sqcup_{j}i_{j}'\). It follows that \[\sqcup_{j}F_{j}'\circ i_{0}^{\sqcup_{j}X_{j}'}=\sqcup_{j}k_{j}\] and \[\sqcup_{j}F_{j}'\circ(\sqcup_{j}h_{j}\times 1_{I})=\sqcup_{j}F_{j}.\] Finally, we have that \(\sqcup_{j}h_{j}\) is a cofibration.

**iv)** Let \(h:(X,\delta)\to(X',\delta')\) be a proximal cofibration, i.e., there is a prox-hom \(F':(X'\times I,\delta_{2}')\to(X'',\delta'')\) such that \[F'\circ i_{0}^{X'}=k\ \ \text{and}\ \ F'\circ(h\times 1_{I})=F\] for any pc-map \(k:(X',\delta')\to(X'',\delta'')\) and prox-hom \(F\) from \((X\times I,\delta_{2})\) to \((X'',\delta'')\) with \(k\circ(h\times 1_{0})=F\circ i_{0}^{X}\). Since we have a pushout diagram, it follows that \(l'\circ h=h'\circ l\) holds. Now assume that \(k':(Y',\delta_{1}')\to(X'',\delta'')\) is a pc-map and \(F''\) from \((Y\times I,\delta_{3})\) to \((X'',\delta'')\) is a prox-hom with \(k'\circ l'=k\), \(F''\circ(l\times 1_{I})=F\), and \[k'\circ(h'\times 1_{0})=F''\circ i_{0}^{Y}.\] Then there exists a prox-hom \[F''':(Y'\times I,\delta_{3}')\to(X'',\delta'')\] such that \(F'''\circ(l'\times 1_{I})=F'\).
Moreover, we have that \[F'''\circ(l'\times 1_{I})=F'\ \Rightarrow\ F'''\circ(l'\times 1_{I})\circ i_{0}^{X'}=F'\circ i_{0}^{X'}\ \Rightarrow\ F'''\circ i_{0}^{Y'}\circ l'=k\ \Rightarrow\ F'''\circ i_{0}^{Y'}\circ l'=k'\circ l'\ \Rightarrow\ F'''\circ i_{0}^{Y'}=k',\] and \[F'\circ(h\times 1_{I})=F\ \Rightarrow\ F'''\circ(l'\times 1_{I})\circ(h\times 1_{I})=F''\circ(l\times 1_{I})\ \Rightarrow\ F'''\circ((l'\circ h)\times 1_{I})=F''\circ(l\times 1_{I})\ \Rightarrow\ F'''\circ((h'\circ l)\times 1_{I})=F''\circ(l\times 1_{I})\ \Rightarrow\ F'''\circ(h'\times 1_{I})\circ(l\times 1_{I})=F''\circ(l\times 1_{I})\ \Rightarrow\ F'''\circ(h'\times 1_{I})=F''.\] As a consequence, \(h'\) is a proximal cofibration.

**Theorem 3.26**.: \(h:(X,\delta)\to(X',\delta')\) _is a proximal cofibration if and only if \((X'\times 0)\cup(X\times I)\) is a proximal retract of \(X'\times I\)._

Proof.: Let \(X'''=(X'\times 0)\cup(X\times I)\). If \(h\) is a proximal cofibration, then for any pc-map \(k:(X',\delta')\to(X'',\delta'')\) and prox-hom \(F:(X\times I,\delta_{2})\to(X'',\delta'')\) with \(k\circ(h\times 1_{0})=F\circ i_{0}^{X}\), there is a prox-hom \(F':(X'\times I,\delta_{2}')\to(X'',\delta'')\) such that \(F'\circ i_{0}^{X'}=k\) and \(F'\circ(h\times 1_{I})=F\). In particular, taking \(X''=X'''\), \(k\) the inclusion \(X'\to X'''\), \(x'\mapsto(x',0)\), and \(F\) the inclusion \(X\times I\to X'''\), the resulting \(F':X'\times I\to X'''\) restricts to the identity on \(X'''\). Hence, \(F'\) is a proximal retraction of \(X'\times I\) onto \(X'''\). Conversely, let \(r:(X'\times I,\delta_{2}')\to(X''',\delta''')\) be a proximal retraction. Assume that \(k:(X',\delta')\to(Y,\delta)\) is a pc-map and \(F:(X\times I,\delta_{2})\to(Y,\delta)\) is a prox-hom with \[k\circ(h\times 1_{0})=F\circ i_{0}^{X}.\] Define a map \(F'':(X''',\delta''')\to(Y,\delta)\) by \(F''(x,t)=F(x,t)\) for \((x,t)\in X\times I\) and \(F''(x',0)=k(x')\) for \(x'\in X'\). By Lemma 2.3, \(F''\) is a pc-map. Therefore, the map \(F'=F''\circ r\) is a prox-hom satisfying that \(F'\circ i_{0}^{X'}=k\) and \(F'\circ(h\times 1_{I})=F\). This shows that \(h\) is a proximal cofibration.

## 4. Descriptive Proximity Definitions

This section is dedicated to describing the concepts given in Section 3 on descriptive proximity spaces.
Recall that a (spatial) proximity is also a descriptive proximity, and note that, in the examples of this section, descriptions of feature vectors consider the colors of boxes or some parts of balls (see Example 4.2, Example 4.7, and Example 4.12).

**Definition 4.1**.: Let \((X,\delta_{\Phi}^{1})\) and \((Y,\delta_{\Phi}^{2})\) be two descriptive proximity spaces. The descriptive proximal mapping space \(Y^{X}\) is defined as the set \[\{\alpha:X\to Y\ |\ \alpha\ \text{is a dpc-map}\}\] having the following descriptive proximity relation \(\delta_{\Phi}\) on itself: Let \(E\), \(F\subset X\) and let \(\{\alpha_{i}\}_{i\in I}\) and \(\{\beta_{j}\}_{j\in J}\) be any subsets of dpc-maps in \(Y^{X}\). We say that \(\{\alpha_{i}\}_{i\in I}\delta_{\Phi}\{\beta_{j}\}_{j\in J}\) if the fact \(E\delta_{\Phi}^{1}F\) implies that \(\alpha_{i}(E)\delta_{\Phi}^{2}\beta_{j}(F)\).

**Example 4.2**.: Consider the set \(X=\{a,b,c,d,e,f,g,h\}\) in Figure 3.1 with the descriptive proximity \(\delta_{\Phi}\), where \(\Phi\) is a set of probe functions that admits colors of given boxes. Define three descriptive proximal paths \(\gamma_{1}\), \(\gamma_{2}\), and \(\gamma_{3}\in X^{I}\) by \[\gamma_{1}:a\mapsto b\mapsto c\mapsto d,\] \[\gamma_{2}:c\mapsto b\mapsto a\mapsto h,\] \[\gamma_{3}:a\mapsto h\mapsto g\mapsto f.\] For all \(t\in I\), \(\gamma_{1}(t)\delta_{\Phi}\gamma_{2}(t)\). Indeed, \[\gamma_{1}(t)=\begin{cases}\text{red},&t\in[0,1/4]\text{ and }[3/4,1]\\ \text{green},&t\in[1/4,2/4]\\ \text{black},&t\in[2/4,3/4]\end{cases}=\gamma_{2}(t),\] that is, \(\gamma_{1}\) is descriptively near \(\gamma_{2}\). However, for \(t\in[1/4,2/4]\), we have that \(\gamma_{1}(t)=\text{green}\) and \(\gamma_{3}(t)=\text{black}\), that is, \(\gamma_{1}\) and \(\gamma_{3}\) are not descriptively near in \(X\).

**Definition 4.3**.: We say that a map \(H:(X,\delta_{\Phi}^{1})\to(Z^{Y},\delta_{\Phi}')\) is descriptive proximally continuous if the fact \(E\delta_{\Phi}^{1}F\) implies that \(H(E)\delta_{\Phi}'H(F)\) for any subsets \(E\), \(F\subset X\).

**Definition 4.4**.: For any descriptive proximity spaces \((X,\delta_{\Phi}^{1})\) and \((Y,\delta_{\Phi}^{2})\), the descriptive proximal evaluation map \[e_{X,Y}:(Y^{X}\times X,\delta_{\Phi})\to(Y,\delta_{\Phi}^{2})\] is defined by \(e(\alpha,x)=\alpha(x)\).

**Proposition 4.5**.: _The descriptive proximal evaluation map \(e_{X,Y}\) is a dpc-map._

Proof.: We shall show that for any \(E\), \(F\subset X\) and \(\{\alpha_{i}\}_{i\in I}\), \(\{\beta_{j}\}_{j\in J}\subset Y^{X}\), \((\{\alpha_{i}\}_{i\in I}\times E)\delta_{\Phi}(\{\beta_{j}\}_{j\in J}\times F)\) implies \(e_{X,Y}(\{\alpha_{i}\}_{i\in I}\times E)\delta_{\Phi}^{2}e_{X,Y}(\{\beta_{j}\}_{j\in J}\times F)\). \[(\{\alpha_{i}\}_{i\in I}\times E)\delta_{\Phi}(\{\beta_{j}\}_{j\in J}\times F) \Rightarrow \{\alpha_{i}\}_{i\in I}\delta_{\Phi}'\{\beta_{j}\}_{j\in J}\ \text{and}\ E\delta_{\Phi}^{1}F \Rightarrow \alpha_{i}(E)\delta_{\Phi}^{2}\beta_{j}(F),\ \forall i\in I,\ \forall j\in J \Rightarrow e_{X,Y}(\{\alpha_{i}\}_{i\in I}\times E)\delta_{\Phi}^{2}e_{X,Y}(\{\beta_{j}\}_{j\in J}\times F),\] where \(Y^{X}\) has a descriptive proximity \(\delta_{\Phi}'\).
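To make the pointwise nearness check of Example 4.2 concrete, the following minimal sketch (our own illustration, not part of the paper) encodes a hypothetical color assignment standing in for the probe functions \(\Phi\) of Figure 3.1 and tests whether two discretized paths are descriptively near; the structure of the check mirrors the example, but the specific colors are assumptions.

```python
# Hypothetical colour assignment playing the role of the probe set Phi
# (the true colours come from Figure 3.1, which is not encoded here).
COLOR = {"a": "red", "b": "green", "c": "red", "d": "black",
         "e": "green", "f": "white", "g": "green", "h": "black"}

def description(point):
    """Probe function Phi: a single feature, the colour of the box."""
    return COLOR[point]

def descriptively_near(path1, path2):
    """Pointwise check gamma1(t) delta_Phi gamma2(t) for every sample t.
    For singleton sets, descriptive nearness reduces to equality of the
    descriptions Phi(path1(t)) == Phi(path2(t))."""
    return all(description(p) == description(q) for p, q in zip(path1, path2))

# Discretised versions of the paths of Example 4.2.
gamma1 = ["a", "b", "c", "d"]
gamma2 = ["c", "b", "a", "h"]
gamma3 = ["a", "h", "g", "f"]

print(descriptively_near(gamma1, gamma2))  # True under this colour assignment
print(descriptively_near(gamma1, gamma3))  # False: descriptions differ at some t
```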
**Definition 4.6**.: A surjective and dpc-map \(p:(X,\delta_{\Phi})\to(X^{{}^{\prime}},\delta_{\Phi}^{{}^{\prime}})\) between any descriptive proximity spaces \((X,\delta_{\Phi})\) and \((X^{{}^{\prime}},\delta_{\Phi}^{{}^{\prime}})\) is a descriptive proximal covering map if the following hold: * Let \(\{x^{{}^{\prime}}\}\subseteq X^{{}^{\prime}}\) be any subset with \(\{x^{{}^{\prime}}\}\ll_{\delta_{\Phi}^{{}^{\prime}}}Y^{{}^{\prime}}\). Then there is an index set \(I\) satisfying that \[p^{-1}(Y^{{}^{\prime}})=\bigcup_{i\in I}Y_{i}\] with \(V_{i}\ll_{\delta_{\Phi}}Y_{i}\), where \(V_{i}\in p^{-1}(\{x^{{}^{\prime}}\})\) for each \(i\in I\). * \(Y_{i}\neq Y_{j}\) when \(i\neq j\) for \(i\), \(j\in I\). * \(p|_{Y_{i}}:Y_{i}\to Y^{{}^{\prime}}\) is a descriptive proximal isomorphism for every \(i\in I\). In Definition 4.6, \((X,\delta_{\Phi})\) is called a descriptive proximal covering space of \((X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\). For \(i\in I\), \(Y_{i}\) is said to be a descriptive proximal sheet. For any \(x^{{}^{\prime}}\in X^{{}^{\prime}}\), \(p^{-1}(\{x^{{}^{\prime}}\})\) is called a descriptive proximal fiber of \(x^{{}^{\prime}}\). The map \(p|_{Y_{i}}:Y_{i}\to Y^{{}^{\prime}}\) is a descriptive proximal isomorphism if the map \(p:(X,\delta_{\Phi})\to(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\) is a descriptive proximal isomorphism. However, the converse is not generally true. Given any descriptive proximity space \((X,\delta_{\Phi})\), it is obvious that the identity map on \(X\) is always a descriptive proximal covering map. **Example 4.7**.: Consider the surjective and dpc-map \(p:(X,\delta_{\Phi})\to(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\), defined by \(p(a_{i})=p(b_{i})=p(c_{i})=d_{i}\) for any \(i=1,2,3,4\), in Figure 3.2, where \(\Phi\) is a set of probe functions that admits colors of given shapes. Let \(\{d_{1}\}\subset X^{{}^{\prime}}\) and \(Y^{{}^{\prime}}=\{d_{1},d_{3},d_{4}\}\) a \(\delta^{{}^{\prime}}_{\Phi}-\)neighborhood of \(\{d_{1}\}\). For \(V_{1}=\{a_{1}\}\), \(V_{2}=\{b_{1}\}\), and \(V_{3}=\{c_{1}\}\), we have that \(p^{-1}(Y^{{}^{\prime}})=\bigcup_{i=1}^{3}Y_{i}\), where \(Y_{1}=\{a_{1},a_{3},a_{4}\}\), \(Y_{2}=\{b_{1},b_{3},b_{4}\}\), and \(Y_{3}=\{c_{1},c_{3},c_{4}\}\). This gives us that for all \(i\in\{1,2,3\}\), \(Y_{i}\) is a \(\delta_{\Phi}-\)neighborhood of \(V_{i}\). We also observe that \(Y_{i}\neq Y_{j}\) if \(i\neq j\) for \(i\), \(j\in\{1,2,3\}\). In addition, \(p|_{Y_{i}}:Y_{i}\to Y^{{}^{\prime}}\) is a descriptive proximal isomorphism for each \(i\). If one considers \(d_{3}\) and \(d_{4}\), the same process can be repeated. Let \(\{d_{2}\}\ll_{\delta^{{}^{\prime}}_{\Phi}}\{d_{2}\}=Y^{{}^{\prime}}\) in \(X^{{}^{\prime}}\). Then \(p^{-1}(Y^{{}^{\prime}})=Y_{1}\cup Y_{2}\cup Y_{3}\), where \(Y_{1}=\{a_{2}\}\), \(Y_{2}=\{b_{2}\}\), and \(Y_{3}=\{c_{2}\}\). We observe that \(V_{1}=\{a_{2}\}\ll_{\delta_{\Phi}}Y_{1}\), \(V_{2}=\{b_{2}\}\ll_{\delta_{\Phi}}Y_{2}\), and \(V_{3}=\{c_{2}\}\ll_{\delta_{\Phi}}Y_{3}\). Note that \(Y_{1}\neq Y_{2}\neq Y_{3}\). Furthermore, \(p|_{Y_{i}}:Y_{i}\to Y^{{}^{\prime}}\) is a descriptive proximal isomorphism for each \(i=1,2,3\). This proves that \(p\) is a descriptive proximal covering map. 
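As a small computational companion to Example 4.7, the sketch below (an assumed encoding; the descriptions and proximities of Figure 3.2 are not modelled) verifies only the set-level covering structure of \(p(a_{i})=p(b_{i})=p(c_{i})=d_{i}\): the preimage of a neighborhood decomposes into pairwise distinct sheets, and \(p\) restricted to each sheet is a bijection onto the neighborhood.

```python
# Set-level check of the covering structure in Example 4.7 (our encoding).
X = [f"{s}{i}" for s in "abc" for i in range(1, 5)]      # covering space points
p = {x: f"d{x[1]}" for x in X}                           # p(a_i)=p(b_i)=p(c_i)=d_i

def preimage(subset):
    return {x for x in X if p[x] in subset}

Y_prime = {"d1", "d3", "d4"}                             # neighbourhood of {d1}
sheets = [{f"{s}{i}" for i in (1, 3, 4)} for s in "abc"]  # Y_1, Y_2, Y_3

# (1) the preimage is the union of the sheets
assert preimage(Y_prime) == set().union(*sheets)
# (2) the sheets are pairwise disjoint, hence distinct
assert all((s & t) == set() for s in sheets for t in sheets if s is not t)
# (3) p restricted to each sheet is a bijection onto Y'
for s in sheets:
    assert {p[x] for x in s} == Y_prime and len(s) == len(Y_prime)
print("set-level covering structure verified")
```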
**Definition 4.8**.: A dpc-map \(p:(X,\delta_{\Phi})\to(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\) is said to have the descriptive proximal homotopy lifting property (DPHLP) with respect to a descriptive proximity space \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\) if, for an inclusion map \(i_{0}:(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\to(X^{{}^{ \prime\prime}}\times I,\delta^{1}_{\Phi})\) defined by \(i_{0}(x^{{}^{\prime\prime}})=(x^{{}^{\prime\prime}},0)\), for every dpc-map \(h:(X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\to(X,\delta_{\Phi})\), and dprox-hom \(G:(X^{{}^{\prime\prime}}\times I,\delta^{1}_{\Phi})\to(X^{{}^{\prime}},\delta^ {{}^{\prime}}_{\Phi})\) with \(p\circ h=G\circ i_{0}\), then there exists a dprox-hom \(G^{{}^{\prime}}:(X^{{}^{\prime\prime}}\times I,\delta^{1}_{\Phi})\to(X,\delta_ {\Phi})\) for which \(G^{{}^{\prime}}(x^{{}^{\prime\prime}},0)=h(x^{{}^{\prime\prime}})\) and \(p\circ G^{{}^{\prime}}(x^{{}^{\prime\prime}},t)=G(x^{{}^{\prime\prime}},t)\). **Definition 4.9**.: A map \(p:(X,\delta_{\Phi})\to(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\), which is a dpc-map, is said to be a descriptive proximal fibration if it has the DPHLP for any descriptive proximity space \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\). **Definition 4.10**.: Given two descriptive proximity spaces \((X,\delta_{\Phi})\) and \((X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\), a dpc-map \(h:X\to X^{{}^{\prime}}\) is said to have a dprox-hom extension property (DPHEP) with respect to a descriptive proximity space \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\) if there exists a dprox-hom \[F^{{}^{\prime}}:(X^{{}^{\prime}}\times I,\delta^{1^{{}^{\prime}}}_{\Phi})\to(X^{ {}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\] satisfying the conditions \(F^{{}^{\prime}}\circ i_{0}^{X^{{}^{\prime}}}=k\) and \(F^{{}^{\prime}}\circ(h\times 1_{I})=F\) for any dpc-map \(k:(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\to(X^{{}^{\prime\prime}},\delta ^{{}^{\prime\prime}}_{\Phi})\), and dprox-hom \(F:(X\times I,\delta^{1}_{\Phi})\to(X^{{}^{\prime\prime}},\delta^{{}^{\prime \prime}}_{\Phi})\) with the equality \(k\circ(h\times 1_{0})=F\circ i_{0}^{X}\), where the maps \(i_{0}^{X}:(X,\delta_{\Phi})\to(X\times I,\delta^{1}_{\Phi})\) and \(i_{0}^{X^{{}^{\prime}}}:(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\to(X^{{} ^{\prime}}\times I,\delta^{1{}^{\prime}}_{\Phi})\) are inclusions. **Definition 4.11**.: A dpc-map \(f:(X,\delta_{\Phi})\to(X^{{}^{\prime}},\delta^{{}^{\prime}}_{\Phi})\) is said to be a descriptive proximal cofibration if it has the DPHEP with respect to any descriptive proximity space \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\). **Example 4.12**.: Let \((X^{{}^{\prime\prime}},\delta^{{}^{\prime\prime}}_{\Phi})\) be a descriptive proximity space as in Figure 3.3, where \(\Phi\) is a set of probe functions which admits colors of given rounds. Assume that \(\gamma_{1}\) and \(\gamma_{2}\) are descriptive proximal paths on \(X^{{}^{\prime\prime}}\) such that \(\gamma_{1}\) is a descriptive proximal path from \(b\) to \(a\) and \(\gamma_{2}\) is a descriptive proximal path from \(b\) to \(c\). Let \(h:(\{0\},\delta_{\Phi})\to(I,\delta^{{}^{\prime}}_{\Phi})\) be an inclusion map. 
For a dpc-map \(k:(I,\delta^{{}^{\prime}}_{\Phi})\to(X^{{}^{\prime\prime}},\delta^{{}^{\prime \prime}}_{\Phi})\) defined as \(k=\gamma_{2}\), and a dprox-hom \(F:(\{0\}\times I,\delta^{1}_{\Phi})\to(X^{{}^{\prime\prime}},\delta^{{}^{ \prime\prime}}_{\Phi})\) defined by \(F(0,t)=\alpha(t)\) for all \(t\in I\) with the property \(k\circ(h\times 1_{0})=F\times i_{0}^{\{0\}}\), there exists a dprox-hom \[F^{{}^{\prime}}:(I\times I,\delta^{1{}^{\prime}}_{\Phi})\to(X^{{}^{\prime \prime}},\delta^{{}^{\prime\prime}}_{\Phi})\] defined by \(F^{{}^{\prime}}(0,t_{1})=F(0,t_{1})\) and \(F^{{}^{\prime}}(t_{2},0)=k(t_{2})\) for all \((t_{1},t_{2})\in I\times I\) which satisfy \[F^{{}^{\prime}}\circ(h\times 1_{I})=F,\] \[F^{{}^{\prime}}\circ i_{0}^{I}=k.\] In another saying, the diagram holds. ## 5. Conclusion A subfield of topology called homotopy theory investigates spaces up to continuous deformation. Although homotopy theory began as a topic in algebraic topology, it is currently studied as an independent discipline. For instance, algebraic and differential nonlinear equations emerging in many engineering and scientific applications can be solved using homotopy approaches. As an example, these equations include a set of nonlinear algebraic equations that model an electrical circuit. In certain studies, the aging process of the human body is presented using the algebraic topology notion of homotopy. In addition to these examples, one can easily observe homotopy theory once more when considering the algorithmic problem of robot motion planning. In this sense, this research is planned to accelerate homotopy theory studies within proximity spaces that touch many important application areas. Moreover, this examination encourages not only homotopy theory but also homology and cohomology theory to take place within proximity spaces. The powerful concepts of algebraic topology always enrich the proximity spaces and thus it becomes possible to see the topology even at the highest level fields of science such as artificial intelligence and medicine. ### Acknowledgment The second author is grateful to the Azerbaijan State Agrarian University for all their hospitality and generosity during his stay. This work has been supported by the Scientific and Technological Research Council of Turkey TUBITAK-1002-A with project number 122F454.
2304.11543
Hierarchical bubble size distributions in coarsening wet liquid foams
Coarsening of two-phase systems is crucial for the stability of dense particle packings such as alloys, foams, emulsions or supersaturated solutions. Mean field theories predict an asymptotic scaling state with a broad particle size distribution. Aqueous foams are good model systems for investigations of coarsening-induced structures, because the continuous liquid as well as the dispersed gas phases are uniform and isotropic. We present coarsening experiments on wet foams, with liquid fractions up to their unjamming point and beyond, that are performed under microgravity to avoid gravitational drainage. As time elapses, a self-similar regime is reached where the normalized bubble size distribution is invariant. Unexpectedly, the distribution features an excess of small \textit{roaming} bubbles, mobile within the network of \textit{jammed} larger bubbles. These roaming bubbles are reminiscent of rattlers in granular materials (grains not subjected to contact forces). We identify a critical liquid fraction $\phi^*$, above which the bubble assembly unjams and the two bubble populations merge into a single narrow distribution of bubbly liquids. Unexpectedly, $\phi^*$ is larger than the random close packing fraction of the foam $\phi_{rcp}$. This is because, between $\phi_{rcp}$ and $\phi^*$, the large bubbles remain connected due to a weak adhesion between bubbles. We present models that identify the physical mechanisms explaining our observations. We propose a new comprehensive view of the coarsening phenomenon in wet foams. Our results should be applicable to other phase-separating systems and they may also help to control the elaboration of solid foams with hierarchical structures.
Nicolo Galvani, Marina Pasquet, Arnab Mukherjee, Alice Requier, Sylvie Cohen-Addad, Olivier Pitois, Reinhard Höhler, Emmanuelle Rio, Anniina Salonen, Douglas J. Durian, Dominique Langevin
2023-04-23T05:15:36Z
http://arxiv.org/abs/2304.11543v2
# Hierarchical Bubble Size Distributions in Coarsening Wet Liquid Foams. ###### Abstract Liquid foams are destabilized by three coupled processes: gravity drainage, coalescence (fusion of bubbles) and coarsening (gas transfer between bubbles due to differences in capillary pressure). To focus on coarsening, coalescence can be suppressed by using suitable surfactants, but it is more difficult to suppress drainage, especially in wet foams where the liquid volume fraction is so large that bubbles are approximately spherical. To investigate the structure and time evolution of such foams, we have performed experiments in the International Space Station. Our observations reveal an unexpected excess of small bubbles, moving in the interstices of the randomly close-packed network of larger bubbles. These "roaming bubbles" naturally appear during the coarsening process. They were seemingly overlooked in previous ground-based foam coarsening experiments. We have indeed detected roaming bubbles by performing complementary studies of moderately wet foams \(\phi\lesssim 10\%\) where gravity can be counteracted on Earth by sample rotation. In foams with liquid fractions beyond the random close packing fraction of sphere dispersions with the same polydispersity as in our samples (\(\phi>\phi_{\text{rcp}}\)), the excess of small bubbles disappears, but we observe that bubbles still tend to remain connected due to weak adhesive interactions. A second transition is observed at a larger liquid fraction \(\phi^{*}\), above which the bubble size distribution narrows, becoming similar to the one previously reported for dilute coarsening dispersions (Ostwald ripening regime). We present models that identify the physical mechanisms explaining our observations. Our results suggest that, when solidified, coarsened wet liquid foams naturally have both macropores (foam bubbles) and micropores (roaming bubbles). Materials with such hierarchical structures can exhibit enhanced mechanical strength to density ratios. Foams Coarsening Ostwald Ripening ## Significance Statement Liquid foams are a dispersion of bubbles that have many applications where stability needs to be controlled. Bubble coalescence is easily suppressed, yet samples age by a combination of drainage due to gravity and coarsening due to diffusive gas exchange between bubbles. To suppress drainage and isolate the mechanism of coarsening, we studied foams on the International Space Station in the range of liquid fractions \(15\%<\phi<50\%\), where they are exceedingly difficult to study on Earth. Our experiments reveal an unexpected excess of small bubbles, which is at odds with existing theories and previous experimental results. The natural bubble size distribution created by coarsening makes wet liquid foams prime precursors in the production of hierarchical porous solids, presently the object of much interest. ## 1 Introduction In two-phase systems such as alloys, foams and emulsions, the sizes of grains, bubbles or drops evolve due to the diffusion of the dispersed phase through the continuous phase, from small domains with larger chemical potential, which shrink, towards larger ones with lower potentials, which grow. The theory of this process was elaborated by Lifshitz & Slyosov [30], and by Wagner [49], for highly diluted dispersions of precipitates. This theory also applies to dilute emulsions and bubbly liquids [46] where it is called Ostwald ripening. 
In agreement with experiments, the theory predicts that after a long time the domain size distribution, scaled by the average domain size, becomes invariant with time, a feature called _statistically self-similar_ growth [36]. In the Scaling State, the average domain size evolves asymptotically with time \(t\) as \(t^{1/3}\). The theory has been extended to smaller continuous phase volume fractions \(\phi\), down to \(\phi=0.7\) [2].

Domain growth due to diffusion is also observed in systems with continuous phase volume fractions so small that neighboring domains are in contact, such as foams (see Figure 1a); in this case, the process is called _coarsening_. When the liquid fraction \(\phi\) is decreased below a critical value, \(\phi_{\text{rcp}}\), contacts between neighboring bubbles are formed and their shapes progressively evolve from spheres to polyhedra in the limit \(\phi\to 0\) [5]. Equilibrium films separating neighboring bubbles are generally common black films with thicknesses of a few tens of nanometers. They are connected three by three to channels called _Plateau borders_, themselves connected at _vertices_. For disordered monodisperse foams, \(\phi_{\text{rcp}}\approx 36\%\), and \(\phi_{\text{rcp}}\) is expected to decrease slightly as polydispersity increases [13]. Experiments with 3D foams of small liquid fractions have shown that the average bubble radius grows at long times as \(t^{1/2}\) [10, 27, 21, 7], in contrast with the \(t^{1/3}\) scaling observed in the case of Ostwald ripening. The modification of the exponent is related to the mechanism of gas transfer between bubbles. In dry foams, it occurs mostly through the thin films between bubbles, whereas in bubbly liquids, gas is transferred through bulk liquid [36]. Another important feature of coarsening is the shape of the bubble size distribution. Several experimental and numerical works [15, 26, 32, 27, 47, 51] show that the normalized distribution is asymmetric, of the Weibull or lognormal type, in the regime associated with the \(t^{1/2}\) growth law, whereas for the \(t^{1/3}\) regime, it is more symmetric and narrower [30].

Foams are not only interesting model systems for coarsening studies; they also have numerous practical applications. Solidifying the continuous phase of liquid foams yields solid foams which inherit the structure of their precursors [50, 5, 28, 17]. They are widely used for packaging, insulation or as lightweight construction materials such as foamed cement or metallic foams. Their solid volume fraction is frequently chosen between \(20\%\) and \(50\%\), for instance, to confer sufficient mechanical strength [17]. The solid foam microstructure has an impact on its mechanical properties, for a given density. Hierarchical foam structures were predicted to have an order of magnitude improvement in mechanical strength-to-weight ratio with just two levels of hierarchy (large bubbles and much smaller bubbles in the interstices between them) [25]. Therefore, such hierarchical structures self-generated by foam coarsening, as we report here, could be of great interest for applications.

Foams are metastable systems and evolve with time not only because of coarsening but also due to gravity drainage [43, 5], and possibly due to rupture of liquid films separating neighboring bubbles, called coalescence. Since gravity drainage and coarsening are coupled, studying and modelling coarsening requires gravity drainage to be suppressed.
Pioneering foam coarsening experiments were performed with dry horizontal 2D foams (single layers of bubbles) where drainage was not an issue [45]. Studies of 3D foams on Earth are generally restricted to small liquid fractions \(\phi\ll 0.1\), where drainage is slow enough [4, 10]. To rule out artifacts related to gravity in 3D foams whatever the liquid fraction, we have performed foam coarsening experiments in microgravity, on board the International Space Station (ISS), where drainage is suppressed. Samples with arbitrary liquid volume fractions \(\phi\) can thus be studied over long times, up to several days, as required to investigate the Scaling State of foam containing a significant fraction of liquid.

## 2 Results and Discussion

### Excess of small bubbles

We have investigated foam coarsening for liquid fractions between 15% and 50% using the instrument described in [3]. Details can be found in the Materials and Methods section. From the sample surface observations (a typical image is shown in Fig. 1a), we measure the bubble sizes using image analysis, and determine the distributions of the bubble radius normalized by its average, \(\rho=R/\langle R\rangle\). The initial size distributions produced by our experimental setup are asymmetric (positive skew) with a maximum at \(\rho\approx 0.6\) (see Figure 1b for foam with \(15\%\) liquid fraction as an example). The normalized size distributions broaden with time, and a sharp peak builds up progressively for small bubble sizes, _i.e._ \(\rho\approx 0.3\), until a stationary form is reached, indicating a Scaling State. This is shown in Figure 1b for times \(t>2000\) s. This evolution is typical of the measurements we have made for foams with liquid fractions within the range \(15\%\leq\phi<\phi^{\star}\), with \(\phi^{\star}\approx 39\%\). The small bubbles corresponding to the peak in the distribution are highlighted in Figure 1a. After an increase in the transient regime, they finally represent about \(35\%\) of the total bubble population in the Scaling State (inset of Figure 1b). We also measured the number of those small bubbles per foam vertex, which reaches a maximum average value of 1.5 due to space limitation in the vertices. As a consequence, the size distribution becomes invariant in time (statistically self-similar), as observed.

Up to now, such an excess of small bubbles has not been reported in the literature [32, 15, 27, 51]. In order to check if distributions with an excess of small bubbles are also found in drier foams, we have performed coarsening experiments using the same surfactant and a liquid fraction of \(8\%\), low enough for gravity effects to be compensated in a ground-based experiment by rotating the cell around a horizontal axis (clinostat). As shown in Figure 1c, we observed a similar excess of small bubbles. The small bubbles were thus seemingly present, but not detected in previous studies. This is probably because high spatial resolution together with a careful image analysis is needed [39]. The only experimental work we have found that indirectly relates to this is that of Feitosa and Durian [15], which reports the development of transient bidispersity for initially monodisperse bubbles in a steady-state column, where drainage and coarsening occur simultaneously. In their simulations of 2D foam coarsening, Khakalo _et al._ [24] observed an excess of small bubbles, but the gas transfer through interstitial bulk liquid was not taken into account.
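For readers who wish to reproduce this kind of analysis, the short sketch below (our own illustration, not the authors' analysis code) shows how the normalized radii \(\rho=R/\langle R\rangle\), the Sauter mean radius \(R_{32}\), and the number fraction of small bubbles \(\mbox{f}_{\rm small}\) (bubbles with \(R<R_{t}\); the threshold \(R_{t}\) is defined in Section 2.2) can be extracted from a list of measured bubble radii. The synthetic lognormal radii and the factor \(0.4\,R_{32}\) used for \(R_{t}\) are assumptions chosen only for the demonstration.

```python
import numpy as np

def normalized_radii(radii):
    """Return rho = R / <R> for an array of bubble radii (any consistent units)."""
    radii = np.asarray(radii, dtype=float)
    return radii / radii.mean()

def sauter_mean_radius(radii):
    """R_32 = <R^3>/<R^2>, the mean radius used throughout the paper."""
    radii = np.asarray(radii, dtype=float)
    return (radii**3).mean() / (radii**2).mean()

def small_bubble_fraction(radii, R_t):
    """Number fraction of bubbles with R < R_t (cf. inset of Fig. 1b)."""
    radii = np.asarray(radii, dtype=float)
    return np.count_nonzero(radii < R_t) / radii.size

# Illustrative use with synthetic radii (micrometres); real data come from
# image analysis of the foam surface.
rng = np.random.default_rng(0)
radii = rng.lognormal(mean=np.log(200.0), sigma=0.5, size=5000)
rho = normalized_radii(radii)
R32 = sauter_mean_radius(radii)
print(f"max(rho) = {rho.max():.1f},  R32 = {R32:.0f} um,  "
      f"f_small(R_t = 0.4*R32) = {small_bubble_fraction(radii, 0.4 * R32):.2f}")
```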
For \(\phi>\phi^{\star}\) we have observed a different scenario: the initial bubble size distribution narrows until a steady state is reached where the size distribution is notably narrow (see Fig. S1 in the SI). The latter distribution is reminiscent of the theoretical distribution predicted for the Ostwald regime [30, 49]. Around \(\phi^{\star}\), a change in the growth laws for the average bubble size is also observed for the same foam samples [39]: \[R_{32}^{2}(t)=R_{32}^{2}(0)+\Omega_{p}\;t\qquad\mbox{for}\quad\phi<\phi^{\star} \tag{1}\] \[R_{32}^{3}(t)=R_{32}^{3}(0)+\Omega_{c}\;t\qquad\mbox{for}\quad\phi>\phi^{\star} \tag{2}\] The Sauter mean radius \(R_{32}=\langle R^{3}\rangle/\langle R^{2}\rangle\) is defined as the ratio of the third to the second moment of the bubble radius distribution.

Figure 1: Excess of small bubbles. (a) Image of foam surface (\(\phi=15\%\)) in the Scaling State regime. Yellow stars have been superimposed on the image to highlight the small bubbles corresponding to the sharp peak in the distribution shown in (b). (b) Probability density function of normalized bubble radius \(\rho=R/\langle R\rangle\) at different foam ages as indicated, for a foam with liquid fraction \(\phi=15\%\). The curve corresponding to age \(>\) 2000 s represents the Scaling State regime, for which the normalized distribution no longer evolves. Inset: evolution of the proportion of small bubbles as a function of time. The number fraction \(\mbox{f}_{\rm small}\) is obtained by dividing the number of bubbles with radius \(R<R_{t}\) by the total number of bubbles in the sample (see Section 2.2 for details). A change in \(R_{t}\) by \(\pm 5\%\) induces a variation of \(\mbox{f}_{\rm small}\) smaller than the point size. (c) Probability density function of normalized bubble radius at different ages as indicated, for a sample with liquid fraction \(\phi=8\%\) studied on the ground. Similar data are shown for other liquid fractions between \(\phi=20\%\) and \(\phi=33\%\) in the Supplementary Information (cf. Figure S2).

### Transition from foam bubbles to roaming bubbles

To clarify the origin of the hierarchical bubble population, we have identified bubbles that eventually disappear and tracked the evolution of their area. Figure 2a shows examples of such measurements in a foam with \(\phi=15\%\). Over time, the individual bubble area can either increase or decrease, depending on the bubble's gas exchanges with its neighbours, but most of the observed bubbles eventually shrink (see Figure 2a). The magnitude of the shrinking rate appears to be initially similar to that characterizing the initial growing rate. Then, a transition occurs and the area decreases much more slowly. Actually, the shrinking after this transition can be extremely slow, and we think this is the underlying mechanism explaining why a peak at smaller-than-average bubble sizes builds up in the size distribution. Remarkably, the bubble radius at the transition, \(R_{t}\), is such that its area \(A_{t}=\pi R_{t}^{2}\) increases linearly with time, which is similar to the evolution of the squared mean radius in the Scaling State (Eq. 1). Moreover, the transition to the very small shrinking rate appears to occur when the bubble has become so small that it fits inside the interstice between three larger bubbles at the surface, and possibly loses contact with them, as sketched in Fig. 2b (see movies S1-S3 in the SI). They are free to move throughout the interstice without being pressed against multiple neighbors.
Such small bubbles can have different configurations in the interstice, _i.e._ near the center of the interstice or in contact with one bubble or two bubbles, but these configurations do not last for the entire life of the bubbles because their positions are jostled as the foam bubbles intermittently rearrange due to the coarsening-induced dynamics [8, 5]. We call them _roaming bubbles_. Note that they are reminiscent of rattlers (grains carrying no force) in granular media [1]. We conjecture that the bubble size at the transition, \(R_{t}\), should scale as the maximum radius of a sphere that can be trapped in such an interstice at the wall surface. In a coarsening foam that has reached the Scaling State, there is only one independent length scale of the bubble packing structure. Since the bubbles that form the interstices are bigger than the encaged roaming bubbles, we chose to characterize their average size by the Sauter mean radius. Compared to \(\langle R\rangle\), \(R_{32}\) indeed represents mainly the average radius of the larger bubbles of the distribution and minimizes the contribution of the small bubbles. At a time \(t\), the maximum radius of a sphere trapped in such a vertex can be written, on average: \[R_{t}(t,\phi)=x_{n}(\phi)R_{32}(t) \tag{3}\] where \(x_{n}(\phi)\) is a dimensionless geometrical coefficient. We show in Figure S3 of the SI the plots of \(R_{t}\) versus \(R_{32}\) for each liquid fraction. The plots are reasonably described by equation 3, allowing determination of the average coefficient \(x_{n}\) for each liquid fraction (see Figure 2c). \(x_{n}(\phi)\) varies from \(0.25\) to \(0.55\) as \(\phi\) varies from \(15\%\) to \(38\%\), respectively. Using those \(x_{n}\) values, the transition radii \(R_{t}\) collapse on a linear master curve when plotted versus \(x_{n}(\phi)R_{32}\) (cf. Figure S3 of the SI).

Figure 2: Roaming transition: (a) Evolution of the area of individual bubbles as a function of foam age, measured as the time elapsed since the end of the foam sample production, for \(\phi=15\%\). The area \(A_{t}=\pi R_{t}^{2}\) denotes the bubble area at the wall when its shrinking abruptly slows down (see text). Each label corresponds to a different bubble. (b) The transition to the very small shrinking rate was observed to occur when the foam bubble has become so small that it fits inside the interstice between neighboring larger bubbles. The corresponding geometrical transition can therefore be described as follows: when its radius is larger than \(R_{t}\), the small bubble is a foam bubble, in the sense that it shares thin liquid films with its neighbors. In contrast, as its radius reaches values smaller than \(R_{t}\), the bubble loses its contacts with its neighbors: it becomes a roaming bubble and its shrinking rate is strongly decreased. (c) Coefficient \(x_{n}=R_{t}/R_{32}\) as a function of \(\phi\). Filled orange disks: values deduced from the tracking of individual bubbles. Error bars show \(\pm 3SD\), to highlight the observed variability. Black stars/drawings: calculation of \(x_{n}\) from the size of a hard sphere (in red) that can be inserted into the interstice formed by three spheres at the wall, assuming either a compact bubble cage (bottom) or a slight loosening (top) of the latter. The dotted line corresponds to equation 4 with \(\xi=2.2\).

We have performed a geometrical calculation of the size of the interstice between a plane and three identical spheres of radius \(R_{32}\), in mutual contact and in contact with the plane (see Figure 2c).
This leads to \(x_{n}=1/3\). This value is smaller than what is measured for liquid fractions corresponding to the bubble random close packing fraction, _i.e._ \(\phi_{\text{rcp}}\approx 31\%\) (see section 2.4 for more details), beyond which the bubbles are spherical. As the liquid fraction gets close to \(\phi_{\text{rcp}}\), the foam osmotic pressure, which pushes neighboring bubbles against each other at contacts, becomes very low, and it can be inferred that the cage formed by the triplets of bubbles of radius \(R_{32}\) loosens. Note that such a geometrical loosening effect is general and independent of friction [9]. Therefore, as a correction to the previous calculation, a distance \(\epsilon R_{32}\) is added around each sphere (see Figure 2c). The coefficient now reads: \(x_{n}=\frac{\epsilon(2+\epsilon)+\frac{4}{3}(1+\epsilon)^{2}}{2(2+\epsilon)}\approx\frac{1}{3}+\epsilon\), and it increases significantly due to the loosening effect: assuming a moderate loosening \(\epsilon\approx 0.2\) gives \(x_{n}\approx 0.53\), which is in better agreement with our measurements (see Figure 2c and movie S2). It is reasonable to assume that polydispersity may also impact the size of the interstice. This effect can be estimated by considering two bubbles of size \(R_{32}\) and a third one with size \(\beta R_{32}\). It can be shown that in such a case, the coefficient reads \(x_{n}\approx\frac{1}{3}+0.11(\beta-1)\). Therefore, the magnitude of the polydispersity effect is much weaker than the previous one, in addition to the fact that it can work in both directions, depending on the value of \(\beta\), which we observed to vary in the range \(0.3<\beta<1.5\) (see Figure S4 of the SI). However, it is worth noting that a significant fraction of bubbles have a radius larger than \(R_{32}\), _i.e._ \(1\leq\beta\leq 1.5\), and that almost half of the nodes are bounded by one such large bubble (see Figure S4 as an example for foam with \(15\%\) liquid). These findings suggest that the effect of polydispersity is only slightly positive, and should only slightly increase \(x_{n}\), i.e. the size of the wall interstice. We conclude that the loosening of the bubble packing is the main effect accounting for the measured \(x_{n}\) values.

To extend our prediction to any liquid fraction \(\phi\leq\phi_{\text{rcp}}\), we turn to [31], where the radius of passage of a hard sphere through the liquid channels (so-called Plateau borders [5]) was determined as a function of \(\phi\) and bubble radius \(R\) in a monodisperse foam. Due to the uniformity of the capillary pressure through the foam, which sets the radius of curvature of the channels, and thus their cross-section, the bubble radius at the transition \(R_{t}\) should be proportional to this radius of passage. Following the approach proposed in [31], we refer to the effective pore radius introduced by Johnson _et al._ [22]: \(\Lambda\approx(8\widetilde{k}/\widetilde{\sigma})^{1/2}R\), where \(\widetilde{k}\) is the dimensionless liquid Darcy permeability through the foam structure, _i.e._ \(k/R^{2}\), and \(\widetilde{\sigma}\) is the ratio of the electrical conductivity of the foam to that of the foaming liquid. Therefore, the expression sought for \(x_{n}\) is: \[x_{n}=\xi(8\widetilde{k}/\widetilde{\sigma})^{\frac{1}{2}} \tag{4}\] where \(\xi\) is a geometrical coefficient to be determined.
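For completeness, here is a brief sketch of the elementary geometry behind the values quoted above (our reading of the construction of Figure 2c; it is only a consistency check of the coefficients, not taken from the paper's SI). Consider three spheres of radius \(R_{32}\) resting on the wall, with their centers at height \(R_{32}(1+\epsilon)\) and mutually separated by \(2R_{32}(1+\epsilon)\) (the case \(\epsilon=0\) is the compact cage, \(\epsilon>0\) the loosened cage), and a trapped sphere of radius \(r\) touching the wall and the three large spheres. Writing the distance between the center of the trapped sphere and the center of a large sphere gives \[\left[\frac{2R_{32}(1+\epsilon)}{\sqrt{3}}\right]^{2}+\left[R_{32}(1+\epsilon)-r\right]^{2}=(R_{32}+r)^{2},\] whose solution is \[x_{n}=\frac{r}{R_{32}}=\frac{\epsilon(2+\epsilon)+\frac{4}{3}(1+\epsilon)^{2}}{2(2+\epsilon)}\approx\frac{1}{3}+\epsilon,\] which reduces to \(x_{n}=1/3\) for \(\epsilon=0\) and gives \(x_{n}\approx 0.53\) for \(\epsilon=0.2\), consistent with the values quoted above.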
To continue, we now need expressions for \(\widetilde{k}\) and \(\widetilde{\sigma}\). Since \(\Lambda\) was initially proposed for solid porous media, the permeability should correspond to a foam having rigid interfaces, to mimic solid-like boundary conditions. As studied by Rouyer _et al._[41], its expression is given by: \(\widetilde{k}=\phi^{2}/(312(1-2.15\phi+1.37\phi^{2})^{2})\) within the range of liquid fractions \(1\%\leq\phi\leq 40\%\). For foams and bubbly liquids, Feitosa _et al._[14] proposed an approximate analytical expression for \(\widetilde{\sigma}\), _i.e._\(\widetilde{\sigma}=2\phi(1+12\phi)/(6+29\phi-9\phi^{2})\). Using these expressions, we set \(\xi=2.2\) in equation 4 in order to get a predicted value of \(x_{n}\) close to the measured value 0.53 for \(\phi\approx\phi_{\text{rep}}\) (see Figure 2c). Remarkably, the agreement with our experimental data is very good over the whole range of liquid fractions, which reinforces the physical picture that \(R_{t}\) actually corresponds to the size of the interstices formed by the foam bubbles around the roaming bubbles. Note that in all of the above, nothing is really specific to the fact that we are looking at the wall. In bulk, typical interstices are formed by four bubbles in a tetrahedral assembly. The geometrical calculation for four bubbles in contact gives \(\sqrt{\frac{3}{2}}-1\approx 0.225\), compared to \(1/3\) at the wall. Therefore we can estimate \(x_{n}\) for the bulk by using equation 4 with the coefficient \(\xi=2.2\times(0.225/0.333)\approx 1.5\). Provided this value is used, the behavior observed at the wall should be similar to the behavior observed in the bulk of the foam.

### Dissolution rate of the roaming bubbles

In this section, we focus on the dissolution rate of the roaming bubbles in the range \(\phi<\phi^{*}\). We first consider the data for times longer than those that mark the intersection of the dissolution curve with \(A(t)=\pi R_{t}^{2}\) (Figures 2a and S2 in the SI). We follow the evolution of the radius of roaming bubbles \(R(t)\) for \(R(0)\lesssim R_{t}\). For comparison, we similarly analyze individual bubbles roaming in the bubbly liquids (\(\phi>\phi^{*}\)), from the instant they start to continuously shrink. Several examples of the curves are presented in Figure 3a. We observe that the following function fits well all the curves [11, 34]: \[R^{2}(t)=R^{2}(0)-\Omega_{r}t \tag{5}\] where the only fitted parameter \(\Omega_{r}\) represents the dissolution rate of the roaming bubble. Such fits were performed for all the liquid fractions and the average values of \(\Omega_{r}\) are presented in Figure 3b. \(\Omega_{r}\) is found to depend only weakly on liquid fraction: \(\Omega_{r}\approx 1-2~{}\mu\text{m}^{2}/s\). We also plot on Figure 3b the growth rate \(\Omega_{p}\) that characterizes the coarsening of the foam in the Scaling State (Eq. 1). It appears that \(\Omega_{p}\gg\Omega_{r}\) for \(\phi\lesssim\phi_{\text{rep}}\approx 31\%\), and \(\Omega_{p}\approx\Omega_{r}\) for \(\phi_{\text{rep}}<\phi<\phi^{*}\). This comparison reinforces our discussion in section 2.1: the size of the roaming bubbles, represented on the left side of the distribution, varies more slowly than the average bubble size. As a result, the roaming bubbles accumulate in the interstices formed by the larger bubbles.

Figure 3: Roaming bubble dissolution: (a) Radius evolution of dissolving roaming bubbles, where each curve represents a single bubble. The solid lines correspond to fits of Eq. 5. (b) Average shrinking rate of roaming bubbles \(\Omega_{r}\) as a function of liquid fraction, compared to the growth rate of the average bubble size in the foam \(\Omega_{p}\) (Eq. 1, data from [40]). The lines are guides to the eye. \(\Omega_{r}\) values fall within the range (highlighted in green) predicted by the shell model (Eq. 1 in the SI), schematically illustrated by the inset drawing. Error bars correspond to \(\pm 1SD\). The growth rate \(\Omega_{p}\) is strongly dependent on the liquid fraction, in contrast to the dissolution rate \(\Omega_{r}\). (c) Measured shape parameter \(\sigma_{2}\) of the foam-bubble size distribution (Eq. 6 in the SI) as a function of liquid fraction (blue circles). The (orange) continuous line represents the maximum packing volume fraction predicted for a lognormal distribution of spheres with shape parameter \(\sigma\)[12, 13]. The gray vertical area highlights the range where \(\sigma\) and \(\sigma_{2}\) coincide, from which we deduce \(\phi_{\text{rep}}\approx 30-32\%\). This also corresponds to the range of liquid fractions where \(\Omega_{r}\) is comparable to \(\Omega_{p}\) in b.

As the dissolution rate \(\Omega_{r}\) plays a crucial role in the accumulation mechanism of the roaming bubbles, we seek here to understand this value. The starting point is the comparison of our data with the theory for the dissolution of isolated bubbles [11, 34], which gives the steady dissolution rate, far enough from the final instant of bubble disappearance, as: \(\Omega_{r}=-dR^{2}/dt=2D_{m}V_{m}\left(c(R)-c_{\infty}\right)=2D_{m}V_{m}\text{He}P_{0}(1-\zeta)\), where the saturation parameter \(\zeta=c_{\infty}/\text{He}P_{0}\) characterizes the gas saturation of the liquid environment, \(c(R)\) and \(c_{\infty}\) are respectively the gas concentrations in the liquid at the bubble surface and at infinity, \(P_{0}\) is the gas pressure at infinity, He and \(D_{m}\) are respectively the Henry solubility and the diffusion coefficient of the air molecules in the foaming solution, and \(V_{m}\) is the molar volume of the gas at the pressure \(P_{0}\). From the measured \(\Omega_{r}\), we deduce an effective value for the saturation parameter: \(\zeta=0.973-0.987\), which suggests that the bubbles dissolve faster than if they were isolated, despite the presence of the large neighbouring bubbles which impose at their interface a gas concentration larger than \(\text{He}P_{0}\). To explain this apparent contradiction, it is important to understand that the gas transfer is controlled by the concentration gradient, and not only by the concentration difference. Due to the short distances between the roaming bubble interface and the interfaces of the large neighbouring bubbles, the concentration gradient around the roaming bubble reaches relatively high values compared to the case of the isolated bubble. Therefore, to mimic this situation, we consider the configuration illustrated in the inset of Figure 3b, where a roaming bubble of radius \(R\) is centered in a cavity of radius \(R_{t}\) and is surrounded by a liquid shell of thickness \(R_{t}-R\). The _local_ concentration at the outside boundary of the shell is estimated as that at the surface of a bubble of average size \(R_{32}\). From Fick's first law, we then predict the bubble dissolution rate \(\Omega_{r}\) in that shell environment (see more details in the SI).
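For reference, the isolated-bubble expression above maps the measured rates onto the quoted effective saturation parameters. The quick numerical check below is only a sketch, using the solution properties listed in Materials and Methods; the ambient pressure and the ideal-gas molar volume are assumptions introduced here for illustration.

```python
# Sketch: evaluate Omega_r = 2 D_m V_m He P0 (1 - zeta) for the quoted
# effective saturation parameters and check it reproduces ~1-2 um^2/s.
D_M = 2.0e-9      # m^2/s, diffusion coefficient of air in the foaming solution
HE = 7.4e-6       # mol m^-3 Pa^-1, Henry solubility of air in the solution
P0 = 1.013e5      # Pa, assumed ambient pressure
V_M = 0.0244      # m^3/mol, ideal-gas molar volume at ~25 C (assumption)

def omega_r(zeta):
    """Isolated-bubble dissolution rate in um^2/s for saturation parameter zeta."""
    return 2 * D_M * V_M * HE * P0 * (1 - zeta) * 1e12  # 1 m^2 = 1e12 um^2

for zeta in (0.973, 0.987):
    print(f"zeta = {zeta}: Omega_r ~ {omega_r(zeta):.1f} um^2/s")
# -> ~1-2 um^2/s, i.e. the measured range corresponds to zeta = 0.973-0.987.
```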
For the range of values of \(R_{32}\) in the scaling state in our experiments and typical ratio \(R/R_{t}\), we expect \(\Omega_{r}\approx 0.75-4~{}\mu\text{m}^{2}/\text{s}\) which provides boundaries consistent with the measured values of \(\Omega_{r}\) (cf. Fig. 3b). A drawback of this shell-like model is that the roaming bubble is assumed to remain at the center of the interstice, which is not always the case. Indeed, we often noticed transient apparent contacts between the roaming bubble and either one of the bubbles delimiting the interstice or two larger bubbles forming a corner. These transient contacts can result from adhesive forces. We have indeed observed that under microgravity conditions, persistent aggregates form spontaneously in dilute bubble dispersions. In complementary ground-based experiments, we have observed a contact angle close to \(3-4^{o}\)[40]. The underlying configuration may be an adhesive contact with the formation of a liquid film that slightly flattens the bubbles or it can be a near-contact with a small separation distance so that the roaming bubble is spherical. Since it was not possible to distinguish between these two types of contact, we estimated the dissolution rate for both cases (See details of the calculation in the SI). In the range of average bubble sizes \(R_{32}\) of our experiments, assuming a film thickness effective for the transport of gas of the order of 40-60 nm [40], we found that the expected rates fall within the range of values measured for \(\Omega_{r}\). This remains broadly true if the bubble is in a corner, where the corresponding dissolution rate is twice larger. Therefore, whatever the configuration considered for the roaming bubble in the interstice, we find values for its dissolution rate that are compatible with our measurements, which gives robustness to the proposed mechanism based on the accumulation of long-lasting roaming bubbles in the foam interstices. ### Bubble size distributions and random close packing fraction in the Scaling State Let us analyze now the role of liquid fraction on the distribution shape. Details on the analysis are given in the SI. Figure 4 shows the normalized bubble size distributions observed in the Scaling State for each sample liquid fraction. The PDF for \(\phi=15\%\) is the same as that of Figure 1 in the Scaling State. It exhibits a prominent narrow peak, that we identified to the roaming bubble population in Section 2.1, followed by a broad peak for the foam bubble population. These features qualitatively persist up to \(\phi<38\%\) but the narrow peak progressively shifts towards larger \(\rho\) while its height decreases. For \(\phi\geq 40\%\), PDFs exhibit a single peak, which is consistent with the fact that all bubbles should be roaming bubbles. PDFs become narrower as \(\phi\) increases and their peak height increases. This qualitative change is also captured by the abrupt variation of statistical quantities like polydispersity and standard deviation (cf. Fig. S5 of the SI). None of the existing theories predict such distributions [2]. These findings indicate a cross-over between qualitatively different PDFs occurring for a liquid fraction \(\phi^{\star}\approx 39\%\). This transition coincides with the observed change of growth laws Eq. 1 and Eq. 2 and it is attributed to the onset of the formation of a foam gel due to weak attraction between bubbles as evidenced by finite contact angle at films junctions [40]. 
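The two-population character of these distributions is quantified in Figure 4 by bi-lognormal fits (Eq. 6 of the SI). The sketch below illustrates this kind of fit with a standard two-component lognormal mixture; the exact parametrization and the data are given in the SI, so the functional form and the numbers used here are assumptions for illustration only.

```python
# Sketch of a two-population fit: a weighted sum of two lognormal components
# for the roaming and foam bubbles (an assumed stand-in for Eq. 6 of the SI).
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import curve_fit

def bi_lognormal(rho, w, m1, s1, m2, s2):
    """PDF of normalized radius rho = R/<R> as a mixture of two lognormals."""
    roaming = lognorm.pdf(rho, s=s1, scale=m1)
    foam = lognorm.pdf(rho, s=s2, scale=m2)
    return w * roaming + (1 - w) * foam

# Synthetic example standing in for binned (rho, pdf) data from image analysis.
rho = np.linspace(0.05, 3.0, 60)
pdf_data = bi_lognormal(rho, 0.2, 0.3, 0.25, 1.1, 0.45)
popt, _ = curve_fit(bi_lognormal, rho, pdf_data,
                    p0=[0.3, 0.4, 0.3, 1.0, 0.5], maxfev=10000)
w, m1, s1, m2, s2 = popt
print(f"roaming weight w = {w:.2f}, sigma_2 (foam population) = {s2:.2f}")
```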
Figure 4: Bubble size distributions of normalized radius \(\rho=R/\langle R\rangle\) for each liquid fraction, as labelled. The data are represented by black continuous lines. The green dashed lines represent the bi-lognormal PDFs (see Eq. 6 in the SI) fitted to the data. The red (resp. blue) shaded area corresponds to the roaming bubble PDF \(w\,\mathcal{L}(\rho;m_{1},\sigma_{1})\) (resp. to the foam bubble PDF \((1-w)\,\mathcal{L}(\rho;m_{2},\sigma_{2})\)), with the parameters given in Fig. S5 of the SI. In the plots for \(\phi\) up to \(38\%\), the width of the roaming bubble distributions is characterized by \(\rho_{t}\), defined in Eq. 8 in the SI. For \(\phi=15\%\), the dotted line is the PDF predicted for wet foams by Markworth [33] based on Lemlich's model [29] for that \(\phi\). As a comparison, for \(\phi=50\%\), the dotted line is the LSW prediction [2] (\(\phi=1\)).

The expected jamming liquid fraction for randomly close-packed monodisperse hard spheres is \(\phi_{\rm rep}=36\%\). However, polydispersity will reduce this value since smaller bubbles can fit into the interstices between larger ones. This effect has been predicted by numerical simulations of polydisperse close packings of spherical particles with a lognormal PDF, as a function of the shape parameter \(\sigma\)[12, 13]. In our foams, the close packing concerns the population of foam bubbles, which are connected to each other _via_ films. Therefore, we compare the measured shape parameter of the foam bubble distribution \(\sigma_{2}\) to the predicted ones (cf. Fig. 3c). We find them to coincide within the range \(\phi=30\%\) to \(\phi=32\%\): we expect the close packing fraction \(\phi_{\text{rep}}\) of our foams to lie inside the range between these two values. To provide an independent result for the close packing fraction of frictionless spheres with the polydispersity observed in our samples in the Scaling State, we have performed molecular dynamics simulations. Since here we are only interested in the geometrical sphere packing problem at the jamming point, where the confinement pressure and interaction forces drop to zero with increasing \(\phi\), we expect the nature of the interaction law used in the simulations to have only a minor impact. Using Hertzian interactions, in the framework of the molecular dynamics code LAMMPS (see Materials and Methods), we obtained \(\phi_{\text{rcp}}=30.5\pm 0.5\%\), in remarkable agreement with our analysis based on the work of Farr and Groot [13]. Note that, strictly speaking, our simulations only provide an upper bound for the optimal random close packing fraction of such polydisperse spheres, which may be obtained by more sophisticated simulation procedures described in the literature [23]. However, in the context of our experiments, the truly relevant packing fraction is that of a coarsening foam. In this case we expect a local packing which is not exactly the most compact possible one. A jammed foam regularly undergoes rearrangements, helping it to settle into new minimal energy configurations. This implies that in between rearrangements, the packing is not always optimally close-packed. Simulations of this, where we also replace the Hertzian interaction by the more realistic Morse-Witten law [35, 20], are the subject of ongoing work.

### Potential consequences on foam properties

The roaming bubbles represent only a small fraction of the foam volume, of the order of a few percent.
However if this ratio is counted with respect to the liquid volume, it is larger, up to ten percent depending on \(\phi\). It can therefore be expected that their impact is important for certain properties, such as foam drainage, where they slow down the flow of the liquid. A study with solid spheres, located in the nodes of the liquid network of the foam, showed that such an amount of particles in the liquid could reduce the permeability of the foam by \(40\%\)[42]. Let us mention that to date, this effect has never been taken into account in permeability modelling. On another hand, in the production of foamed insulation materials, the yield stress of the foamed material will prevent gravity from evacuating the roaming bubbles: one expects to find relatively high volume fractions of roaming bubbles in such systems, as suggested by recent work [16, 18]. Note that the stakes are high in terms of producing solid foam structures with hierarchical porosity, including both macro- and micro-scaled pores. Such structures have recently been produced by 3D printing [6] and they were found to present enhanced energy absorption properties and enhanced mechanical resistance to cyclic loading. ## Conclusions Coarsening studies of foam samples where the liquid fraction remains constant over periods of several days, without any confounding effects of gravitational drainage, reveal that their natural size distribution shows a well-defined peak towards small sizes, _i.e._ an excess of bubbles is observed for sizes close to \(0.3\langle R\rangle\). This feature, which we show to appear in liquid foams in the Scaling State, was not expected based on existing theories. Surprisingly, no previous experimental study mentions the presence of these small bubbles, although we have been able to reproduce this effect on Earth. We show that during coarsening, when the bubbles that shrink become smaller than the size of the interstices between the larger bubbles, they can disconnect from the network of larger bubbles: we call them _roaming_ bubbles. The dissolution rate of these roaming bubbles was found to be approximately constant, whatever the liquid fraction of the samples was. The order of magnitude of the dissolution rate is consistent with calculations based on the gas transfer through the liquid shell that surrounds the roaming bubble, or through the "contact" between one roaming bubble and larger foam bubbles surrounding them. The key point in the accumulation of the small bubbles in the interstices formed by the larger bubbles, is the fact that the rate of disappearance of these bubbles is much smaller than the growth rate of the foam bubbles. This behaviour was observed for samples with liquid fraction smaller than \(\approx\phi_{\text{rep}}\). In contrast, for \(\phi>\phi_{\text{rep}}\) the roaming bubbles disappear at a rate which is comparable to the growth rate of the foam bubbles, which kills the accumulation mechanism. For even larger liquid fractions, the bubble assembly will enter the regime of _bubbly liquids_, where all the bubbles are expected to be roaming bubbles. As a consequence, the peak initially observed for liquid fractions \(\phi<\phi_{\text{rep}}\) shifts towards \(\langle R\rangle\) and a distribution almost centered on \(\langle R\rangle\), characteristic of bubbly liquids, is eventually observed. In closing, we have shown the existence of naturally-developed hierarchical bubble size distributions in coarsening foams. 
The persistent co-existence of usual foam bubbles with small roaming bubbles, challenges our current understanding of foam coarsening and has potential implications in the design and performance of foamy materials. ## Materials and Methods The foams were made with aqueous solutions of a ionic surfactant, tetradecyl-trimethyl-ammonium bromide (TTAB), with purity \(\geq 99\) % and used as received from Sigma-Aldrich. It was dissolved at 5 g/L in ultrapure water (resistivity 18.2 M\(\Omega\)-cm). This concentration is 4 times larger than the critical micellar concentration and large enough to prevent coalescence. The surface tension of the TTAB solution measured at room temperature is: \(\gamma=37.1\) mN/m. The Henry solubility coefficient of the air molecules in the foaming solution is [40]\(He=7.4\)\(10^{-6}\) mol m\({}^{-3}\) Pa\({}^{-1}\)[40] and their diffusion coefficient in the foaming solution is [40]\(D_{m}=2.0\)\(10^{-9}\) m\({}^{2}\)s\({}^{-1}\). The majority of the experiments were performed on board the International Space Station using the experiment container described in [3]. In this environment, the residual gravity acceleration fluctuations are reported to be on the order of or less than a ug, for frequencies below 0.01 Hz [37]. Each foam cell was filled on Earth with a given volume of foaming solution (measured by weight at controlled temperature) and air, then hermetically sealed. The liquid volume fraction \(\phi\) contained in each cell was deduced from the liquid volume and the total cell volume. After the completion of the experiments, the cells were send back to Earth and we checked that their weight had varied by less than 1%. All the experiments were repeated three times and found reproducible, even a few months apart. In addition, we made a Ground experiment with a foaming liquid of composition identical to that of the ISS experiments. The foams were produced with the double syringe method [39] filled with air and a volume of the foaming solution in order to set the liquid fraction to 7.8%\(\pm 0.2\%\). Note that the initial bubble size distribution with this foam production is close to that of the Scaling State. The sample was placed in a cylindrical cell (diameter 30 mm, thickness 12.8 mm) with transparent flat faces. The cell was kept with its symmetry axis aligned in the horizontal direction and rotated about this axis with a speed of rotation equal to 15 rpm. Foam age is counted from the instant when the foaming process stops. Bubbles at the surface of the sample are recorded using a video camera. Every image (such as the one shown in Fig. 1a) was analysed as described in [39]. We checked that the radial profile of liquid fraction remained constant throughout the measurement duration, indicating that the effect of gravity drainage was indeed counteracted and that the rotation did not induce radial drainage either.The bubble area \(A\) deduced was from the area inside the contour of the bubbles measured using the ellipses method [39]. Finally, the bubble radius is calculated as \(R=\sqrt{A/\pi}\). In the ISS experiments, simultaneously to the video recording, the intensity of light transmitted through the sample was recorded, which provided the average bubble size in the bulk of the sample as explained in [39]. Our results showed that the evolution of the average bubble radius measured either at the surface or in the bulk are similar. We also performed numerical simulations to evaluate the random close packing liquid fraction of the bubbles. 
In the framework of the molecular dynamics code LAMMPS [48], a cubic simulation box was filled by spheres with repulsive, Hertzian interactions with radii randomly chosen from a distribution corresponding to the one we observe experimentally for \(\phi=33\%\) in the Scaling State (see Figure 4). The number of spheres was of the order of 2000, similar to our foam coarsening experiments at the largest investigated foam ages. To fill the simulation cell, we started with an initial cell volume so large that the sphere dispersion was highly diluted. Using the pressostat provided by LAMMPS, we then shrunk the cubic cell and compacted these structures until a very small osmotic pressure appeared. We then turned off the pressostat and equilibrated the sample for imposed simulation box volumes, varied by small steps around the previous value. The close packing fraction was estimated by plotting confinement pressure versus packing fraction, and by detecting the \(\phi\) value where zero pressure is reached within numerical accuracy. We did this for 5 different initial random seeds, and found \(\phi_{\text{rcp}}=30.5\pm 0.5\%\). The way you compact a packing has a large impact on the final close packing fraction in frictional granular materials, and to a lesser extent also in frictionless systems. To investigate this effect, we applied simulated gravity to dilute sphere dispersions as an alternative to the initial pressostat procedure. Kinetic energy was dissipated by introducing viscous friction in the contact law. This procedure mimics foams that form when a bubbly liquid is subjected to buoyancy, as it is common on earth. Once equilibrium was reached, we switched off gravity, and simulated pressure versus packing fraction as previously. The final values of \(\phi_{\text{rcp}}\) are within statistical errors the same as the those obtained with the pressostat. ## Author contributions M.Pasquet, E.Rio, A.Salonen, D.Langevin, S.Cohen Addad, R.Hohler and O.Pitois contributed to the preparation and follow-up of the ISS experiments, N.Galvani, M.Pasquet and O.Pitois performed the image analysis, N.Galvani, O.Pitois and S.Cohen Addad performed the analysis of the behavior of the roaming bubbles, N.Galvani performed the ground experiments, A.Mukherjee and R.Hohler performed the numerical simulations, all the authors participated in the discussions and writing of the manuscript. ## Acknowledgments We acknowledge funding by ESA and CNES (via the projects "Hydrodynamics of Wet Foams") focused on the Soft Matter Dynamics instrument and the space mission Foam-C, as well as NASA via grant number 80NSSC21K0898. Marina Pasquet, Nicolo Galvani and Alice Requier benefited from CNES and ESA PhD grants. The authors are grateful to the BUSOC team for their invaluable help during the ISS experiments. We also want to warmly thank Marco Braibanti and Sebastien Vincent-Bonnieu from ESA, Christophe Delaroche from CNES and Olaf Schoele-Schulz from Airbus for their continuing support.
2303.05591
Future of neutron star studies with fast radio bursts
Fast radio bursts (FRBs) were discovered only in 2007. However, the number of known events and sources of repeating bursts grows very rapidly. In the near future the number of events will be $\gtrsim 10^4$ and the number of repeaters $\gtrsim100$. Presently, there is a consensus that most of the sources of FRBs might be neutron stars (NSs) with large magnetic fields. These objects might have different origin as suggested by studies of their host galaxies which represent a very diverse sample: from regions of very active star formation to old globular clusters. Thus, in the following decade we expect to have a very large sample of events directly related to extragalactic magnetars of different origin. This might open new possibilities to probe various aspects of NS physics. In the review we briefly discuss the main directions of such future studies and summarize our present knowledge about FRBs and their sources.
Popov S. B., Pshirkov M. S.
2023-03-09T21:33:51Z
http://arxiv.org/abs/2303.05591v1
# Future of neutron star studies with fast radio bursts ###### Abstract Fast radio bursts (FRBs) were discovered only in 2007. However, the number of known events and sources of repeating bursts grows very rapidly. In the near future the number of events will be \(\gtrsim 10^{4}\) and the number of repeaters \(\gtrsim 100\). Presently, there is a consensus that most of the sources of FRBs might be neutron stars (NSs) with large magnetic fields. These objects might have different origin as suggested by studies of their host galaxies which represent a very diverse sample: from regions of very active star formation to old globular clusters. Thus, in the following decade we expect to have a very large sample of events directly related to extragalactic magnetars of different origin. This might open new possibilities to probe various aspects of NS physics. In the review we briefly discuss the main directions of such future studies and summarize our present knowledge about FRBs and their sources. neutron stars; fast radio bursts; magnetic field; radio astronomy + Footnote †: journal: Physics Letters ## 1 Introduction Neutron stars (NSs) are probably the most interesting physical bodies in in inanimate nature as they are very rich in extreme physical processes and conditions. These objects are far from being completely understood. Thus, any new approach to study them is welcomed, especially if it promises to provide a large new sample of sources. Observations of fast radio bursts (FRBs) is one of such examples. The field of FRB studies was born in 2007 when the first event - the Lorimer burst, - was announced [1]. FRBs are millisecond extragalactic radio transients, see a detailed review in [2]. Up to now, no counterparts in other wavelength have been ever detected for them. An illustrative exception is the Galactic source SGR 1935+2154. In April 2020 a simultaneous detection of an FRB-like radio burst [3, 4] and a high energy flare [5, 6, 7, 8] from this magnetar happened. This came out to be a long waiting proof that magnetars are the sources of FRBs. Of course, formally this does not certify that _all_ FRBs are due to magnetar flares. The situation can be similar to the one with short gamma-ray bursts. Mainly, they are due to coalescence of NSs [9]. But some fraction of events can be due to core collapse [10], some can be due to giant flares of extragalactic soft gamma-ray repeaters [11], etc. In the same way, the FRB sources population can be non-uniform, but it is widely believed now that it is dominated by magnetars [2]. Since the paper by Lorimer et al. [1] has been published, many proposals to explain the origin of FRBs were proposed (e.g., [12]). However, at the moment the set of basic scenarios under discussion is very limited - see an extensive recent review about the most plausible emission mechanisms in e.g., [13], - and all of them involve magnetars (see, however, [14] for descriptions of some models not involving NSs - the population of FRB sources maybe non-uniform, i.e. some events can be unrelated to magnetar flares). The idea that FRBs are related to \(\gamma\)/X-ray flares of magnetars was proposed already in 2007 [15]. Observations of SGR 1935+2154 basically confirm this hypothesis, but the exact emission mechanism is still not known [16]. Presently, there are two main families of models to explain radio emission of FRBs, see e.g., [17]. 
Either radio waves are produced in the magnetosphere of a magnetar, or they are due to a coherent maser-like emission mechanism operating at a relativistic shock far from the NS surface, at a typical distance \(\sim 10^{14}\) cm. Up to now data on many hundreds (\(\lesssim 10^{3}\)) one-off FRBs have been published (and, presumably, many more will be published soon). In addition, there are \(\gtrsim 50\) repeating sources [18]. From some of them hundreds, or even thousands in few cases, individual radio flares have been detected. The number of events rapidly grows with time. One-off events are actively discovered e.g., by CHIME - the Canadian radio facility [19]. Numerous bursts from repeaters are detected due to monitoring of known sources. In particular, the FAST radio telescope [20] is very productive in this respect due to its huge collecting area. Already now the number of sources of FRBs is by an order of magnitude comparable to the number of known radio pulsars (PSRs), see the ATNF catalogue [21]. Note, that the number of PSRs exceeds any other population of known sources with NSs, and the situation is not going to change qualitatively in the near future. It is expected that the Square Kilometer Array (SKA) will discover all radio pulsars in the Galaxy pointing towards us [22]. This number is just \(\sim\)few\(\times 10^{4}\). On other hand, it is expected that SKA will detect \(\sim 10^{4}-10^{5}\) FRBs per day [23]. Another proposed facility - the PUMA survey - is expected to discover \(10^{6}\) FRBs during its operation [24]. To conclude, in the following decade FRBs will be the most abundant sample of known NSs. What is worth noting - mostly, they are going to be extragalactic sources. Large sample of events associated with (young) strongly magnetized NSs up to redshifts \(z\gtrsim\) few, might allow various interesting studies of NS physics. In addition, FRBs are known to be important probes of inter- and circumgalactic medium. Observations of these radio transients allow us to derive cosmological parameters and test predictions of fundamental physical theories. All these possibilities are the subject of the present review, with a focus on properties of NSs. ## 2 Different channels for magnetar formation All known Galactic magnetars1 are young objects whose properties (spatial distribution, association with young stellar population, etc.) indicate that mostly (or even totally) they are formed via the most standard channel - core-collapse supernovae (CCSN). Population synthesis models of the Galactic magnetars usually assume that this is the only way to produce such objects. This is a valid assumption as this channel indeed dominates over all others. Modeling shows that at least few percent of newborn NSs start their lives as magnetars [26; 27; 28]. Some studies even show that this fraction can be an order of magnitude higher [29]. I.e., the rate of magnetar formation through the CCSN channel is at least once every few hundred years. Footnote 1: See the McGill on-line catalogue [25] at [http://www.physics.mcgill.ca/](http://www.physics.mcgill.ca/) pulsar/magnetar/main.html. However, a highly magnetized NS can be formed via several different evolutionary channels. Below we give the complete list: * Core collapse; * NS-NS coalescence; * NS-WD coalescence; * WD-WD coalescence; * Accretion induced collapse (AIC). 
As it is seen from the list, many channels represent evolution in a binary system.2 What is important, a NS formed through one of these channels can belong to an old population. The problem of magnetar formation in old stellar populations is very actual in FRB astrophysics in the context of host galaxies identification. The first FRB source for which a host galaxy has been identified [32] appeared to be situated in a region of intensive star formation. The same can be said for several other sources, e.g., [33] and references therein. Still, there are also opposite examples, when an FRB source is located in a galaxy with low rate of star formation, e.g., [34] (see analysis of host galaxies of FRBs in [35]). The extreme case is FRB 20200120E situated in a globular cluster of the M81 galaxy [36], as globular clusters are known to contain only old stars. A detailed study of 23 hosts (17 for non-repeating FRBs and 6 for repeating sources) was recently presented in [37]. Contrary to some previous studies, the authors claim that FRB hosts have on average properties which a similar to majority of galaxies at corresponding redshifts. Generally, they do not find that the CCSN origin of the FRB sources contradicts observations. Still, in some peculiar cases alternative channels might be operating. Let us compare rates of NS formation in different channels specified above. The rate of NS-NS coalescence is about few\(\times 10^{-5}\) yr\({}^{-1}\) per a Milky way-like galaxy [38]. Note, that typically a NS-NS coalescence results in a black hole (BH) formation. For NS-WD coalescence the rate is a little bit higher: \(\sim 10^{-4}\) yr\({}^{-1}\)[39]. Coalescence of two WDs are relatively frequent \(-\sim 10^{-2}\) yr\({}^{-1}\)[40], but just a small fraction of them result in a NS formation. So, WD-WD and AIC (here and below we distinguish a NS formation due to WD-WD coalescence from other types of AIC) provide the rate from few\(\times 10^{-6}\) yr\({}^{-1}\) up to few\(\times 10^{-5}\) yr\({}^{-1}\)[41].3 Thus, altogether all channels additional to the CCSN provide less than 1% of NSs. Even if the fraction of magnetars is high in these channels, their total contribution is much less than that from the CCSN, so we expect less than one object with an age \(\lesssim 10^{4}\) yrs per galaxy. Footnote 3: Here NS formation via WD-WD coalescence does not include processes in globular clusters. About this possibility see e.g. [42] and references therein. As the rate of NS formation due to the AIC or different coalescences is very low, it is impossible to find a representative sample of such sources in the Galaxy (or even in near-by galaxies). FRB observations provide a unique possibility to probe the populations of these rare sources, even at different \(z\). At the moment, it is not known if magnetars formed through different evolutionary channels mentioned above can appear as distinguishable subclasses of FRBs sources when only radio observations are available. If it is possible in near future to distinguish between them (at least in a statistical manner), then we have a perfect tool to study evolution of formation rates in different channels through cosmic history. ## 3 Properties of the surrounding medium Pulsars have been used as excellent probes of the Galactic interstellar medium (ISM) almost since their discovery. 
Observations of FRBs allow us to implement already developed methods to study the medium along the path from the bursts, starting from the circumburst environment and ending with the Milky Way halo and ISM, see a review in [43]. There are three major effects that affect a signal during its propagation.

First, there is dispersion of the signal propagating in the plasma with electron concentration \(n_{\rm e}\) - the group velocity \(v_{\rm g}\) depends on the frequency \(\nu\): \[v_{\rm g}=c\sqrt{1-\left(\frac{\nu_{\rm p}}{\nu}\right)^{2}}, \tag{1}\] where \(\nu_{\rm p}=\sqrt{\frac{n_{\rm e}e^{2}}{\pi m_{\rm e}}}=8.98(n_{\rm e}/1\ {\rm cm}^{-3})^{1/2}\) kHz is the plasma frequency, \(e\) is the elementary charge, and \(m_{\rm e}\) is the electron mass. The time delay between two observing frequencies \(\nu_{1}\) and \(\nu_{2}\) is: \[\delta t\approx 4.15\ {\rm ms}\bigg{[}\left(\frac{\nu_{1}}{1\ {\rm GHz}}\right)^{-2}-\left(\frac{\nu_{2}}{1\ {\rm GHz}}\right)^{-2}\bigg{]}\ {\rm DM}, \tag{2}\] where \({\rm DM}=\int_{0}^{d}n_{\rm e}{\rm d}l\) is the dispersion measure of the source. The dispersion measure is just the column density of free electrons. As a rule, DM values are given in units [pc cm\({}^{-3}\)]; we use these units below. In the cosmological context, for sources at redshift \(z\) the equation for the dispersion measure is slightly modified: \({\rm DM}=\int_{0}^{d}n_{\rm e}/(1+z){\rm d}l\). Large values of DM, strongly exceeding those expected from the Galactic contribution, are the primary indicator of the extragalactic nature of FRBs.

Second, a signal undergoes scattering during propagation through an inhomogeneous medium. This results in the formation of an extended exponential 'tail' in the pulse shape. Scattering is stronger at lower frequencies, \(\tau\propto\nu^{-\alpha}\), with the spectral index \(\alpha\) which depends on the properties of the inhomogeneities. For the Kolmogorov spectrum of inhomogeneities \(\alpha=4.4\).

Third, in the presence of magnetic fields, the polarization position angle of a linearly polarized signal would experience frequency (or wavelength)-dependent Faraday rotation: \[\Delta\Psi={\rm RM}\,\lambda^{2}, \tag{3}\] where \(\lambda\) is the wavelength and RM is the rotation measure: \[{\rm RM}=\frac{e^{3}}{2\pi m_{\rm e}^{2}c^{4}}\int_{0}^{d}n_{\rm e}B_{||}{\rm d}l=812\int_{0}^{d}n_{e}B_{||}{\rm d}l\ {\rm rad\ m}^{-2}. \tag{4}\] Here \(n_{\rm e}\) is measured in cm\({}^{-3}\), \(B_{||}\) is the component of the magnetic field parallel to the line of sight, measured in \(\mu G\) (positive when directed towards the observer), and all distances are measured in kpc. If rotation takes place at redshift \(z\) there would be a correction: \({\rm RM}_{\rm obs}={\rm RM}_{\rm int}/(1+z)^{2}\), i.e. the observed \({\rm RM}_{\rm obs}\) would be smaller than the intrinsic \({\rm RM}_{\rm int}\).

All plasma along the path contributes to these effects: there are contributions from the host galaxy (including the circumburst region), the intergalactic medium (IGM), the halo and the ISM of the Milky Way. For some bursts there would be a considerable contribution from the circumgalactic medium of intervening galaxies located along the line of sight and from regions of the large scale structure such as galaxy clusters and filaments. The relative contributions from these regions are different for the three effects mentioned above. While DM is mostly accumulated in the IGM, most of the scattering comes from the ISM in the host galaxy and the Milky Way.
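To get a feeling for the magnitudes involved, the small sketch below (illustrative input values only, not results from any particular burst) evaluates the dispersion delay of equation 2 and the Faraday rotation of equation 3.

```python
# Illustrative helpers for the propagation effects summarized above:
# the cold-plasma dispersion delay (Eq. 2) and Faraday rotation (Eq. 3).
C = 299_792_458.0  # speed of light [m/s]

def dispersion_delay(dm_pc_cm3, nu1_ghz, nu2_ghz):
    """Arrival-time delay [s] of frequency nu1 relative to nu2 (Eq. 2)."""
    return 4.15e-3 * dm_pc_cm3 * (nu1_ghz**-2 - nu2_ghz**-2)

def faraday_rotation(rm_rad_m2, nu_ghz):
    """Rotation of the polarization position angle [rad] at frequency nu (Eq. 3)."""
    lam = C / (nu_ghz * 1e9)  # wavelength [m]
    return rm_rad_m2 * lam**2

# Example: a burst with DM = 500 pc cm^-3 arrives at 400 MHz about 12 s after
# the 1.4 GHz signal; an RM of 1e5 rad m^-2 rotates the position angle by
# hundreds of radians even at 4.5 GHz.
print(f"{dispersion_delay(500, 0.4, 1.4):.1f} s")
print(f"{faraday_rotation(1e5, 4.5):.0f} rad")
```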
Finally, the Faraday rotation mostly takes place in the ISM of galaxies and, especially, in the circumburst medium (CBM). NSs born via various evolutionary channels discussed in the previous section might have different properties of the surrounding medium. Some valuable information could be extracted from the observations of one-off bursts, e.g., detection of excessive scattering, which is most probably associated with the CBM, might shed light on properties of turbulence in the immediate vicinity of some bursts [44]. Still, observations of repeating bursts are better suited for studying the CBM. Recurring activity gives an opportunity to use variety of instruments working with different temporal resolutions and in different frequency ranges. E.g., emission from FRB 121102 initially has been supposed to be unpolarized. Only subsequent follow-up observations of this repeating source at higher frequencies with high temporal resolution let the authors to measure the degree of linear polarization and to obtain an extreme value of the rotation measure: RM \(\sim 10^{5}\) rad m\({}^{-2}\)[45]. Even more important, observations spanning several years could give an opportunity to detect time evolution of DM and RM, therefore seriously constraining properties of the CBM because the evolution of the CBM would be the leading factor in the observed variation of DM and RM [46; 47; 48]. In the early stage of expansion of a SNR, likely the most relevant situation for repeaters, DM evolves as \(t^{-1/2}\) if the supernova exploded into the medium of constant density, and DM \(\propto t^{-3/2}\) if the explosion took place in a wind-formed environment. For RM the scaling in such a situation is \(t^{-1/2}\) and \(t^{-2}\), correspondingly [47]. RM evolution was relatively quickly discovered in the case of FRB 121102. Two and a half years of observation demonstrated rapid decrease of RM in the source frame from \(1.4\times 10^{5}\) rad m\({}^{-2}\) to \(1.0\times 10^{5}\) rad m\({}^{-2}\) just in one year and considerable levelling off afterwards. The DM slightly _increased_ by \(\sim 1\) pc cm\({}^{-3}\). This behaviour could be explained by evolution of a very young pulsar wind nebula (PWN) embedded in a supernova remnant (SNR) with an age about 10-20 years. Even more extreme example was presented by observations of FRB 190520B. This burst has one of the largest excess DMs known, \(\sim 900\) pc cm\({}^{-3}\), which is decreasing at an astounding rate \(\sim 0.1\) pc cm\({}^{-3}\) day\({}^{-1}\). Thus, the inferred characteristic age is only 20-30 years. Between June 2021 and January 2022 the observed RM demonstrated an extreme variation from \(\sim+10000\) rad m\({}^{-2}\) to \(\sim-16000\) rad m\({}^{-2}\), implying a drastic reversal of the \(B\)-field of \(\sim\) mG strength. This behaviour can again be explained by a SNR evolution or by a close proximity to a massive BH with strongly magnetized outflows, or alternatively by a magnetized companion [49]. In the latter case the RM and DM variations could be periodic and this would be tested in the near future. It could be a relevant fact that FRB 121102 and FRB 190520B are the only bursts known which have spatially coinciding persistent radio sources. These sources could be related to regions which produce extreme behaviour of the RM. Some other repeaters also demonstrate RM variations, although not so extreme [50]. 
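The scalings quoted above give a simple way to turn a measured DM decline into a characteristic source age. The sketch below is a rough order-of-magnitude illustration, not the analysis of [49]; the assumed numbers are FRB 190520B-like, with the excess DM attributed entirely to the local environment.

```python
# Rough sketch: if the local DM contribution follows DM ~ t^(-1/2) (uniform
# ambient medium) or DM ~ t^(-3/2) (wind-formed environment), then
# |dDM/dt| = p * DM / t and the age is t = p * DM / |dDM/dt|, p = 1/2 or 3/2.
DAY = 86400.0
YEAR = 3.156e7

dm_local = 900.0        # pc cm^-3, assumed to come from the local environment
ddm_dt = 0.1 / DAY      # pc cm^-3 per second (~0.1 per day)

for label, p in [("uniform medium, DM ~ t^-1/2", 0.5),
                 ("stellar wind,   DM ~ t^-3/2", 1.5)]:
    age = p * dm_local / ddm_dt / YEAR
    print(f"{label}: age ~ {age:.0f} yr")
# -> roughly 10-40 yr, bracketing the few-decade estimate quoted in the text.
```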
**Extensive study of varying magneto-ionic environment of 12 repeating FRBs was performed in [51] where it was shown that the RM variations in these FRBs are much more extreme than in known young pulsars in the Milky Way. It may imply that the properties of the surrounding medium are considerably different in these two cases.** It is obvious that detection of many more new repeaters in a wide range of NS ages would significantly expand our understanding of the earliest epoch of evolution of complicated systems comprising NS: PWN and SNR. Another way to study the CBM, or at least to constrain models which suggest interaction of magnetar flares with the surrounding medium as a source of FRBs, is to search for prompt emission and an afterglow from FRBs at different frequencies including optics and X-rays [52; 53; 54]. Due to the weakness of the expected signal future observations of the Galactic (SGR 1935+2154) or near-by extragalactic FRBs would be especially valuable. ## 4 Very short-term periodicity and quasiperiodic features Up to now, there are no robust measurements of spin periods of the sources of FRBs (see also the next section). Still, there are already several very interesting and important results related to short-term (quasi)periodicity. In the first place, this is the periodicity detected (at the \(6.5\sigma\) significance level) in a burst of the one-off source FRB 20191221A [55]. The event is atypically long \(-\sim 3\) seconds, - and has a complicated structure. At least nine components are well distinguished. Analysis demonstrates that these components are separated by intervals which are multiples of \(0.217\) second (no significant deviations from the strict periodicity in the time of arrivals of single components are observed). The origin of this periodicity is unclear. It is tempting to say that the periodicity reflects the spin of the NS. This can be checked if another burst with periodicity is detected from this source. If we are dealing with a young magnetar with the spin period 0.217 s, then it must have \(\dot{P}\sim 10^{-9}-10^{-8}\). On a time scale of several years it might be quite easy to detect its spin-down. Another possibility discussed in [55] is related to quasiperiodic oscillations similar to those detected in Galactic magnetar flares (e.g., SGR 1806-20 [56] and SGR J15050-5418 [57]). They have frequencies from \(\sim\)20 Hz up to \(\sim 1\) kHz, i.e., somewhat higher than in the burst of FRB 20191221A. Alternatively, periodicity can be related to magnetospheric processes. But this possibility is less probable (see discussion in [55]). Quasiperiodic behaviour with frequency \(\sim 40\) Hz is also suspected for one burst of the Galactic magnetar SGR 1935+2154 [58]. This is exactly the burst which was observed simultaneously in radio and X/gamma-rays. The quasiperiodic structure is found in the high energy data obtained by _Insight_-HXMT. Identification of this quasiperiodicity is not very significant (\(3.4\sigma\)) as only three peaks are well identified in the burst. Still, the result is very intriguing. New observations of this source might clarify the situation. Let us move towards higher temporal frequencies. Observations of FRB 20201020A demonstrated the existence of a quasiperiodic structure with characteristic time scale 0.415 msec [59]. In this one-off FRB observations with Apertif distinguished five components. Of course, a submillisecond spin period can be excluded, as the frequency is very high. It is even too high for crustal oscillations. 
A frequency \(\sim 2\) kHz can appear in a NS-NS coalescence, and now there is an observational example: quasiperiodic oscillations are discovered in two short GRBs [60]. Still, this possibility looks rather exotic. It is more probable, that the 0.415 msec structure is due to properties of a magnetospheric emission mechanism, see discussion in [59]. If so, more data on such features in FRB emission will open the possibility to study in detail emission properties of magnetospheres of extreme magnetars. In radio observations even nanosecond time scales can be probed. This opens a possibility to study processes in magnetospheres of the sources or/and at relativistic shocks (depending on the emission mechanism), as well as vibrations of a NS crust. At the moment, resolution \(\sim\) tens of nanoseconds is already reached in FRB observations. In one case (repeating FRB 20200120E associated with a globular cluster of the M81 galaxy) it is demonstrated that sub-bursts are structured at the \(\sim 2\,\mu\)sec scale [61]. In this case, the feature is most probably related to a magnetospheric emission mechanism4, but the exact nature remains unclear. Footnote 4: In the case of the Crab pulsar observations show the existence of pulses with duration \(<1\) nanosecond [62]. These events definitely have magnetospheric origin. Observations of FRBs might open a wide perspective of studies of periodic and quasiperiodic processes related to different aspects of NSs physics (crust oscillations, magnetospheric processes, etc.). Accounting for the growing number of observations of repeating and one-off sources at different frequencies, in near future this might be an important channel of information about magnetars. Still, it is also very important to determine the basic temporal characteristic of a NS - its spin period. ## 5 Spin periods Measurements of the spin period and its derivative, \(\dot{P}\), of the first radio pulsar made possible to identify the nature of the emitting object [63]. Spin measurements for sources of FRBs are very much welcomed as they can allow us to understand better properties of these NSs, in particular - to prove their magnetar nature. Determination of a spin period can give some clues to the magnetar properties. If the period derivative is measured, too, then it is possible to estimate the characteristic age \(\tau_{\rm ch}=P/2\dot{P}\) and the magnetic field of the NS with the simplified magneto-dipole formula: \[I\omega\dot{\omega}=\frac{2\mu^{2}\omega^{4}}{3c^{3}}. \tag{5}\] Here \(I\) is the moment of inertia of a NS, \(\omega=2\pi/P\) - spin frequency, \(\mu\) - magnetic moment, and \(c\) - the speed of light. Such measurements might help to understand the origin of the source. Short spin periods will definitely point towards young ages of the magnetars, as \(\tau_{\rm ch}\propto P^{2}/B^{2}\) if the initial spin period, \(P_{0}\), is much smaller than the observed one. Long spins can be explained in several models (see e.g. [64] and references therein in application to the 1000-second pulsar GLEAM-X J162759.5-523504.3). One option is related to the fallback accretion soon after the NS formation [65]. Fallback matter can form a disc around a newborn NS. Interaction of a magnetar with the fallback disc can result in significant spin-down, e.g. [66] where the authors explain the 6.7 hour period of the NS in the supernova remnant (SNR) RCW 103 and [67] (see also the next section). 
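As an illustration of these estimates, the sketch below applies eq. (5) to a hypothetical source with the 0.217 s period discussed in the previous section; the moment of inertia, stellar radius and the adopted \(\dot{P}\) are assumed fiducial values, not measurements.

```python
# Minimal sketch of the spin-down estimates: characteristic age tau_ch = P/(2 Pdot)
# and the dipole field implied by Eq. (5), with assumed I = 1e45 g cm^2, R = 10 km.
import math

def spin_down_estimates(P, Pdot, I=1e45, R=1e6):
    """P in s, Pdot dimensionless; returns (tau_ch in yr, equatorial B in G)."""
    c = 3e10  # cm/s
    tau_ch = P / (2 * Pdot) / 3.156e7
    # From Eq. (5): mu^2 = 3 c^3 I P Pdot / (8 pi^2); equatorial field B = mu / R^3
    mu = math.sqrt(3 * c**3 * I * P * Pdot / (8 * math.pi**2))
    return tau_ch, mu / R**3

# Example: a 0.217 s magnetar with an assumed Pdot ~ 1e-9 would be very young
# and strongly magnetized.
tau, B = spin_down_estimates(0.217, 1e-9)
print(f"tau_ch ~ {tau:.0f} yr, B ~ {B:.1e} G")
```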
Another option is related to a large initial field which at some point stops to decay significantly, so the NS rapidly spins down: \(P\propto\mu\sqrt{\dot{t}}\approx 10\,\mathrm{s}\,\mu_{33}(t/3000\,\mathrm{ yrs})^{1/2}\) for \(P\gg P_{0}\). Here \(\mu_{33}=\frac{\mu}{10^{33}\,\mathrm{G\ cm^{3}}}\). Spin periods can be measured directly or indirectly by different observations and analysis. Some (quasi)periodic structures in bursts (see the previous section) can provide information about the spin. Alternatively, appearance of repeaters' bursts can be phase dependent. From several repeating sources many hundreds of bursts are detected, see analysis in [68]. Potentially, such huge statistics can provide an opportunity to search for the spin period. Unfortunately, there is little hope to obtain a period value using large statistics of burst of repeating sources. It can be understood if we consider that FRB bursts might be related to high energy flares of magnetars. It is well-known that some Galactic magnetars produced hundreds of detected high energy flares. But even with such significant statistics their distribution along the phase of spin period is typically found to be consistent with the uniform distribution, see e.g. [69; 70]. I.e., the reverse task - determination of the spin period of a soft gamma-ray repeater from burst timing statistics, - cannot be performed. The same might be true for FRBs, especially if radio emission is produced far away from a NS in a relativistic shock. But analysis of numerous bursts from two very active repeaters gave an opportunity to find a different type of periodicity which we discuss in the following section. ## 6 Long-term periodicity The source FRB 180916.J0158+65 (aka R3) is an active near-by repeater situated in a star forming region in a spiral galaxy at 149 Mpc from the Earth. Relative proximity and high rate of bursts resulted in many observational campaigns dedicated to this particular source. CHIME observations resulted in a discovery of 16.5-day periodicity in activity of this source [71]. Later on, observations with different instruments at different frequencies confirmed this result. The 16.5-day cycle consists of a (frequency dependent) window of activity and a quiescent period. Immediately, several different interpretation of the observed periodicity were proposed. Below we briefly describe three models proposed in the magnetar framework. Still, alternative explanations are also possible. E.g., a model based on an accretion disc precession is described in [72]. Probably, the most natural assumption which can explain the detected long-term periodicity is the binarity of the source [73], see also [74] for a review and development of the model. Time scale \(\sim 10-20\)-days is quite typical for orbital periods of binary systems with a NS and a massive companion, see a catalogue of high-mass X-ray binaries in [75]. Intensive stellar wind from the massive star (e.g., a supergiant) would provide an environment which e.g., can modulate (frequency dependent) windows of transparency for the radio emission of the NS. It is not difficult to formulate a realistic scenario of formation of a magnetar in such systems [76]. Another option is related to precession. Understanding free precession of NSs is a long standing problem [77; 78]. A NS might not be an ideally symmetric object, but oblate (biaxial in the first approximation) with non-equal principal moments of inertia \(l_{3}>l_{2}=l_{1}\). 
If \(l_{3}=l_{1}(1+\epsilon)\) where \(\epsilon=(l_{3}-l_{1})/l_{1}\ll 1\) is the oblateness, then, the precession period can be written as \(P_{\rm p}\approx P/\epsilon\). For \(P\sim 1\) s and \(\epsilon\sim 10^{-6}\) we obtain a precession period similar to the one observed for the R3. Finally, the third proposed hypothesis simply relates the observed periodicity to the spin period of the NS [79]. As it was already mentioned in the previous section, there are several variants how a NS can achieve a very long spin period, much larger than it is observed for the vast majority of PSRs or/and known Galactic magnetars. R3 is not the only source of FRBs for which that type of periodicity is detected. The first repeater - FRB 121102, - became the second one for which activity is limited to periodically repeating cycles. In the case of FRB 121102 the period appeared to be an order of magnitude longer [80; 81]. Up to now, all three scenarios to explain the observed periodicity seem to be plausible. No doubt, in near future the same type of behavior will be discovered for other sources. In any case, this will open opportunities to get new information about NSs. This is very promising because up to now we do not know active magnetars in binaries [82] or precessing magnetars as well as NSs with spin periods \(\sim 10\)-\(100\) days. So, whichever option is correct - a growing sample of FRB sources with periodic activity will bring us new information about physics and astrophysics of NSs. However, observations of FRBs can be useful not only for NS studies, but also for testing fundamental properties of Nature and measuring some basic physical parameters. ## 7 Fundamental theories FRBs in many respects are unique sources. They produce very narrow bursts (sometimes with microstructure visible down to the scale of tens of nanoseconds) and they are visible from cosmological distances corresponding to \(z>1\). This makes them a powerful tool to measure (or put limits on) some fundamental parameters. ### Testing the equivalence principle In General relativity (GR) photons of different energy experience the same gravitational effects. E.g., if a burst with extended spectrum is emitted at cosmological distances, then we expect to receive all photons at the same time (neglecting dispersion of the signal in the medium). However, in many theories of gravity this is not the case. Thus, astronomical observations of distant transient sources can be used to test theoretical predictions. Historically, gamma-ray bursts (GRBs) were actively used for fundamental theories tests, e.g. [83] and references therein. However, FRBs have some advantages due to their very sharp short pulses and high precision of radio astronomical observations. These sources were proposed as probes for the Einstein equivalence principle soon after their discovery [84]. Testing the equivalence principle is typically defined as a limit on the post-Newtonian parameter \(\gamma\). This quantity defines how much curvature is produced by unit rest mass. In some cases it can be given as: \[\gamma=\frac{1+\omega_{\rm BD}}{2+\omega_{\rm BD}}, \tag{6}\] here \(\omega_{\rm BD}\) is the Brans-Dicke dimensionless parameter. GR is reproduced for \(\omega_{\rm BD}\rightarrow\infty\) (i.e., if \(\gamma=1\)). Observationally, the delay \(\Delta t\) between signals at different frequency is measured. 
The hypothetical effect of the equivalence principle violation can be hidden by others, but if we separate it then we obtain: \[\Delta t=\frac{\gamma(\nu_{1})}{c^{3}}\int_{r_{\rm em}}^{r_{\rm obs}}U(r){\rm d }r-\frac{\gamma(\nu_{2})}{c^{3}}\int_{r_{\rm em}}^{r_{\rm obs}}U(r){\rm d}r. \tag{7}\] Here \(\nu_{1}\) and \(\nu_{2}\) are two different frequencies of electromagnetic radiation. \(U(r)\) is the gravitational potential. The integral is taken from the point of emission to the point of detection. A simplified conservative approach is based on the time delay due to photons propagation in the gravitational potential of the Galaxy in eq.(7), e.g. [84]. Such approach results in limits: \(\Delta\gamma\equiv|\gamma-1|\lesssim 10^{-8}\). However, detailed calculations for cosmological sources are non-trivial [85]. Calculations along the cosmological path requires an accurate consideration, as just decaying continuation of the Galactic potential in Minkowski space leads to an incorrect result. Under reasonable assumptions about the value of the effect during the whole trajectory on a cosmological scale much more tight limits can be derived. E.g., in [86] the authors obtain \(\Delta\gamma\lesssim 10^{-21}\) for FRBs beyond \(z=1\). Most probably, in the near future FRB observations will remain the most powerful astronomical tool to test the equivalence principle. This might be possible not only due to an increase of the number of known sources, discovery of bursts at larger redshift, and improvements in the model parameters but also due to usage of new measurements, e.g. related to statistical properties of the dispersion measure [87]. In principle, violation of the Lorentz-invariance can also be tested with FRBs, especially if gamma-ray counterparts are detected. Such observations for extragalactic FRBs are quite possible in near future as already FRBs are detected at distances about few Mpc and gamma-detectors can detect a hyperflare of a magnetar at distances about few tens of Mpc, e.g. [88] and references therein. ### Measuring the photon mass limits Another fundamental parameter which can be constrained by FRBs observations is the photon mass, \(m_{\gamma}\). If photons have non-zero masses then velocity of their propagation becomes frequency-dependent: \[v=c\sqrt{1-\frac{m_{\gamma}^{2}c^{4}}{E^{2}}}\approx c\left(1-\frac{1}{2}Av^{- 2}\right). \tag{8}\] Here \(m_{\gamma}\) is the photon mass and \(E\) - its energy; \(A=\frac{m_{\gamma}^{2}c^{4}}{h^{2}}\). Different methods are used to put a limit on \(m_{\gamma}\). In particular, astronomical rapid transient sources at cosmological distances can be a very good probe. Previously, the most strict limit on the photon mass derived with such sources (GRBs) was \(m_{\gamma}\lesssim 10^{-43}\) g. With FRBs it became possible to improve it significantly. If we observe a source at a redshift \(z\), then the time delay between two simultaneously emitted photons with frequencies \(\nu_{1}\) and \(\nu_{2}\) due to a non-zero photon mass can be written as: \[\Delta t_{\rm m}=\frac{A}{2H_{0}}\left(\nu_{1}^{-2}-\nu_{2}^{-2}\right)H_{1} (z). \tag{9}\] Here \(H_{0}\) is the present day Hubble constant and \(H_{1}\) is defined as: \[H_{1}=\int_{0}^{z}\frac{(1+z^{\prime})^{-2}{\rm d}z^{\prime}}{\sqrt{\Omega_{ \rm m}(1+z^{\prime})^{3}+\Omega_{\Lambda}}}. \tag{10}\] Thus, if the redshift of an FRB source is known, then timing information about properties of the burst (width of pulses, distance between subpulses) can be used to constrain \(m_{\gamma}\). 
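The sketch below (with assumed fiducial cosmological parameters) evaluates eqs. (8)-(10), i.e. the frequency-dependent delay accumulated by a massive photon, for illustrative numbers close to the FRB 121102 case; it is only an order-of-magnitude illustration, not a reproduction of the published limits.

```python
# Illustrative sketch of Eqs. (8)-(10): extra delay between two radio
# frequencies for a non-zero photon mass and a source at redshift z.
import numpy as np
from scipy.integrate import quad

H0 = 2.27e-18          # Hubble constant ~70 km/s/Mpc, in 1/s (assumed)
OM, OL = 0.31, 0.69    # assumed matter and dark-energy densities
H_PLANCK = 6.626e-27   # erg s
C = 2.998e10           # cm/s

def H1(z):
    """Dimensionless integral of Eq. (10)."""
    f = lambda zp: (1 + zp) ** -2 / np.sqrt(OM * (1 + zp) ** 3 + OL)
    return quad(f, 0.0, z)[0]

def massive_photon_delay(m_gamma_g, nu1_hz, nu2_hz, z):
    """Delay [s] between nu1 and nu2 from Eq. (9), with A = m^2 c^4 / h^2."""
    A = (m_gamma_g * C**2 / H_PLANCK) ** 2   # Hz^2
    return A / (2 * H0) * (nu1_hz**-2 - nu2_hz**-2) * H1(z)

# Example: m_gamma = 5e-48 g, 1.2 vs 1.5 GHz, z = 0.193 -> a few milliseconds,
# i.e. millisecond-scale timing at ~GHz frequencies probes masses at this level.
print(f"{massive_photon_delay(5e-48, 1.2e9, 1.5e9, 0.193) * 1e3:.2f} ms")
```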
The use of FRBs to constrain the photon mass was proposed independently in two papers: [89] and [90]. Curiously, these authors based their estimates on an erroneous identification of FRB 150418 with a galaxy at \(z\approx 0.5\). The derived limit was: \(m_{\gamma}\lesssim 3\times 10^{-47}\) g. The first secure identification of the host galaxy of FRB 121102 was made a year later [32]. Immediately, this information was used [91] to put a realistic limit on the photon mass. The value appeared to be of the same order: \(3.9\times 10^{-47}\) g. Note that this limit is much better than those obtained with GRB observations. In [92] the authors also used observations of FRB 121102 and slightly improved the limit, as they used the distance between well-measured subpulses: \(m_{\gamma}\lesssim 5.1\times 10^{-48}\) g. Later, in [93] the authors used nine FRBs with known redshifts to put a joint limit on \(m_{\gamma}\): \(<7.1\times 10^{-48}\) g. Finally, the authors of [94] obtained a better limit on the basis of the data on 17 well-localized FRBs: \(m_{\gamma}<4.8\times 10^{-48}\) g. Future simultaneous observations of very narrow pulses at significantly different frequencies with nanosecond-scale time resolution will help to improve the limits on \(m_{\gamma}\) significantly. ## 8 Discussion ### Intergalactic medium and baryonic matter FRB observations are a very powerful tool to study the medium along the propagation path from the FRB source to the Earth. They are particularly well suited for studies of the IGM. The analysis of DMs of localized FRBs (i.e., with measured \(z\)) can be used to search for the so-called 'missing baryons'. Various cosmological probes show that baryons make up around 5% of the total energy density of the Universe [95]. Still, only a minor fraction of these baryons has been detected in observations of galaxies and galaxy clusters. The most popular explanation is that the remaining baryons reside in the IGM and, due to its very low density, are almost undetectable by direct observations. However, DM measurements give the total electron column density, so they are ideally suited for this task. For this test one needs to extract the IGM-related part \(\rm DM_{IGM}\) from the total DM, which is the sum of several components: \[\rm DM=DM_{MW,ISM}+DM_{MW,halo}+DM_{intervening}+DM_{IGM}+DM_{host}, \tag{11}\] where the first and the second terms describe contributions from the Milky Way (MW) ISM and halo, respectively. \(\rm DM_{host}\) combines contributions from the halo and the ISM of the host galaxy, including the circumburst region. For the purposes of this analysis it is better to avoid bursts with intervening galaxies (and hence large \(\rm DM_{intervening}\)) close to the line of sight. \(\rm DM_{MW,ISM}\) can be estimated using existing models of the electron distribution in the Galaxy [96; 97] with a precision of around 20%. The MW halo contribution is usually assumed to be less than 100 pc cm\({}^{-3}\), and a benchmark value \(\rm DM_{MW,halo}=50\) pc cm\({}^{-3}\) is frequently used [98]. It is crucial to estimate the host contribution, which is also around \(O(100\) pc cm\({}^{-3})\). At the moment these estimates are obtained using statistical distributions [98; 99], informed mainly by cosmological simulations. **The host contribution is modelled using a log-normal distribution with a median \(\exp(\mu)\) and a logarithmic width parameter \(\sigma_{\rm host}\). The parameter space is studied in a wide range, e.g.
in [98]\(\mu\) was set in the range \(20-200\) pc cm\({}^{-3}\), \(\sigma_{\rm host}\) in the range 0.2 - 2.0. Host contribution parameters are included in the joint fit, along with the baryon fraction \(\Omega_{b}\). The analysis in [98] shows that this distribution with parameter values of \(\mu=100\) pc cm\({}^{-3}\) and \(\sigma_{\rm host}\sim 1\) successfully describes the observations. The host contribution comprises contributions from the host galaxy and the CBM. Although the latter one could be large for very young sources, it become subdominant (\(<100\) pc cm\({}^{-3}\)) after several decades of the evolution of the remnant [47] thus it would not affect the analysis of the majority of one-off bursts**. An expected increase in quality of observations and modelling of host galaxies **and the CBM evolution** will certainly increase the precision of DM\({}_{\rm host}\) estimates. Finally, individual realizations of DM\({}_{\rm IGM}\equiv{\rm DM}-({\rm DM}_{\rm MW,ISM}+{\rm DM}_{\rm MW,halo}+{\rm DM }_{\rm host})\) are prone to inevitable fluctuations due to inhomogeneities in the IGM, so it is necessary to work with averaged (binned) values \(\overline{\rm DM}_{\rm IGM}(z)\). This observational estimate should be compared with the theoretical expectations (e.g. [100]): \[{\rm DM}(z)=\frac{3cH_{0}\Omega_{\rm b}}{8\pi Gm_{\rm p}}\int_{0}^{z}\frac{(1+ z^{\prime})f_{\rm IGM}(z^{\prime})\chi(z^{\prime})}{\sqrt{(1+z^{\prime})^{3} \Omega_{\rm m}+\Omega_{\Lambda}}}dz^{\prime}, \tag{12}\] where \(m_{\rm p}\) is the mass of the proton, \(f_{\rm IGM}\) is the fraction of baryons residing in IGM, given that some baryons are sequestered in the stars, stellar remnants, ISM in galaxies and so on. \(\chi(z)\) is the number of free electrons per one baryon and it depends on ionization fractions \(\chi_{\rm H}(z)\) and \(\chi_{\rm He}(z)\) of hydrogen and helium respectively: \[\chi(z)=Y_{\rm H}\chi_{\rm H}(z)+Y_{\rm He}\chi_{\rm He}(z), \tag{13}\] \(Y_{\rm H}=3/4\), \(Y_{\rm He}=1/4\) are mass fractions of hydrogen and helium. For \(z<3\) both species are fully ionized, so \(\chi(z<3)=7/8\). As there is a degeneracy between \(f_{\rm IGM}\) and \(\Omega_{\rm b}\) there are two ways to exploit DM data. First, one can fix \(f_{\rm IGM}\) from some models of galaxy evolution and put constraints on \(\Omega_{b}\): the latest results from observation of 22 localized burst gave stringent constraints: \(\Omega_{b}=0.049^{+0.0036}_{-0.0033}\)[101]. Alternatively, the \(\Omega_{b}\) value could be fixed using, e.g. cosmic microwave background (CMB) observations and some meaningful constraints on the \(f_{\rm IGM}\) could be obtained: \(f_{\rm IGM}=0.927\pm 0.075\)[99]. In any case, it could be stated that the long-standing problem of'missing baryons' has been solved using FRB observations. At the moment, estimates of \(f_{\rm IGM}\) due to limited statistics found no evidence of redshift evolution. However, such evolution is expected - at higher redshifts the fraction of baryons residing in IGM increases, approaching unity at \(z>5\). Simulations show that \(N=10^{3}\) of localized FRBs would be enough to detect this evolution at statistically significant level and begin to probe various models of accretion of matter from IGM [102]. Also, as it can readily be seen from eq. (12), DM observations could be used to study the reionization history of the Universe given by the function \(\chi(z)\). 
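The comparison between \(\overline{\rm DM}_{\rm IGM}(z)\) and Eq. (12) is straightforward to reproduce numerically. The sketch below evaluates the mean relation of Eq. (12) with \(\chi=7/8\) (hydrogen and helium fully ionized, valid for \(z<3\), Eq. (13)); the adopted values of \(H_{0}\), \(\Omega_{\rm b}\), and \(f_{\rm IGM}\) are illustrative assumptions, not fitted values.

```python
import numpy as np
from scipy.integrate import quad

# SI constants and illustrative cosmological parameters (assumed, not fitted)
c, G, m_p = 2.998e8, 6.674e-11, 1.673e-27
H0 = 67.4e3 / 3.086e22
Omega_m, Omega_L, Omega_b = 0.315, 0.685, 0.049
PC_CM3 = 3.086e22                 # 1 pc cm^-3 expressed in m^-2

def chi(z):
    """Free electrons per baryon, Eq. (13); valid for z < 3 (full ionization)."""
    return 7.0 / 8.0

def dm_igm(z, f_igm=0.84):
    """Mean IGM dispersion measure of Eq. (12), in pc cm^-3."""
    prefac = 3.0 * c * H0 * Omega_b / (8.0 * np.pi * G * m_p)
    integrand = lambda zp: (1 + zp) * f_igm * chi(zp) / np.sqrt(Omega_m * (1 + zp) ** 3 + Omega_L)
    return prefac * quad(integrand, 0.0, z)[0] / PC_CM3

for z in (0.1, 0.5, 1.0):
    print(f"z = {z}: <DM_IGM> ~ {dm_igm(z):.0f} pc cm^-3")
```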
Robust detection of He II reionization, which is expected to occur at \(z\sim 3\) could be achieved with detection of \(N=500\) FRBs with \(z<5\). The moment of sudden reionization would be pinpointed with \(\delta z=0.3\) precision [103]. Even more ambitious goals could be reached if FRB were produced by remnants of Pop III stars at \(z=15\) onwards. \(N=100\) of localized FRBs with \(5<z<15\) redshifts could constrain CMB optical depth at \(\sim 10\%\) level. This might let us to find the midpoint of reionization with \(4\%\) precision; detection of \(N=1000\) FRBs would give an opportunity to describe the whole history of reionization [104]. It also would be very important for CMB analysis, reducing uncertainties on the CMB optical depth due to reionization and leading to more precise determination of various cosmological parameters, such as the amplitude of the power spectrum of primordial fluctuations. Properties of haloes of intervening galaxies could also be inferred from DM observations: only \(N=100\) of localized FRBs at \(z<1\) would suffice to mildly constrain radial profile of CGM [105]. With \(N=1000\) it would be possible to describe this profile much better and for different types of galaxies. Scattering mostly arises in the ISM of the host galaxy and the Milky Way. After extraction of the first contribution it would be possible to thoroughly study turbulence of the ISM for a very large set of galaxies at different redshifts [106]. The largest contribution to Faraday rotation also comes from the ISM of the galaxies, including circumburst medium. That makes them less suitable for studies of the intergalactic magnetic fields (IGMF). Still, as there is strong cosmological dilution of observed RMs: \(\mathrm{RM_{obs}}=\mathrm{RM_{int}}/(1+z)^{2}\), observations of 100-1000 distant enough FRBs with \(z>2-3\) could be used to study origin of magnetic fields and discriminate between their primordial and astrophysical origin [107; 108]. However, this requires extremely precise knowledge of the MW contribution at 1 rad m\({}^{-2}\) level. Non-trivial limits on the IGMF with very small correlation lengths (\(<\)kpc), which are extremely difficult to constrain by other means, were recently obtained by analysis of FRB scattering - presence of \(\mathcal{O}(10\) nG) fields shift the inner scale of turbulence, boosting the scattering [109] Naturally, observations of FRB RMs could be used to study magnetic fields in the host galaxies. Existing observations already gave an opportunity to detect magnetic fields with average magnitude \(\left<B_{||}\right>\sim 0.5\)\(\mu\)G in nine star-forming galaxies at \(z<0.5\)[110]. In the future it would be possible to investigate magnetic fields in hundreds and thousands of galaxies of different types. ### Gravitational lensing of FRBs High rate and short burst duration make FRBs very attractive candidates for gravitational lensing studies. Gravitational lensing is caused by deflection of electromagnetic waves by a massive body (lens) located very close to the line of sight towards the source. 
In the simplest case of the point-like mass (which is true e.g., in the case of primordial BHs), the characteristic angular scale is set by the so-called Einstein radius: \[\theta_{\mathrm{E}}=2\sqrt{\frac{GM_{1}}{c^{2}}\frac{D_{\mathrm{ls}}}{D_{ \mathrm{s}}D_{\mathrm{l}}}}, \tag{14}\] where \(M_{1}\) is the lens mass, \(D_{\mathrm{ls}}\), \(D_{\mathrm{l}}\), \(D_{\mathrm{s}}\) are the distances from the lens to the source and from the observer to the lens and to the source, correspondingly. Gravitational lensing by a point-like lens will produce two images, with the following angular positions: \[\theta_{1,2}=\frac{\beta\pm\sqrt{\beta^{2}+4\theta_{\mathrm{E}}^{2}}}{2}, \tag{15}\] where \(\beta\) is the impact parameter - the angular distance between the unperturbed position of the source and the lens. In case of FRBs we are mostly interested in the temporal properties and we would talk about two bursts, rather than two images [111]. It takes different time for the signal to travel by two slightly different trajectories, so a certain time delay would emerge between these bursts: \[\Delta t=\frac{4GM_{1}}{c^{3}}(1+z_{1})\left[\frac{y}{2}\sqrt{y^{2}+4}+\ln \left(\frac{\sqrt{y^{2}+4}+y}{\sqrt{y^{2}+4}-y}\right)\right], \tag{16}\] where \(y\equiv\beta/\theta_{\mathrm{E}}\) is the normalized impact parameter, \(z_{1}\) is the lens redshift. Another important property is the relative brightness of two bursts: \[R=\frac{y^{2}+2+y\sqrt{y^{2}+4}}{y^{2}+2-y\sqrt{y^{2}+4}}>1, \tag{17}\] this means that the first burst is always brighter than the second one. In all other respects the properties of two bursts might be very close, besides some minor differences caused by propagation in a medium with slightly different properties. From eq. (16) it could be seen that the delay time is several times larger than the time of crossing of the gravitational radius of the lens, i.e. \(\mathcal{O}(\mathrm{ms})\) delay corresponds to a \(\sim 30\ M_{\odot}\) lens. Compact objects such as e.g., primordial black holes (PBHs) are natural targets for searches with FRB lensing [112]. Initial searches were performed with the shortest detectable delay corresponding to duration of burst and succeeded in constraining the PBH fraction in the dark matter \(f_{\mathrm{PBH}}<1\) (\(f_{\mathrm{PBH}}\equiv\Omega_{\mathrm{PBH}}/\Omega_{\mathrm{DM}}\)) for \(M_{\mathrm{PBH}}>30\ M_{\odot}\). Recently, a new approach was developed: instead of correlating intensity curves it was suggested to look for correlation in the voltage (or, equivalently, electric field) curves. That gave an opportunity to progress from incoherent to coherent method with the lower limit on the detectable time delay now set by the Nyquist frequency, \(\mathcal{O}(\mathrm{ns})\) and, correspondingly sensitive to the lenses in the mass range \(10^{-4}-10^{4}\ \mathrm{M_{\odot}}\)[111; 113]. What are the prospects of this method? Using the word pun from [114] they are'stellar': with \(5\times 10^{4}\) FRBs detected during several years of operations of next generation instruments, it would be possible to constrain PBHs fraction at \(<10^{-3}\) level in the whole \(10^{-4}-10^{4}\ M_{\odot}\) mass range, setting the most stringent limits there. FRBs could also be lensed by much more massive objects, such as galaxies. In this case the corresponding time delay \(t_{\mathrm{d}}\) will be \(\mathcal{O}(10\ \mathrm{days})\). In this case the best strategy would be to search for lensed repeating bursts. 
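For the point-mass (compact-lens) case discussed above, Eqs. (16) and (17) are easy to evaluate. A minimal sketch, with a purely illustrative lens mass, impact parameter, and lens redshift:

```python
import numpy as np

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def lens_time_delay(M_lens, y, z_lens):
    """Time delay between the two images of a point-mass lens, Eq. (16), in seconds."""
    s = np.sqrt(y ** 2 + 4.0)
    geom = 0.5 * y * s + np.log((s + y) / (s - y))
    return 4.0 * G * M_lens / c ** 3 * (1.0 + z_lens) * geom

def flux_ratio(y):
    """Brightness ratio of the two lensed bursts, Eq. (17)."""
    s = np.sqrt(y ** 2 + 4.0)
    return (y ** 2 + 2.0 + y * s) / (y ** 2 + 2.0 - y * s)

# illustrative: a 30 solar-mass lens at z = 0.5 with normalized impact parameter y = 1
dt = lens_time_delay(30 * M_SUN, 1.0, 0.5)
print(f"delay = {dt * 1e3:.2f} ms, flux ratio = {flux_ratio(1.0):.2f}")
```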
Short duration of bursts would make possible to determine \(t_{\mathrm{d}}\) with extreme precision, much higher than allowed by observation of lensed AGNs, where an error in \(t_{\mathrm{d}}\) can exceed several hours. Measurement of time delay along with gravitational model of the lens (galaxy) allow one to estimate the Hubble constant and curvature term \(\Omega_{\mathrm{k}}\) in a straightforward and cosmological model-independent way. Observations of 10 lensed repeaters would give an opportunity to constrain \(H_{0}\) at sub-percent level and reach \(<\)10% precision for \(\Omega_{\mathrm{k}}\) determination [115]. However, given that the lensing probability is around \(3\times 10^{-4}\) and repeaters fraction is \(\sim 3\times 10^{-2}\) that would demand \(\mathcal{O}(10^{6})\) detected FRBs or, possibly, several decades of SKA operations. More optimistically, almost the same level of precision, \(\frac{\Delta H_{0}}{H_{0}}\sim 10^{-2}\), \(\frac{\Delta\Omega_{\mathrm{k}}}{\Omega_{\mathrm{k}}}\sim 10^{-1}\) could be reached with 10 lensed non-repeating FRBs [116], which decreases the number of needed detections to \(\sim 30,000\). ## 9 Conclusions FRB study is a new frontier of NS astrophysics. If we talk about magnetar bursting activity, then statistics of FRBs is already larger than statistics of SGR flares. If we consider just absolute numbers of known NSs related to any kind of astrophysical sources, then statistics of FRBs is already comparable with the PSR statistics, and soon will significantly outnumber it. FRB studies initiated advances in methods of radio observations of short transients. New instruments are under construction or under development. This promises new discoveries. Already now studies suggest that the known population of FRBs can be supplemented by shorter and longer events. In [117] the authors used Parkes archive of low-frequency observations to look for new FRBs. Indeed, they found four events which have specific feature: they have duration \(\gtrsim 100\) msec, i.e. they are longer than typical FRBs by more than an order of magnitude. The authors suggest that such long events are often missed in standard FRB searches. The same situation can be with very short events. In [118] the authors presented the discovery of FRB 20191107B with an intrinsic width \(11.3\,\mu\mathrm{sec}\). Observations have been done with the UTMOST radio telescope. The authors argue that such short bursts can be mostly missed by UTMOST and also in many other surveys. In future, observations of FRBs with non-typical (from the present point of view) properties can bring new information and new puzzles. Intense observations can result in discovery of new types of radio transients which are not related to FRBs. Since 2007 (and especially since 2013) numerous interesting theoretical models have been proposed in order to explain FRBs [12, 119]. However, now we can treat many of them as predictions for new types of events. Thus, predictions of radio transients from cosmic strings [120], PBHs evaporation [121], white holes [122], deconfinement [123], etc. can be verified. Deeper understanding of physics of FRB emission and related processes together with better knowledge of astrophysical picture of the FRB sources formation and evolution will allow obtaining even more information using FRBs as probes and indicators. The reason is in better understanding of links and correlations of different observables with physical and astrophysical parameters. 
If we understand the emission mechanism, origin of different types of (quasi)periodicities, polarization properties, etc. - then we can directly calculate related physical parameters of NSs, and so with a large sample of observed bursts we can obtain statistically significant information about these properties. E.g., FRBs can be related to glitches (and/or anti-glitches) of NSs. This possibility is based on recent observations of the Galactic magnetar SGR 1935+2154. Few days before the famous FRB-like burst (28 April, 2020) a strong glitch occurred in this object [124]. The authors used X-ray data from several satellites (NICER, NuSTAR, Chandra, XMM-Newton). The glitch is one of the strongest among all observed from magnetars. Relations between the glitch and the FRB-like burst is not clear. It was suggested [124] that glitches are related to active periods of magnetars, as after a glitch the magnetic field of a NS is significantly modified due to crust movements. However, a strong radio burst cannot be emitted soon after a glitch due to an abundance of charged particles in the magnetosphere. So, FRB-like bursts might appear few days after glitches (but not much more, as the activity is decreasing, and necessary conditions for a fast radio burst emission are not fulfilled any more). In October 2022 SGR 1935+2154 showed another period of activity with two FRB-like bursts accompanied by X/gamma-ray flares [125, 126, 127, 128, 129, 130]. During this period of activity glitches were not reported. However, for the period of activity in October 2020, when three FRB-like bursts have been detected (without high energy counterparts) [131] there is an observations of a rapid spin frequency variation. In this case an anti-glitch was detected before radio bursts [132]. How glitches and anti-glitches are related to FRB-like bursts is unclear, but if this is figured out, then we can have an additional tool to study glitch/anti-glitch activity for a large sample of extragalactic magnetars. Many other applications of FRBs in different areas of astrophysics are waiting for us in the near future. With tens of thousand of FRBs, for many hundreds of which precise redshifts will be independently measured, it is possible to perform 3D-mapping of space medium: from the Galactic ISM up to cosmological scales. Discovery of counterparts at other wavelengths, on top of all other applications, will make it possible to test Lorentz-invariance at a new level of precision. No doubt, in the next few years we will have more Galactic sources of FRBs and sources in near-by galaxies at distances a few tens of Mpc for which observations of counterparts is possible. This might help to understand the mechanism of emission and solve several other puzzles related to physical conditions which lead to such bursts. Proliferation of high-cadence wide-angle surveys, especially in optics (e.g., Vera C. Rubin Observatory - LSST) and X-rays would greatly increase chances for simultaneous observations of nearby FRBs at different wavelengths [133, 134]. Also, high sensitivity of the next generation gravitational wave observatories (Einstein telescope, Cosmic explorer) maybe would open a new area of FRB multi-messenger observations. To summarize, in the following years studies of FRBs might open many possibilities to look deeper in the physics of NSs. The authors contributed equally to this review. S.P. acknowledges support from the Simons Foundation which made possible his visit to the ICTP. The work of M.P. 
was supported by the Ministry of Science and Higher Education of the Russian Federation under the contract 075-15-2020-778 in the framework of the Large Scientific Projects program within the national project "Science". No original data is presented in the review. All comments and questions may be addressed to the corresponding author. We thank the referees for useful comments and suggestions. The authors actively used the NASA ADS database while preparing this review. The authors declare no conflict of interest. The following abbreviations are used in this manuscript:

- AIC: Accretion induced collapse
- BH: Black hole
- CBM: Circumburst medium
- CCSN: Core-collapse supernovae
- CMB: Cosmic microwave background
- DM: Dispersion measure
- FRB: Fast radio burst
- GR: General relativity
- GRB: Gamma-ray burst
- IGM: Intergalactic medium
- IGMF: Intergalactic magnetic fields
- ISM: Interstellar medium
- MW: Milky Way
- NS: Neutron star
- PSR: Radio pulsar
- PWN: Pulsar wind nebula
- RM: Rotation measure
- SGR: Soft gamma-ray repeater
- SNR: Supernova remnant
- WD: White dwarf
2305.01479
On the properties of Gaussian Copula Mixture Models
This paper investigates Gaussian copula mixture models (GCMM), which are an extension of Gaussian mixture models (GMM) that incorporate copula concepts. The paper presents the mathematical definition of GCMM and explores the properties of its likelihood function. Additionally, the paper proposes extended Expectation-Maximization algorithms to estimate parameters for the mixture of copulas. The marginal distributions corresponding to each component are estimated separately using nonparametric statistical methods. In the experiments, GCMM demonstrates improved goodness of fit compared to GMM when using the same number of clusters. Furthermore, GCMM can leverage unsynchronized data across dimensions for more comprehensive data analysis.
Ke Wan, Alain Kornhauser
2023-05-02T14:59:37Z
http://arxiv.org/abs/2305.01479v2
# On the properties of Gaussian Copula Mixture Models + ###### Abstract This paper investigates Gaussian copula mixture models (GCMM), which are an extension of Gaussian mixture models (GMM) that incorporate copula concepts. The paper presents the mathematical definition of GCMM and explores the properties of its likelihood function. Additionally, the paper proposes extended Expectation-Maximization algorithms to estimate parameters for the mixture of copulas. The marginal distributions corresponding to each component are estimated separately using nonparametric statistical methods. In the experiments, GCMM demonstrates improved goodness of fit compared to GMM when using the same number of clusters. Furthermore, GCMM can leverage unsynchronized data across dimensions for more comprehensive data analysis. Gaussian Mixture, Copula, Model Clustering, Gaussian Processes, Machine Learning, Other Algorithms and Architectures, Kernels ## 1 Introduction Gaussian mixture models have been employed in various areas of research (Yang 1998 [13] and Pekka 2006 [8]). In the present study, we extend Gaussian mixture models into Gaussian copula mixture models to address the following two concerns: * Heavy-tailed data require increasing numbers of clusters to fit with GMMs. To control the number of clusters, heavy tails in the marginal distributions should not lead to significantly more clusters given the same underlying dependence structure. * GMMs are usually applied to a synchronized data matrix with dimension \(M\) and number of observations \(N\). In many problems, there are numerous unsynchronized observations in each dimension, the number of which is denoted as \(n_{m}\) for the \(m\)-th dimension. Such data should be utilized to update the joint distribution shared by the different dimensions. To address these concerns, we introduce copulas into mixture models, and new Expectation-Maximization-type algorithms are developed to estimate their parameters. ## 2 Related Studies Gaussian mixture models have been used widely in various applications, and the Expectation-Maximization (EM) algorithm has been utilized for estimating their parameters. The convergence properties of such EM algorithms have been discussed in Lei (1996 [12]). However, each component of a GMM is a multivariate Gaussian distribution that cannot effectively capture heavy tails, and the number of components becomes sensitive to heavy tails. The introduction of more flexible components may help to further reduce the number of components when working with heavy-tailed data. On the other hand, copulas have been used in research to model dependence. The definition of a copula in the two-dimensional case is given below:
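As a rough numerical illustration of the first concern above, namely that heavy marginal tails inflate the number of GMM components needed, the following sketch fits Gaussian mixtures of increasing size to light- and heavy-tailed samples and compares the BIC-selected number of clusters. It assumes NumPy and scikit-learn are available; the sample sizes, the Student-t degrees of freedom, and the use of BIC for model selection are illustrative choices rather than the paper's own procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def best_n_components(X, max_k=8):
    """Number of mixture components minimizing the BIC."""
    bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, max_k + 1)]
    return int(np.argmin(bics)) + 1

# one underlying cluster, two marginal behaviours: light versus heavy tails
light = rng.normal(size=(2000, 2))              # Gaussian marginals
heavy = rng.standard_t(df=2, size=(2000, 2))    # heavy-tailed marginals

# the heavy-tailed sample typically requires more Gaussian components
print("light tails ->", best_n_components(light), "components")
print("heavy tails ->", best_n_components(heavy), "components")
```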
2309.01974
Anomalous Thermodynamic Cost of Clock Synchronization
Clock synchronization is critically important in positioning, navigation and timing systems. While its performance has been intensively studied in a wide range of disciplines, much less is known for the fundamental thermodynamics of clock synchronization, what limits the precision and how to optimize the energy cost for clock synchronization. Here, we report the first experimental investigation of two stochastic clocks synchronization, unveiling the thermodynamic relation between the entropy cost and clock synchronization in an open cavity optomechanical system. Two autonomous clocks are synchronized spontaneously by engineering the controllable photon-mediated dissipative optomechanical coupling and the disparate decay rates of hybrid modes. The measured dependence of the degree of synchronization on entropy cost exhibits an unexpected non-monotonic characteristic, indicating that the perfect clock synchronization does not cost the maximum entropy and there exists an optimum. The investigation of transient dynamics of clock synchronization exposes a trade-off between energy and time consumption. Our results reveal the fundamental relation between clock synchronization and thermodynamics, and have a great potential for precision measurements, distributed quantum networks, and biological science.
Cheng Yang, Jiteng Sheng, Haibin Wu
2023-09-05T06:01:04Z
http://arxiv.org/abs/2309.01974v1
# Anomalous Thermodynamic Cost of Clock Synchronization ###### Abstract Clock synchronization is critically important in positioning, navigation and timing systems. While its performance has been intensively studied in a wide range of disciplines, much less is known for the fundamental thermodynamics of clock synchronization-what limits the precision and how to optimize the energy cost for clock synchronization. Here, we report the first experimental investigation of two stochastic clocks synchronization, unveiling the thermodynamic relation between the entropy cost and clock synchronization in an open cavity optomechanical system. Two autonomous clocks are synchronized spontaneously by engineering the controllable photon-mediated dissipative optomechanical coupling and the disparate decay rates of hybrid modes. The measured dependence of the degree of synchronization on entropy cost exhibits an unexpected non-monotonic characteristic, indicating that the perfect clock synchronization does not cost the maximum entropy and there exists an optimum. The investigation of transient dynamics of clock synchronization exposes a trade-off between energy and time consumption. Our results reveal the fundamental relation between clock synchronization and thermodynamics, and have a great potential for precision measurements, distributed quantum networks, and biological science.
2307.09884
Electrons in helical magnetic field: a new class of topological metals
Two theorems on electron states in helimagnets are proved. They reveal a Kramers-like degeneracy in helical magnetic field. Since a commensurate helical magnetic system is transitionally invariant with two multiple periods (ordinary translations and generalized ones with rotations), the band structure turns out to be topologically nontrivial. Together with the degeneracy, this gives an unusual spin structure of electron bands. A 2D model of nearly free electrons is proposed to describe conductive hexagonal palladium layers under an effective field of magnetically ordered CrO$_2$ spacers in PdCrO$_2$. The spin texture of the Fermi surface leads to abnormal conductivity.
Yu. B. Kudasov
2023-07-19T10:28:25Z
http://arxiv.org/abs/2307.09884v1
# Electrons in helical magnetic field: a new class of topological metals ###### Abstract Two theorems on electron states in helimagnets are proved. They reveal a Kramers-like degeneracy in helical magnetic field. Since a commensurate helical magnetic system is transitionally invariant with two multiple periods (ordinary translations and generalized ones with rotations), the band structure turns out to be topologically nontrivial. Together with the degeneracy, this gives an unusual spin structure of electron bands. A 2D model of nearly free electrons is proposed to describe conductive hexagonal palladium layers under an effective field of magnetically ordered CrO\({}_{2}\) spacers in PdCrO\({}_{2}\). The spin texture of the Fermi surface leads to abnormal conductivity. Metallic delafossites PtCoO\({}_{2}\), PdCoO\({}_{2}\), PdCrO\({}_{2}\) are layered compounds with anomalous transport properties [1; 2; 3; 4]. Their conductivity at room temperature approaches to that of the best elementary conductors like argentum, copper, and aluminum [1]. CoO\({}_{2}\) and CrO\({}_{2}\) are proved to be dielectric interlayers [5], and the conductivity in these substances is determined entirely by hexagonal palladium or platinum layers. This leads to extremely large values of electron mean free path, up to 20 m\(\mu\) at low temperatures [4]. When electron momentum-relaxing scattering by impurities or phonons is much weaker as compared to momentum-conserving scattering, a hydrodynamic regime of electron transport appears [6] as it was observed in PdCoO\({}_{2}\)[7]. The unusual behavior of the delafossites suggests a novel mechanism of electron transport in them [8]. A long-range magnetic order appears in PdCrO\({}_{2}\) below \(T_{c}=37.5\) K [9; 10; 11]. Chromium ions form a \(120^{0}\) magnetic structure within a single layer. A complex interlayer arrangement leads to the magnetic system consisting of 18 sublattices [10]. The appearance of the magnetic order is accompanied by a resistivity drop [12]. Thus, a main hypothesis, which is going to be proved in the present article, is that the helical magnetic order on certain conditions induces the high-conductivity state. It also should be mentioned that unconventional anomalous Hall effect [13] and nonreciprocal electronic transport [2] are observed in PdCrO\({}_{2}\) in the magnetically ordered state. Above \(T_{c}\) a short-range magnetic order in chromium hexagonal layers persists up to about 500 K [11; 12], and this is another surprising fact in view of the extremely high conductivity. The Fermi surface in the metallic delafossites was thoroughly investigated [1; 4]. In the paramagnetic state, it is quasi-two-dimensional with a single \(\alpha\)-orbit. The transition to the ordered state in PdCrO\({}_{2}\) leads to a reconstruction of the Fermi surface within the magnetic Brillouin zone and appearance of additional \(\gamma\)-orbits corresponding to pockets in the vicinity of K points [11; 14]. A motion of spin-1/2 particle in helical magnetic field was studied for a long time [15; 16; 17; 18; 19; 20]. In particular, an exact solution is known [16] and there are various approximate approaches [17; 21]. The electron transport in noncollinear magnetic structures demonstrates nonreciprocal due to breaking of spatial inversion symmetry [2; 19]. A spin space group (SSG) theory is a useful instrument for investigation of helical magnetic systems [22; 23]. 
A SSG operator \(\{\mathbf{\alpha}|\mathbf{\beta}|\mathbf{t}\}\) comprises the spin rotation \(\mathbf{\alpha}_{s}\) and space transformation \(\{\mathbf{\beta}|\mathbf{t}\}\) combining rotation \(\mathbf{\beta}\) and translation \(\mathbf{t}\). In the framework of the theory, generalized translations are introduced (\(\{\mathbf{\alpha}|0|\mathbf{t}\}\)), and a generalized Bloch theorem was proved [22; 23]. Topological aspects of band structure of crystalline solids were intensively studied during last decades [24]. Main efforts of theoreticians and experimenters were concentrated on investigations of topological insulators and their edge states [25; 26]. Recently, magnetic topological insulators and semimetals including helimagnets turned out to be the focus of attention [27; 28]. In the present article, a novel approach to topology of metallic helimagnets is proposed. Let us consider a particle of spin-1/2 in a nonuniform magnetic field, which is invariant under translation with period \(\mathbf{T}\), i.e. \(\mathbf{h}(\mathbf{r}+\mathbf{T})=\mathbf{h}(\mathbf{r})\). The Hamiltonian of the system has the form \[\hat{H}=\hat{H}_{0}+\mathbf{h}(\mathbf{r})\hat{\mathbf{\sigma}} \tag{1}\] where \(\hat{\mathbf{\sigma}}\) are the Pauli matrices. One can prove two theorems on eigenvalues of the Hamiltonian. **Theorem 1.** If \(\hat{H}_{0}\) is invariant under the operation \(\hat{\mathbf{T}}_{1/2}\hat{\mathbf{\theta}}\), where \(\hat{\mathbf{T}}_{1/2}\) is the space translation along vector \(\mathbf{T}/2\), \(\hat{\theta}\) is the time-reversal operator, and \(\mathbf{h}(\mathbf{r}+\mathbf{T}/2)=-\mathbf{h}(\mathbf{r})\), then eigenvalues \(\varepsilon_{\mathbf{k}}\) of \(\hat{H}\) are at least two-fold degenerate for all the wave vectors \(\mathbf{k}\) except those, which satisfy \(\exp\left(i\mathbf{k}\mathbf{T}\right)=-1\), and the following condition is fulfilled for all the eigenvalues: \[\varepsilon_{\mathbf{k},\langle\mathbf{\sigma}\rangle}=\varepsilon_{-\mathbf{k},- \langle\mathbf{\sigma}\rangle} \tag{2}\] where \(\langle\mathbf{\sigma}\rangle\equiv\langle\psi_{\mathbf{k}}|\hat{\mathbf{\sigma}}|\psi _{\mathbf{k}}\rangle\) and \(|\psi_{\mathbf{k}}\rangle\) is the eigenstate of \(\hat{H}\). **Proof.** In the presence of magnetic field a symmetry operation containing \(\hat{\theta}\) should be a product of an operation, which change sign of the magnetic field, and \(\hat{\theta}\)[29]. Therefore, \(\hat{\mathbf{T}}_{1/2}\hat{\theta}\) is a symmetry operation for \(\hat{H}\). Let us consider two eigenstates: \(|\psi_{\mathbf{k}}\rangle\) and \(\hat{\mathbf{T}}_{1/2}\hat{\theta}|\psi_{\mathbf{k}}\rangle\). According to rules for antiunitary operators [30] we obtain \[\langle\psi_{\bf k}|\big{(}\hat{\bf T}_{1/2}\hat{\theta}|\psi_{\bf k }\big{)}\rangle = \overline{\big{(}\langle\psi_{\bf k}|\hat{\theta}^{\dagger}\big{)} \big{(}\hat{\bf T}_{1/2}\hat{\theta}^{2}|\psi_{\bf k}\big{)}\big{)}} \tag{3}\] \[= -\langle\psi_{\bf k}|\hat{\bf T}^{\dagger}\big{(}\hat{\bf T}_{1/2 }\hat{\theta}|\psi_{\bf k}\big{)}\rangle.\] By means of Bloch's theorem \(\hat{\bf T}|\psi_{\bf k}\rangle=\exp\big{(}i{\bf k}{\bf T}\big{)}|\psi_{\bf k}\rangle\) it is reduced to \[\langle\psi_{\bf k}|\big{(}\hat{\bf T}_{1/2}\hat{\theta}|\psi_{\bf k }\big{)}\rangle=-\exp\big{(}i{\bf k}{\bf T}\big{)}\langle\psi_{\bf k}|\big{(} \hat{\bf T}_{1/2}\hat{\theta}|\psi_{\bf k}\big{)}\rangle. 
\tag{4}\] From here one can see that \(|\psi_{\bf k}\rangle\) and \(\hat{\bf T}_{1/2}\hat{\theta}|\psi_{\bf k}\rangle\) are orthogonal and make up a pair of degenerate states for all \({\bf k}\) except those, which satisfy \(\exp\big{(}i{\bf k}{\bf T}\big{)}=-1\). In the last case, the state can be either degenerate or nondegenerate. Since operators \(\hat{\bf k}\) and \(\hat{\sigma}\) commute with \(\hat{\bf T}_{1/2}\) and change sign under the time reversal, we obtain Eq. (2) \(\blacksquare\). The first term of Eq.(1) can contain a scalar potential and kinetic energy contribution \(\hat{\bf p}^{2}/(2m)\) where \(\hat{\bf p}\) and \(m\) are the momentum operator and particle mass. If the vector potential corresponding to \({\bf h}({\bf r})\) has the same translational symmetry under a proper calibration, i.e. \({\bf A}({\bf r}+{\bf T}/{\bf 2})=-{\bf A}({\bf r})\), the kinetic term can be extracted from \(\hat{H}_{0}\) and the Hamiltonian \[\hat{H}=\hat{H}_{0}+\big{[}\hat{\bf p}+q_{0}{\bf A}({\bf r})\hat{ \bf l}\big{]}^{2}/(2m)+{\bf h}({\bf r})\hat{\mathbf{\sigma}} \tag{5}\] also meets the theorem. Here \(q_{0}\) is the charge of the particle and \(\hat{\bf l}\) is the unit matrix. Helical systems with the SSG symmetry operator \(\{\mathbf{\alpha}|0|{\bf t}\}\), where \(\alpha=2\pi/n\) and \(n\) is an even natural number, satisfy the hypothesis of the theorem if \({\bf h}({\bf r})\) is perpendicular to the axis of spin rotation. This can be illustrated by an example of a tight-binding model for a four-sublattice helical structure (see Supplemental material). On the other hand, the theorem can not be applied if \(n\) is the odd number, in particular, in the important case of the \(120^{0}\) magnetic ordering (\(n=3\)). Let us introduce an operator \(\hat{\bf r}_{\alpha}\) of spin rotation by angle \(\alpha\) about \(z\) axis. In the theorem and models discussed below, there is no spin-orbit coupling. Therefore, the spin rotation axis can be chosen arbitrary, i.e. it is independent of the spacial \(z^{\prime}\) axis. **Theorem 2.** If \(\hat{H}_{0}\) in Eq.(1) is invariant under translation \(\hat{\bf t}\), time reversion \(\hat{\theta}\), and arbitrary rotations of the spin system about \(z\) axis, and if \({\bf h}({\bf r})\) is invariant under \(\hat{\bf r}_{\alpha}\hat{\bf t}\), where \(\alpha=2\pi/n\), \(n\) is an odd number (\(n>1\)), and \({\bf h}({\bf r})\) is perpendicular to the spin rotation axis, then the eigenvalues of \(\hat{H}\) are at least two-fold degenerate for all the wave vectors \({\bf k}\) except those, which satisfy \(\exp\big{(}2i{\bf k}{\bf T}\big{)}=1\), where \({\bf T}=n{\bf t}\), and all the eigenvalues satisfy Eq. (2) with \(\langle\hat{\sigma}_{x(y)}\rangle_{\bf k}=0\). **Proof.** Let us introduce operators \[\hat{\bf Y}=\hat{\bf t}\hat{\bf r}_{\alpha}\ \text{and}\ \hat{\bf X}=\hat{\bf t} \hat{\bf r}_{\alpha-\pi}\hat{\theta}. \tag{6}\] \(\hat{\bf Y}\) is a symmetry operator under the hypothesis of the theorem and a generator of an Abelian group [23]. Its irreducible representations coincide with those of the space translation group \(\exp(i{\bf k}{\bf t})\), where \({\bf k}\) is the wave vector in the extended Brillouin zone [23]. The magnetic field is perpendicular to the spin rotation axis, then \(\hat{\bf r}_{-\pi}{\bf h}({\bf r})=-{\bf h}({\bf r})\) and \(\hat{\bf X}\) is also a symmetry operator. It forms a group, which is isomorphic to that of \(\hat{\bf Y}\). 
We can consider the following quantity: \[\langle\psi_{\bf k}|\big{(}\hat{\bf X}^{n}|\psi_{\bf k}\big{)}\rangle=\langle \psi_{\bf k}|\big{(}\hat{\bf X}^{-n}|\psi_{\bf k}\big{)}\rangle. \tag{7}\] Direct translations give \[\langle\psi_{\bf k}|\big{(}\hat{\bf t}^{-n}\hat{\bf r}_{\alpha- \pi}^{-n}\hat{\theta}^{-n}|\psi_{\bf k}\big{)}\rangle=\langle\psi_{\bf k}|\hat {\bf t}^{-2n}\big{(}\hat{\bf t}^{n}\hat{\bf r}_{\alpha-\pi}^{-n}\hat{\theta}^{-n }|\psi_{\bf k}\big{)}\rangle\] \[=-\langle\psi_{\bf k}|\hat{\bf t}^{-2n}\hat{\bf r}_{\alpha-\pi}^{- 2n}\big{(}\hat{\bf t}^{n}\hat{\bf r}_{\alpha-\pi}^{n}\hat{\theta}^{n}|\psi_{ \bf k}\big{)}\rangle. \tag{8}\] Using \(\hat{\bf r}_{\alpha-\pi}^{-2n}=-\hat{\bf l}\) and Bloch's theorem \(\hat{\bf T}^{2}|\psi_{\bf k}\rangle=\exp\big{(}2i{\bf k}{\bf T}\big{)}|\psi_{ \bf k}\rangle\), we obtain \[\langle\psi_{\bf k}|\big{(}\hat{\bf X}^{n}|\psi_{\bf k}\big{)}\rangle=\exp \big{(}-2i{\bf k}{\bf T}\big{)}\langle\psi_{\bf k}|\big{(}\hat{\bf X}^{n}|\psi_ {\bf k}\big{)}\rangle. \tag{9}\] That is, \(|\psi_{\bf k}\rangle\) and \(\hat{\bf X}^{n}|\psi_{\bf k}\rangle\) are orthogonal and the eigenstates are two-fold degenerate for all the wave vectors except those, which satisfy \(\exp\big{(}2i{\bf k}{\bf T}\big{)}=1\). The operators \(\hat{\bf k}\) and \(\hat{\sigma}_{z}\) commute with \(\hat{\bf t}\) and \(\hat{\bf r}\), as well as they change sign under the time reversal. Therefore, we obtain \(\varepsilon_{{\bf k},(\sigma_{z})}=\varepsilon_{-{\bf k},-\langle\sigma_{z}\rangle}\). The relations for the transverse spin components are proved in Supplement material \(\blacksquare\). If the vector potential has the same translational symmetry as \({\bf h}({\bf r})\), i.e. \(\hat{\bf r}_{\alpha}\hat{\bf t}{\bf A}({\bf r})={\bf A}({\bf r})\), Hamiltonian of the form (5) also satisfies theorem 2. Let us consider a tight-binding model of an atomic chain under an effective magnetic field corresponding to the \(120^{0}\) order as an example of 1D helical system satisfying to theorem 2: \[\hat{H}_{3sl}=-\sum_{i,\sigma}\Big{(}\hat{a}^{\dagger}_{i,1, \sigma}\hat{a}_{i,2,\sigma}+\hat{a}^{\dagger}_{i,2,\sigma}\hat{a}_{i,3,\sigma}+\] \[\hat{a}^{\dagger}_{i,3,\sigma}\hat{a}_{i+1,1,\sigma}+h.c.\Big{)}- \sum_{i,j,\sigma,\sigma^{\prime}}\Big{(}\hat{a}^{\dagger}_{i,j,\sigma}\hat{ \bf h}_{j}\hat{a}_{i,j,\sigma^{\prime}}\Big{)}\,. \tag{10}\] where \(\hat{a}^{\dagger}_{i,j,\sigma}(\hat{a}_{i,j,\sigma})\) is the electron creation (annihilation) operator in the \(j\)-th sublattice (\(j=1,2,3\)) and \(i\)-th cell with the spin projection on \(z\) axis \(\sigma=\pm 1/2\). The on-site magnetic field for the sublattices is defined as follows: \(\hat{\bf h}_{1}=h_{0}\hat{\sigma}_{x}\), \(\hat{\bf h}_{2}=h_{0}(-\hat{\sigma}_{x}+\sqrt{3}\hat{\sigma}_{y})/2\), \(\hat{\bf h}_{3}=h_{0}(-\hat{\sigma}_{x}-\sqrt{3}\hat{\sigma}_{y})/2\). This model is exactly solvable and the electron dispersion is shown in Fig. 1. The magnetic Brillouin zone lies between \(-\pi\) and \(\pi\). According to theorem 2, the obtained eigenvalues obey Eq. (2). Special points, where the degeneracy is undefined (\(k=0\) and \(k=\pm\pi\)), are shown by yellow and green circles. 
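The statement that the model of Eq. (10) is exactly solvable can be checked directly by diagonalizing its \(6\times 6\) Bloch Hamiltonian. The sketch below is one possible implementation; the hopping amplitude set to 1, the field strength \(h_{0}\), and the unit cell length set to 1 are illustrative choices.

```python
import numpy as np

t, h0 = 1.0, 0.5                               # illustrative hopping and field amplitudes
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# on-site fields of the three sublattices (120-degree structure, as defined below Eq. (10))
h_site = [h0 * sx,
          h0 * (-sx + np.sqrt(3) * sy) / 2,
          h0 * (-sx - np.sqrt(3) * sy) / 2]

def bloch_h(k):
    """6x6 Bloch Hamiltonian of the three-sublattice chain of Eq. (10), cell length = 1."""
    H = np.zeros((6, 6), dtype=complex)
    hop = -t * np.eye(2)
    H[0:2, 2:4] = hop                          # sublattice 1 -> 2
    H[2:4, 4:6] = hop                          # sublattice 2 -> 3
    H[4:6, 0:2] = hop * np.exp(1j * k)         # sublattice 3 -> 1 in the next cell
    H = H + H.conj().T                         # add Hermitian-conjugate hoppings
    for j, h in enumerate(h_site):
        H[2 * j:2 * j + 2, 2 * j:2 * j + 2] -= h   # Zeeman term of Eq. (10)
    return H

ks = np.linspace(-np.pi, np.pi, 301)           # magnetic Brillouin zone
bands = np.array([np.linalg.eigvalsh(bloch_h(k)) for k in ks])

# consistency with Eq. (2): the spectrum at +k coincides with the spectrum at -k
k0 = 0.7
print(np.allclose(np.linalg.eigvalsh(bloch_h(k0)), np.linalg.eigvalsh(bloch_h(-k0))))
```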
As was mentioned above, the helical system is subject to both the generalized Bloch theorem, due to invariance under the generalized translation \(\hat{\bf r}_{\alpha}\hat{\bf t}\)[22; 23], and the ordinary Bloch theorem [31] (translational invariance under \(\hat{\bf T}=\hat{\bf t}^{n}\)). The band structure is therefore periodic in the reciprocal space with two periods which are multiples of one another. We define dispersion curves so that the corresponding eigenstates \(|\psi_{\mathbf{k},g}\rangle\) are continuous functions of \(\mathbf{k}\) over the extended Brillouin zone defined by the generalized translations, and \(g\) is the curve number. As an example, one of the curves is shown in Fig. 1 by the thick line. According to the Bloch theorem, the eigenvalues should obey the conditions \(\epsilon_{l,\mathbf{k}+\mathbf{K}}=\epsilon_{l,\mathbf{k}}\), where \(\mathbf{K}\) is the primitive vector of the magnetic reciprocal lattice and \(l\) is the eigenvalue number. One can see in Fig. 1 that a dispersion curve is not necessarily periodic in the magnetic Brillouin zone. However, the whole band structure has to be periodic to fulfill the Bloch theorem. For instance, let us denote the lowest dispersion curve in the range from \(-\pi\) to \(0\) in Fig. 1 as \(|\psi_{\mathbf{k},1}\rangle\). It corresponds to \(l=1\) (eigenvalues indexed from the bottom to the top). However, while crossing \(k=0\) and, then, \(k=\pi\), the eigenvalue number of the curve changes to \(l=2\) and \(l=3\). That is, there is no single-valued correspondence between \(g\) and \(l\). This leads to a non-trivial topology of the dispersion curves. The curve marked by the thick line in the left panel of Fig. 1 is schematically shown in a cylinder representation in the right panel. Paths on the cylinder can be classified by means of the fundamental group of the cylinder [32]. The winding number coincides with \(n\) defined above and is a topological index. In the magnetic Brillouin zone, the band is reduced to \(n\) branches (\(n=3\) in Fig. 1). If the Fermi level falls into a gap between \(E_{2}\) and \(E_{1}\) (or between \(E_{4}\) and \(E_{3}\)), the electron transport becomes unusual. Backward scattering without spin flip is forbidden. There also exists a persistent spin current. An additional uniform magnetic field directed along the spin \(z\) axis breaks the Kramers-like symmetry defined by theorem 2. On the other hand, the Hamiltonian still remains invariant under both translations. That is why the topological structure in the right panel of Fig. 1 survives under the perturbation. A uniform magnetic field perpendicular to the \(z\) axis breaks both the Kramers-like symmetry and the invariance under the generalized translations. In this case, the topology becomes trivial due to hybridization at band crossing points. The proved theorems and topological arguments can be applied to multidimensional systems. The conductive hexagonal palladium layers in PdCrO\({}_{2}\) are described well by a 2D nearly-free-electron model [33]. Magnetic spacers CrO\({}_{2}\) form an effective field corresponding to a \(120^{0}\) (three-sublattice) magnetic structure [10].
Then, a solution for the Bloch wave functions \(\psi\) can be obtained from the following equation [31] \[\left[\frac{\hbar^{2}}{2m}\big{(}\mathbf{k}-\mathbf{K}\big{)}^{2}-\mathcal{E}\right]\hat{c}_{\mathbf{k}-\mathbf{K}}+\sum_{\mathbf{K}^{\prime}}\hat{U}_{\mathbf{K}^{\prime}-\mathbf{K}}\hat{c}_{\mathbf{k}-\mathbf{K}^{\prime}}=0 \tag{11}\] where \(\mathbf{k}\) is the wave vector within the magnetic Brillouin zone (\(\sqrt{3}\times\sqrt{3}\)), \(\hat{U}_{\mathbf{K}}\) are the Fourier coefficients of the effective field, and \(\hat{c}_{\mathbf{k}}\) and \(\hat{U}_{\mathbf{K}}\) have spinor form [21]. In the case of a 2D system, the sum in Eq. (11) should contain at least two terms to describe properly the band structure in the vicinity of the K points (see Fig. 2). Since the model Eq. (11) does not contain a spin-orbit coupling, the spin rotation axis \(z\) can be chosen arbitrarily, and for the sake of simplicity we direct it along the spatial \(z^{\prime}\) axis, i.e. perpendicular to the plane. The Fourier coefficients have a special form in the case of a helical \(120^{0}\) effective field [21]: \(\hat{U}_{\mathbf{K}_{1(2)}}=\gamma h_{0}\big{(}\hat{\sigma}_{x}\pm i\hat{\sigma}_{y}\big{)}\), where \(\gamma\) is a complex coefficient.

Figure 1: The electron band structure of the 1D model Eq. (10) and a schematic representation of the bottom dispersion curve on a cylinder. The average spin along the curves is indicated by color: red and blue if \(|\langle\hat{\sigma}_{z}\rangle|>1/2\), gray otherwise. The yellow and green circles denote the degenerate and non-degenerate points, correspondingly.

It should be mentioned that the spinor Fourier coefficients are abnormal operators [21]. The dispersion of the 2D model and the Fermi surface for two positions of the Fermi level are shown in Fig. 2. The topology of the dispersion surface is nontrivial. The volume of the extended Brillouin zone is three times larger than that of the magnetic one, therefore the topological index is \(n=3\). That is, if we close the periodic boundaries of the 2D magnetic Brillouin zone in a torus representation, the dispersion surface wrapping the torus contains three sheets. Intersections of the sheets occur along the Brillouin zone boundaries and along the \(\Gamma\)-K lines. Since the model satisfies theorem 2, we again obtain relation (2). In the middle and right panels of Fig. 2, one can see that the opposite arcs of the Fermi surface are predominantly formed by opposite spins. This is an important result because this spin texture suppresses backward non-spin-flip scattering and umklapp electron-phonon scattering [21]. The arrows in the middle panel of Fig. 2 show transitions between initial and final electron states in the umklapp processes which are suppressed by the spin texture of the Fermi surface. Since at low temperatures the electron-phonon part of the resistance in metals with a closed Fermi surface is determined by umklapp scattering [34], the resistance turns out to be strongly suppressed [21] and the high-conductivity state appears.
This leads to multisheet dispersion of electrons. The specific Kramers-like symmetry and topology lead to the spin texture of the Fermi surface, which suppresses the backward nonspin-flip scattering and umklapp electron-phonon scattering. As a result, a high-conductivity state appears. This effect is pronounced if a single band crosses the Fermi level (strong topological metal) because otherwise it is masked by interband scattering. This behavior is similar to that of topological surface and edge states in topological insulators [24]. However, in the present work we dealt with bulk states. The magnetic metallic delafossite PdCrO\({}_{2}\) is a candidate for a topological metal of this type. It should be mentioned that the band structure in Fig. 2 can also reproduce the nonreciprocity of electrons transport under magnetic field observed in this substance [2]. The effect discussed in the present article can be verified by noncollinear calculations within the density functional theory and spin-resolve angle-resolve photoelectron spectroscopy. **ACKNOWLEDGMENTS** The work was supported by National Center for Physics and Mathematics (Project #7 "Investigations in high and ultrahigh magnetic fields").
2308.10588
Persistent homology of collider observations: when (w)hole matters
Topological invariants have played a fundamental role in the advancement of theoretical high energy physics. Physicists have used several kinematic techniques to distinguish new physics predictions from the Standard Model (SM) of particle physics at Large Hadron Collider (LHC). However, the study of global topological invariants of the collider signals has not yet attracted much attention. In this article, we present a novel approach to study collider signals using persistent homology. The global topological properties of the ensemble of events as expressed by measures like persistent entropy, Betti area, etc. are worth considering in addition to the traditional approach of using kinematic variables event by event. In this exploratory study, we first explore the characteristic topological signature of a few SM electroweak resonant productions. Next, we use the framework to distinguish global properties of the invisible Higgs decay processes in the SM and a real singlet extension of the SM featuring stable singlet scalar dark matter.
Jyotiranjan Beuria
2023-08-21T09:41:31Z
http://arxiv.org/abs/2308.10588v2
# Persistent homology of collider observations: when (w)hole matters ###### Abstract Topological invariants have played a fundamental role in the advancement of theoretical high energy physics. Physicists have used several kinematic techniques to distinguish new physics predictions from the Standard Model (SM) of particle physics at Large Hadron Collider (LHC). However, the study of global topological invariants of the collider signals has not yet attracted much attention. In this article, we present a novel approach to study collider signals using persistent homology. The global topological properties of the ensemble of events as expressed by measures like persistent entropy, Betti area, etc. are worth considering in addition to the traditional approach of using kinematic variables event by event. In this exploratory study, we first explore the characteristic topological signature of a few SM electroweak resonant productions. Next, we use the framework to distinguish global properties of the invisible Higgs decay processes in the SM and a real singlet extension of the SM featuring stable singlet scalar dark matter. keywords: Persistent Homology, Topological Data Analysis, LHC, BSM Physics + Footnote †: journal: Physics Letters B ## 1 Introduction The Standard Model (SM) of particle physics has been immensely successful in describing the dynamics of elementary particles. The discovery of a neutral scalar particle, Higgs boson [1; 2] was indeed the triumph of the SM. Despite the remarkable success of the SM, it fails to answer several essential questions. Thus, the quest for a new physics model beyond the SM (BSM) has led to several phases of upgrade for the Large Hardon Collider (LHC). With the third phase run of the LHC gathering data, it is imperative to devise new methods to discriminate a plethora of BSM models. The study of the phenomenology of BSM models involves constraining the parameter space of new physics models with collider simulation at the parton level, hadron level and detector level through signal and background analysis. Physicists have devised several kinematic variables [3] for this matter. This approach relies on kinematic cuts applied on event by event basis. There have been some attempts to study the global properties of events using Voronoi and Delaunay tessellations [4; 5; 6], network distance metrics [7] and multi-event ML classifier [8], etc. However, Topological Data Analysis (TDA) [9; 10; 11; 12; 13; 14; 15; 16; 17] for studying the global properties of events have not attracted much attention. Algebraic topology is a branch of mathematics that studies the global properties of topological spaces in terms of the associated invariants. TDA has attracted much interest in recent years in the broader field of data science as a complement to more traditional machine learning methods. Of all such computational geometry techniques, persistent homology stands out in terms of its ease of application and predictive power. Homology is all about the \(k-\)dimensional holes that are the properties of the system as a whole. The applications of persistent homology range from multi-dimensional biological networks to social networks. It is a tool to analyze the topological characteristics of data that persist over various scales of filtration. Information networks are crucial tools for simulating the dynamics associated with the relations among the components of a complex system. The traditional approach using network graphs is limited by binary relations. 
However, relations modeled through simplicial complexes have much richer phenomenology because of the possible multi-dimensional relationships between different parts of the system. Interestingly, simplicial complexes form a topological space. Thus, the associated dynamics is characterized by topological invariants. The fundamental idea behind persistent homology is to substitute data points with a parametrized family of simplicial complexes, which are roughly thought of as a union of points, edges, triangles, tetrahedrons, and other higher-dimensional polytopes. Such a topological space encodes the change in the topological features (such as the number of connected components, holes, voids, etc.) of the simplicial complexes across different parameters. The dynamics involved in particle physics colliders is highly complicated. With run-3 of the LHC in progress, there is a great need to understand the features of the data collected at the LHC. The BSM physics models are likely to leave behind very complicated relations among the detected particles, characterizing the specific model under discussion. There have been several extensions of the SM proposed, viz., Supersymmetry [18; 19; 20; 21], Minimal Universal Extra Dimension [22; 23], etc. are a few popular ones to name. As an initial study to demonstrate the global properties of data, we explore TDA techniques for the SM and a real scalar extension of the SM. The real singlet scalar extension of the SM is a simple model with rich Higgs phenomenology and has attracted much attention [24; 25; 26; 27; 28; 29; 30; 31]. Since many BSM models feature real singlet scalars, this model is also well motivated as a low energy effective model. In this work, we focus on a particular variant wherein the singlet scalar becomes the stable particle under a global \(Z_{2}\) symmetry [26; 28], thus serving as a candidate for dark matter. This model is sometimes dubbed the Singlet Scalar Dark-Matter Model (SSDM). The framework of analysis using persistent homology is quite generic and can be used for any other model as well. We choose this simple model to demonstrate the usability of persistent homology in search for new physics. The organization of the paper is as follows. In section 2, we give a preliminary introduction to the mathematics of persistent homology. In section 3, we discuss the framework of topological analysis and collider simulation. In section 4, we compare the global topological signatures associated with various SM processes. In section 5, we study the parameter space of the SSDM and subject it to the constraints from the measurements of the 125 GeV Higgs boson at colliders. In section 6, we compare the persistent homology of the invisible decay of the Higgs boson in the SM with the SSDM. In section 7, we summarize and conclude the discussion. ## 2 Simplicial Complex and Persistent Homology Simplicial complex is the fundamental geometrical structure associated with the study of persistent homology. This section offers some background notions of simplicial complex and persistent homology. We also discuss the topological properties we will extensively use in our analysis. Our reader is advised to refer to some excellent reviews [14; 15; 17; 32] on Topological Data Analysis (TDA) available on the web. One can also refer to several worked out real-life examples [33; 34] with code. ### Simplicial complexes Simplicial complexes are a way to build topological spaces from basic combinatorial building blocks. 
It reduces the complexities of the continuous geometry of topological spaces and instead deals with the task of comparatively simple combinatorics and counting. These simple combinatorial building blocks are called simplices, as illustrated in figure 1. A \(k-\)dimensional simplex is formed by taking the convex hull of \(k+1\) independent points. Thus, 0-simplex is a point, 1-simplex is a line segment, 2-simplex is a filled triangle and 3-simplex is a filled tetrahedron. Similarly, one can construct higher-dimensional polytopes. For a \(n-\)dimensional simplex, the simplices with dimension \(k<n\) form the faces. Thus, for a 2-simplex (triangle), its edges (1-simplex) are the faces. A simplicial complex (\(K\)) is a finite collection of simplices (\(\sigma\)) such that for every \(\sigma\in K\), 1. any face of \(\sigma\in K\) is also part of \(K\) and 2. if \(\sigma_{1},\sigma_{2}\in K\), then \(\sigma_{1}\cap\sigma_{2}\) is a face of both \(\sigma_{1}\) and \(\sigma_{2}\). A collection of \(n-\)dimensional data points is typically represented as point cloud data (PCD). Even a time series data can also be embedded as point cloud data using Taken's embedding theorem [35]. There are several algorithms to convert point cloud data (\(X\)) to simplicial complex. For our purpose, we will use Vietoris-Rips complex algorithm [36], which is one of the simplest and computationally less expensive. Let \(X=\{x_{0},x_{1},...,x^{m-1}\}\in R^{m}\) be the point cloud data with each point \(x_{i}\) in \(R^{n}\). Let \(r\) be a fixed radius. The Vietoris-Rips complex of \(X\) is an abstract simplicial complex whose \(k-\)simplices are the (k + 1)-tuples of points in \(X\) such that the pairwise distance between them is less than equal to \(r\). This maximal radial limit \(r\) is also termed the filtration parameter. Mathematically, the Vietoris-Rips complex (also known as Rips complex) is given by \[VR_{r}(X)=\{\sigma\subseteq X\,|\,d(x_{i},x_{j})\leq r\ \ \forall x_{i}\neq x_{j}\in \sigma\}, \tag{1}\] where \(d(x_{i},x_{j})\) is the Euclidean distance between two points. It is to be noted that a metric other than Euclidean distance can also be taken. ### Chain complexes and Homology groups A \(k-\)chain denoted by \(C_{k}\) is the formal sum of a subset of \(k-\)simplices of the simplicial complex \(K\). \(C_{k}\) can be expressed as \[C_{k}=\sum_{i}\alpha_{i}\sigma_{k}^{i}\, \tag{2}\] where \(\sigma_{k}\) is the \(k-\)simplex and \(\alpha_{i}\) is assumed to be a real number, i.e., \(\alpha_{i}\in R\). It is interesting to note that \(C_{k}\) forms an abelian group under component-wise addition operation. The \(k-\)chain group is also generated by the \(k\)-cycles, where \(k\)-cycle is a \(k-\)chain without boundary. The \(k-\)th boundary operator on \(k-\)simplex \(\sigma_{k}\) with vertices \((v_{0},v_{1},...,v_{k})\) is expressed as \[\partial_{k}(\sigma_{k})=\sum_{i=0}^{k}(-1)^{i}(v_{0},v_{1},...,\hat{v}_{i},...,v_{k}), \tag{3}\] where \(\hat{v}_{i}\) is the vertex deleted from \(\sigma_{k}\). In other words, we have \[\partial_{k}:\ C_{k}\to C_{k-1}\, \tag{4}\] where \(k-\)chain \(C_{k}\) is mapped to (\(k-\)1)-chain \(C_{k-1}\) under boundary operation. 
A chain complex is created by the collection of boundary operators on the chain groups, \[C_{k}\xrightarrow{\partial_{k}}C_{k-1}\xrightarrow{\partial_{k-1}}C_{k-2}\ \...\ \ C_{1}\xrightarrow{\partial_{1}}C_{0}\xrightarrow{\partial_{0}}0 \tag{5}\] The kernel of the boundary operator \(\partial_{k}\) is the set of all \(C_{k}\) that has no boundary and the image of \(\partial_{k}\) is the set of \((k-1)\)-chains \(C_{k-1}\) that are the boundaries of \(C_{k}\). This can be expressed Figure 1: Simplices (plural of simplex) are the combinatorial building blocks of a simplicial complex. For illustration, 0-,1-,2-, and 3-simplex are shown from left to right. mathematically as \[ker(\partial_{k})=\{c\in C_{k}\mid\partial_{k}C_{k}=0\}\] \[im(\partial_{k})=\{d\in C_{k-1}\mid\exists c\in C_{k}:d=\partial_{ k}(c)\} \tag{6}\] Thus, we find that elements of \(ker(\partial_{k})\) are nothing but \(k-\)cycles while \(k-\)boundary is an element of \(im(\partial_{k+1})\). It is interesting to note that the sets of \(k-\)cycles \(Z_{k}\) and the sets of \(k-\)boundaries \(B_{k}\) form abelian subgroups of \(C_{k}\) under addition. It is also to be noted that \(ker(\partial_{k})\subset im(\partial_{k+1})\) since \(k-\)boundaries are also \(k-\)cycles. Thus, the groups \(B_{k}\), \(Z_{k}\), and \(C_{k}\) form a nested structure with \(B_{k}\subset Z_{k}\subset C_{k}\). \(k\)-th homology group \(H_{k}\) is defined as the quotient group, i.e., group of cycles modulo boundaries as given by \[H_{k}\ =\ \frac{Z_{k}}{B_{k}}\ =\ \frac{ker(\partial_{k})}{im(\partial_{k+1} )}. \tag{7}\] \(H_{k}(K)\) is the quotient vector space whose generators are given by \(k-\)cycles that are not boundaries of any \((k+1)\)-simplices. The rank of \(H_{k}(K)\) is also termed as the \(k\)-th betti number \(\beta_{k}(K)\). \(\beta_{k}(K)\) is the number of \(k-\)dimensional holes in the simplicial complex \(K\) that are not boundaries of any \((k+1)\)-simplices. Here \(\beta_{0}(K)\) is the number of connected components in \(K\). It is worth pointing out that betti number \(\beta_{k}\) is an important property of the system that we will make use of in our later analysis. Betti numbers are also used to define Euler characteristics \(\chi\), which is a topological invariant of the simplicial complex. \[\chi\ =\ \sum_{i=0}^{n}\ (-1)^{k}\ \beta_{k}. \tag{8}\] ### Persistent Homology and Filtration The classical homology does not give rich information for a given point cloud data \(X\). However, with a multi-scale approach to homology through filtration parameter, a change in the homology of \(X\) can be captured. The topological features that persist longer are more reliable features of the point cloud data. Instead of working with point cloud data, one considers a family of simplicial complexes \(K^{\delta}\) parameterized by \(\delta\in R\) from the set \(X\) such that a simplicial complex \(K^{\delta_{i}}\) at step \(i\) is a subset of \(K^{\delta_{i}}\) at step \(j\) for \(i\leq j\). This family of nested simplicial complexes is termed filtration and \(\delta\) a filtration parameter that evolves at every \(i\)-th step. For our case, we will use Vietoris-Rips complex; thus, the natural choice of filtration parameter becomes the radial separation \(r\) between points. One can keep track of the _birth_-the moment a hole first appears and _death_-the moment a hole disappears in a filtration. 
Tracking the emergence and disappearance of these homological properties in \(K^{\delta}\) for various values of \(\delta\) is the key idea behind persistent homology. These so-called _birth-death_ charts are typically represented by _barcode diagram_ (BD), _persistent diagram_ (PD), _persistent landscape_ (PL), _betti curve_ (BC), etc. In figure 2, we present a simple illustration of three points appearing in the _persistent diagram_ shown on the left. For example, point-1 appears at \(r=1\) and vanishes at \(r=4\), whereas point-2 appears at \(r=2\) and vanishes at \(r=3\). Thus, point-1 is more persistent than point-2. Thus, the points lying far off the diagonal line on _birth-death_ chart are more persistent and are true features of the data. The same information can be rendered as one-dimensional _barcode diagram_ as shown in the middle image of figure 2. It can also be represented through _persistent landscape_ (figure 2(c)) when _persistent diagram_ is rotated by \(\pi/4\). Another important measure is the entropy of the points clustered in a _persistence diagram_. It is called _persistence entropy_. Let \(D=\{(b_{i},d_{i})\}\) be the set of all _birth-death_ pairs associated with \(k-\)th order homology group in _persistence diagram_ with \(d_{i}<\infty\). The \(k-\)th order _persistence entropy_ is given by \[S(D_{k})=-\sum_{i}p_{i}\log(p_{i}), \tag{9}\] where \(p_{i}=\frac{d_{i}-b_{i}}{L_{D}}\) and \(L_{D}=\sum_{i}\left(d_{i}-b_{i}\right)\). Similarly, another useful feature associated with persistent homology is _Betti curve_. It is a function \(\beta(D_{k}):R\to N\), that counts the multiplicity of points of \(k-\)th homology group at a particular filtration parameter \(r=s\) in a _persistence diagram_ such that \(b_{i}\leq s\leq d_{i}\). The area under _Betti curves_, also termed as _Betti area_ is a commonly used feature in TDA. ### An illustrative example In order to illustrate the concepts described above, we take a point cloud consisting of four points: A (0,0), B (0,-2), C (2,0) and D (0,4). We have given the mathematical formulation for forming Rips simplicial complex in equation 1. It says that circles of radius \(\frac{r}{2}\) centered around each of the \(n\) points with a common intersection will form a \((n-1)\)-simplex. The Rips-filtration of the example has three important filtration parameters given by \(r=0\), \(r=2\sqrt{2}\) and \(r=4\). Figure 3: Rips filtration of point cloud having four points: {A (-2,2), B (0,0), C (2,2), D (0,4)} for (a) \(r=0\) (b) \(r=2\sqrt{2}\) (c) \(r=4\). Figure 2: (a) Persistent Diagram (PD) (b) Barcode Diagram (BD) (c) Persistent Landscape (PL) At \(r=0\), the simplicial complex is given by 0-simplices (points) as shown in figure 3(a). This corresponds to four connected components. The dataset acquires 1-simplices (edges) when \(r\) equals the edge length, i.e., \(r=2\sqrt{2}\) as depicted in figure 3(b). At \(r\geq 4\), all four points form a clique complex and the simplicial complex is given by the 3-simplex along with all its faces. For \(r\geq 2\sqrt{2}\), the number of connected component reduces from four (\(\beta_{0}=4\)) to one (\(\beta_{0}=1\)). For \(2\sqrt{2}\leq r<4\), there is a 1-cycle given by the formal sum of all edges, i.e., \(\langle AB\rangle+\langle BC\rangle+\langle CD\rangle+\langle DA\rangle\). For this choice of \(r\), this 1-cycle is not the boundary of any higher order simplex present in the simplicial complex. 
However, for \(r\geq 4\), the 1-cycle \(\langle AB\rangle+\langle BC\rangle+\langle CD\rangle+\langle DA\rangle\) becomes one of the boundaries of the 3-simplex \(\langle ABCD\rangle\). Thus, it does not contribute to the first homology group \(H_{1}\) for \(r\geq 4\). Thus, \(\beta_{1}=1\) for \(2\sqrt{2}\leq r<4\) and \(\beta_{1}=0\) for all other filtration values. The filtration can be summarized as follows. As far as _birth-death_ persistent diagram is concerned, we have two _birth-death_ pairs corresponding to \(H_{0}\) given by (0, 2 \(\sqrt{2}\)) with \(\beta_{0}=4\) and (0, \(\infty\)) with \(\beta_{0}=1\). On the otherhand, there is only one _birth-death_ pair corresponding to \(H_{1}\) given by (\(2\sqrt{2},4\)) with \(\beta_{1}=1\). In our subsequent analyses, _birth-death_ pairs with _death_ at \(r=\infty\) are not included for simplicity. With this brief introduction to persistent homology, we are ready to dive into the physics discussion. Next, we will talk about the collider simulation and the subsequent persistent homology analysis. ## 3 Framework for analysis ### Collider Simulation We first choose the resonant production of the electroweak gauge bosons and the Higgs boson. The \(Z\) and \(W\) bosons are made to decay leptonically and the Higgs boson is made to decay through \(b\bar{b}\) channel. Later in the text, we will also investigate the invisible decay modes of the Higgs boson both in the SM and the SSDM. The event samples are generated at the lowest order (LO) in perturbation theory using MadGraph5 aMC@NLO v3.5.1[37; 38] with nn23lo1[39] patron distribution function. The generated events correspond to \(\sqrt{s}=\)13 TeV. The generated parton-level events are showered with Pythia v8.309[40]. In the presence of extra hard partonic jets and the parton shower, the event generator uses the MLM matching technique with the variables _xqcut_ and _qcut_ set at appropriate values to prevent double counting of events in the simulated samples. We have used an NLO K-factor of 1.2 while estimating the cross sections for all the processes. The jet-finding package FastJet (v3.3.4)[41; 42] included in Delphes v3.5.0[43] is used to find the jets. The anti-\(k_{T}\) jet algorithm is employed with the cone size set at 0.5, requiring a minimum \(p_{T}^{jet}\) of 20 GeV and the pseudorapidity in the range \(|\eta_{jet}|<2.5\). As per the default parameter settings of Delphes v3.5.0, the leptons (electrons and muons) are reconstructed with a minimum \(p_{T}^{l}\) of 0.5 GeV and with \(|\eta_{jet}|<2.5\). The track isolation requirement for electrons and muons involves removing jets that lie within an angular distance \(\Delta R\leq 0.5\) from the lepton. Also, to increase the purity of electrons, it is required that the ratio of total \(p_{T}\)'s of the stray tracks within the cones of their identification to their own \(p_{T}\)'s is less than 0.12. The corresponding ratio for muon is set at 0.25. Subsequently, the events are processed through fast detector simulation using Delphes v3.5.0. It results in events in ROOT[44] format and thus, we convert it to LHC0 format for further processing. ### Point cloud data for TDA After the detector level reconstruction step, we require some basic kinematic cuts on the events before we prepare point cloud data from the ensemble of events. 
From a purely data science perspective guided by collider physics, we extract (\(\eta\), \(\phi\), \(p_{T}\), \(m_{inv}\)) from the event files stored in LHC0 format as coordinates to represent the entire data as a point cloud. Since these four variables span very different ranges of values, we normalize each of these four variables to lie in the [-1,1] range. It is to be noted here that we are representing the complete ensemble of events along with all the constituent particles that have been reconstructed after fast detector-level simulation. The number of events is normalized to the integrated luminosity times the effective cross-section of the process under consideration. We choose 100 fb\({}^{-1}\) of integrated luminosity all through the study. For analysis of the persistent homology of these simulated event bins, we use giotto-tda v0.6.0[45], a high-performance topological machine learning toolbox in Python. The traditional Rips complex described in section 2 forms a lot more simplices for a given filtration parameter \(r\) compared to the Alpha complex. Alpha complex is a Rips complex constructed from the finite cells of a Delaunay triangulation. Thus, we choose the latter in order to reduce the computational cost. This corresponds to WeakAlphaPersistence module in giotto-tda package. ## 4 Persistent homology of the SM With the above-mentioned framework of analysis, we now delve into the study of the persistent homology of the electroweak sector of the SM. We consider resonant production of \(ZZ\), \(ZW^{\pm}\), \(W^{+}W^{-}\), \(ZH\) and \(W^{+}H\) processes in the SM. We apply \(p_{T}^{l}>[50,40,10,10]\) GeV depending on the lepton counts. For the SM processes involving \(b\bar{b}\) decays, we keep \(p_{T}^{b}>30\) GeV. For \(Z\to l^{+}l^{-}\) processes, we keep the leptonic invariant mass window from 80 GeV to 100 GeV. For ZW and WH processes, \begin{table} \begin{tabular}{|l|c|c|c|} \hline Filtration parameter (\(r\)) & \(\beta_{0}\) & \(\beta_{1}\) & \(\chi\) \\ \hline \(r<2\sqrt{2}\) & 4 & 0 & 4 \\ \hline \(2\sqrt{2}\leq r<4\) & 1 & 1 & 0 \\ \hline \(r\geq 4\) & 1 & 0 & 1 \\ \hline \end{tabular} \end{table} Table 1: \(k\)–th Betti numbers (\(\beta_{k}\)) and corresponding Euler characteristics \(\chi\) for different filtration parameters (\(r\)) associated with the example. we keep leptonic transverse mass \(m_{T}^{l_{1}}>80\) GeV and for WW, transverse mass \(m_{T2}^{l_{1},l_{2}}>70\) GeV. In figure 4, we present persistent diagrams for WH, ZH, WW, ZW, and ZZ corresponding to \(H_{0,1,2}\) homology groups. Zeroth, Figure 4: _birth-death_ chart or persistent diagrams for WH (a), ZH (b), WW (c), ZW (d) and ZZ (e) productions in the SM. The legends \(H_{k}\) stand for the \(k-\)th homology group, i.e., \(k-\)dimensional holes. Figure 5: Betti curves for WH (a), ZH (b), WW (c), ZW (d) and ZZ (e) productions in the SM. first and second order persistent entropy are represented by red (+), green (dot) and purple (triangle) points. In table 2, zeroth order persistent entropy for WH and ZH are the same (\(S_{0}=1.52\)) and for WW, ZW and ZZ, \(S_{0}\approx 1.60\). In figure 4(a) and 4(b), the purple triangles (corresponds to \(H_{2}\)) has _birth_ points beginning at larger \(r\approx 0.1\) and \(r\approx 0.2\), respectively compared to figure 4(c) and 4(d). In table 2, we see that \(S_{1}\) for ZZ and ZH are the largest compared to others. Similarly, \(S_{2}\) for ZH is also the largest and it is corroborated with the spread of purple triangles to large \(r\approx 0.6\). 
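To make the point-cloud pipeline described above concrete, the following is a minimal sketch, not the analysis code behind the results reported here, of how such summaries can be computed with giotto-tda. It assumes that the \((\eta,\phi,p_{T},m_{inv})\) coordinates of all reconstructed objects in an event ensemble have already been extracted from the LHCO file into a single `(n_points, 4)` array; the function names and the number of Betti-curve bins are illustrative placeholders.

```python
# Minimal sketch of the TDA pipeline: one point cloud per simulated event
# ensemble, built from normalized (eta, phi, pT, m_inv) coordinates, followed
# by weak-alpha persistence, persistence entropies and Betti curves/areas.
import numpy as np
from gtda.homology import WeakAlphaPersistence
from gtda.diagrams import PersistenceEntropy, BettiCurve


def normalize(cloud):
    """Rescale each of the four coordinates to the [-1, 1] range."""
    lo, hi = cloud.min(axis=0), cloud.max(axis=0)
    return 2.0 * (cloud - lo) / (hi - lo) - 1.0


def topological_summaries(cloud, n_bins=100):
    # giotto-tda transformers expect a collection of point clouds,
    # hence the leading axis of length one.
    cloud = normalize(cloud)[None, :, :]

    # Birth-death pairs for H0, H1, H2 from the weak alpha filtration.
    diagrams = WeakAlphaPersistence(homology_dimensions=(0, 1, 2)).fit_transform(cloud)

    # Persistence entropies S_0, S_1, S_2 (one value per homology dimension).
    entropies = PersistenceEntropy().fit_transform(diagrams)[0]

    # Betti curves sampled on n_bins filtration values; the Betti area is
    # approximated by trapezoidal integration along the sampling grid
    # (proportional to the exact area for a uniform grid).
    curves = BettiCurve(n_bins=n_bins).fit_transform(diagrams)[0]
    betti_areas = np.trapz(curves, axis=-1)

    return entropies, betti_areas
```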
In figure 5, we present Betti curves for WH, ZH, WW, ZW and ZZ corresponding to \(H_{0,1,2}\) homology groups. We also present the Betti areas (area under Betti curves) in table 3 along with the fractional Betti area in the brackets. We see that ZH and ZZ have the lowest Betti areas across all homology dimensions. This is because of lesser number of points in the ensemble resulting from lower production cross-sections compared to others. However, the fractional contributions of Betti areas are somewhat similar in the SM except for a slightly larger (0.92) for the ZZ production. Also, the shape of Betti curves corresponding to \(H_{0,1}\) for ZH and ZZ are different from WH, WW and ZW in terms of the location of peak and slope of the curve in figure 5. Next, we consider the resonant production of WH and ZH and subsequent leptonic decay of gauge bosons and invisible decay of the Higgs boson in a real singlet extension of the SM featuring a scalar dark matter (DM) candidate. ## 5 Real singlet extension of the SM We consider a simple extension of the SM with a real \(SU(2)_{\rm L}\) singlet scalar field \(S\). We also impose \(Z_{2}\) symmetry on the scalar sector such that \(S\) is odd and all SM fields are even under that. Since \(Z_{2}\) symmetry ensures a stable singlet scalar particle, this model also serves as the simplest extension of the SM featuring a scalar dark matter (DM) candidate. We choose this model to illustrate that the persistent homology of the ensemble of collider signals of a BSM model can have a different topological signature compared to the SM. The tree-level scalar potential is given by \[V(H,S)=-\frac{\mu^{2}}{2}H^{\dagger}H-\frac{\lambda_{H}}{2}(H^{\dagger}H)^{2} -\frac{M_{S}^{2}}{2}S^{2}-\frac{\lambda_{s}}{2}S^{4}-\lambda_{sh}H^{\dagger} HS^{2} \tag{10}\] Around the vacuum expectation value (_vev_), the neutral component of the SM Higgs doublet (\(\mathcal{H}_{0}\)) and real singlet scalar (\(S\)) is parameterized as \[\mathcal{H}_{0}=\frac{v+h}{\sqrt{2}},\ S=\frac{v_{s}+s}{\sqrt{2}}, \tag{11}\] where \(v=246\) GeV (\(v_{s}\)) is the _vev_ for \(\mathcal{H}_{0}\) (\(S\)). \(Z_{2}\) invariance of the scalar sector requires zero _vev_ for the real singlet scalar, i.e., \(v_{s}=\langle S\rangle=0\). Thus, the tree-level mass of the real singlet \(S\) is given by \[m_{x}^{2}=M_{S}^{2}+\lambda_{sh}v^{2} \tag{12}\] Thus, in this simple extension of the SM, the mass of the singlet dark matter (DM) candidate \(S\) is primarily governed by \(M_{S}^{2}\) and \(\lambda_{sh}\). \(\lambda_{sh}\) is the only term in the scalar potential contributing to the coupling between singlet DM and the SM Higgs boson. Thus, the branching ratio (BR) of the Higgs for invisible decay to singlet scalar DM is primarily determined by the \(\lambda_{sh}\) and \(m_{h}-m_{s}\). We have implemented this model in SARAH v4.5.1[46; 47] and the spectra are obtained using SARAH generated SPheno v4.0.5[48; 49] While keeping \(\lambda_{H}=0.255\) to ensure the SM Higgs boson with \(m_{h}\approx 125\) GeV and a low value for singlet \begin{table} \begin{tabular}{|l|c|c|c|} \hline Event type & \(S_{0}\) & \(S_{1}\) & \(S_{2}\) \\ \hline WH & 1.52 & 2.39 & 10.13 \\ \hline ZH & 1.52 & 2.86 & 67.22 \\ \hline WW & 1.60 & 2.50 & 10.28 \\ \hline ZW & 1.57 & 2.32 & 6.35 \\ \hline ZZ & 1.59 & 3.36 & -2.19 \\ \hline \end{tabular} \end{table} Table 2: Persistent entropies for the SM resonant productions. \(S_{k}\) stands for \(k-th\) persistent entropy. 
\begin{table} \begin{tabular}{|l|c|c|c|} \hline Event type & \(BA_{0}\) & \(BA_{1}\) & \(BA_{2}\) \\ \hline WH & 135.82 (0.89) & 15.64 (0.10) & 1.98 (0.01) \\ \hline ZH & 79.19 (0.90) & 8.12 (0.09) & 1.03 (0.01) \\ \hline WW & 164.72 (0.90) & 17.27 (0.09) & 1.57 (0.01) \\ \hline ZW & 219.44 (0.89) & 25.81 (0.10) & 2.78 (0.01) \\ \hline ZZ & 58.13 (0.92) & 4.95 (0.08) & 0.29 (0.0) \\ \hline \end{tabular} \end{table} Table 3: \(BA_{k}\) is the \(k-th\) order Betti area for WH, ZH, WW, ZW and ZZ productions in the SM. The fractional Betti areas are indicated in brackets. Figure 6: Scan over parameter space of the SSDM. The red (green) patches in (a) and (c) refer to excluded (allowed) regions when subjected to constraints from all searches at LHC for a 125 GeV neutral scalar. The \(\lambda_{sh}\)-\(M_{S}^{2}\) parameter space with heat map with \(m_{s}\) is shown in (b). The same region with Higgs invisible branching fraction is given in (d). The benchmark points A and B in table 4 are also indicated in all of these plots. Figure 8: Betti curves for the Higgs invisible decay in the SM and the SSDM benchmark scenarios are presented. The first (second) row corresponds to WH (ZH) production mode. Figure 7: _birth-death_ charts or the persistent diagrams for the Higgs invisible decay in the SSDM benchmark scenarios and the SM backgrounds are presented. First (second) row corresponds to WH (ZH) production mode in the SSDM. quartic coupling strength \(\lambda_{s}=0.1\), we have performed a scan over parameter space with \[|M_{S}^{2}|\leq 3\times 10^{3}\text{GeV}^{2},\ |\lambda_{sh}|\leq 0.1. \tag{13}\] In figure 6(a) and 6(b), we see a sharp edge (\(m_{s}\approx 0\)) with a slope of 135 degree corresponding to \(m_{s}^{2}\geq 0\) in equation 12. We also subject the parameter space under Higgs precision tests implemented by HiggsTools v1.1.3 [50] which include three sub-packages HiggsPredictions, HiggsBounds [51], and HiggsSignals [52] covering all publicly accessible experimental findings from 125 GeV Higgs boson measurements and searches for new (scalar) particles. The allowed regions are shaded in green and excluded regions in red in 6(a) and 6(c). We observe that only low values of \(\lambda_{sh}\) are allowed under the Higgs precision test. This is kind of expected since large \(\lambda_{sh}\) corresponds to large \(h\to s\)\(s\) invisible decay. This is corroborated in figure 6(d) with low BR(\(h\to s\)\(s\)) for low values of \(\lambda_{sh}\). Based on the allowed regions of parameter space, we choose two benchmark scenarios (table 4) for demonstrating the global topological features associated with the signals involving invisible decay of Higgs boson to the real singlet DM in the SSDM. We also will contrast the findings with the SM counterpart. ## 6 Persistent homology of invisible Higgs decays in the SM and the SSDM In the SM, extremely low (0.1%) branching fraction to invisible final states is primarily through \(H\to ZZ^{(*)}\to\bar{\nu}\bar{\nu}\nu\)[53; 54]. On the contrary, many BSM models feature a sizable amount of Higgs invisible decay to massive stable particles protected under some global symmetries, as in the SSDM mentioned previously. Thus, we expect a characteristic difference between the global topological properties of the events associated with the BSM model and the SM, particularly in the invisible decay of Higgs. Table 4 presents two benchmark scenarios allowed under Higgs precision tests. 
Benchmark-A and benchmark-B have similar coupling strengths (\(\lambda_{sh}=0.005\)) between singlet scalar and Higgs boson. However, benchmark-A has \(m_{s}=4.8\) GeV and benchmark-B has \(m_{s}=50\) GeV. Thus, the BR(\(h\to s\)\(s\)) for benchmark-B is lower than benchmark-A due to compression in available phase space for decay. For the SSDM case, we consider resonant production of ZH and WH with leptonic decay of the Z/W boson and the invisible decay of H (\(H\to s\)\(s\)). For background estimation, we consider electroweak resonant productions with leptonic decay of the Z/W boson and invisible decay of Z and H in the SM. This leads to two kinds of signals, i.e., \(1\ell+MET\) and \(2\ell+MET\) for benchmark-A and benchmark-B. We perform collider simulation upto detector level similar to section 3. We apply \(p_{T}^{l}>[50,40]\) GeV depending on the lepton counts. For \(Z\to l^{*}l^{-}\) process, we keep the leptonic invariant mass window from 80 GeV to 100 GeV. All the processes, including the SM background processes, we keep leptonic transverse mass \(m_{T}^{l}<40\) GeV and \(MET<150\) GeV to effectively filter the SM backgrounds. The _birth-death_ chart or the persistent diagram for the SM backgrounds and the benchmark scenarios of the SSDM are compared in figure 7. In table 5, we give persistent entropies associated with the persistent diagrams in figure 7 for up to the second homology dimension. We see figure 7 (e) and (f) are sparsely populated compared to other plots. This can be attributed to very low resonant production cross sections of ZH for benchmark-A (4.4 fb) and benchmark-B (2.8 fb), leading to lesser points at a fixed integrated luminosity. Referring to the table 5, we find that in the case of the SM, the persistent entropy \(S_{1}\) is slightly lower compared to the SSDM in both WH and ZH modes. On the contrary, \(S_{2}\) is larger in magnitudes for the SM than for the SSDM. This can be corroborated by a larger spread of purple triangles above the diagonal in the SM compared to the SSDM. compared to the SM background. On the other hand, we see the opposite trend for the Betti areas corresponding to \(H_{1,2}\). ## 7 Summary and Conclusion Observations recorded by several detectors at LHC involve a complex interplay of countless variables in action. In such a complex experimental setup, discovering several fundamental particles predicted by the quantum field theory models is triumphant for particle physics. In this preliminary study, we suggest a generic novel framework to investigate the properties of the ensemble of event space as a whole using persistent homology. This technique is slowly gaining popularity among the data science community as complementary to classical machine learning approaches. Philosophically, such global topological properties also establish that the system as a whole is as characteristically important as its components. We suggest that the signature of fundamental laws of Nature may also be found in the very complex global relations and information geometry associated with the LHC observations. This work serves as an exploratory step in that direction. We have demonstrated the usability of the framework first for the SM electroweak sector and subsequently for the Higgs invisible decays in the SM and the SSDM. We find the characteristic difference between different physics scenarios encoded in the persistent diagrams and Betti curves for different homology dimensions. 
The associated persistent entropies and Betti areas also serve as topological markers for the processes under discussion. We find that these global topological properties can supplement and complement the kinematic variables applied event by event in signal and background discrimination. These features can also be fed to machine learning classifiers for discriminating new physics scenarios from the SM background. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements JB thanks IKSMHA Centre, IIT Mandi and IKS Centre, ISS Delhi for their support while part of the work was completed.
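As a rough illustration of the remark that these global features can be combined with machine learning, the following sketch, which is not part of the analysis above, feeds hypothetical arrays `features_sm` and `features_ssdm` of per-ensemble summaries \([S_{0},S_{1},S_{2},BA_{0},BA_{1},BA_{2}]\) to an off-the-shelf classifier.

```python
# Illustrative only: discriminate SM from SSDM ensembles using the global
# topological summaries as input features.  `features_sm` and `features_ssdm`
# are assumed to be arrays of shape (n_pseudo_experiments, 6) computed, e.g.,
# with the topological_summaries sketch shown earlier on many independently
# simulated event ensembles.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score


def discriminate(features_sm, features_ssdm):
    X = np.vstack([features_sm, features_ssdm])
    y = np.concatenate([np.zeros(len(features_sm)), np.ones(len(features_ssdm))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    # Area under the ROC curve as a simple measure of separation power.
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```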
2304.10885
Solution Properties for Perturbed Linear and Nonlinear Integral Equations
In this study we consider a perturbative series solution with respect to a parameter {\epsilon} > 0. In this methodology the solution is written as an infinite sum of a series of functional terms which usually converges fast to the exact desired solution. We first investigate perturbative solutions for kernel-perturbed integral equations and prove convergence of the perturbation series in appropriate ranges. Next we investigate perturbation series solutions for nonlinear perturbations of integral equations of Hammerstein type and formulate conditions for their convergence. Finally we prove the existence of a maximal perturbation range for nonlinear integral equations.
Markos Z. Tsoukalas, Panagiotis G. Asteris
2023-04-21T10:59:30Z
http://arxiv.org/abs/2304.10885v1
# Solution Properties for Perturbed Linear and Nonlinear Integral Equations ###### Abstract In this study we consider perturbative series solution with respect to a parameter \(\varepsilon>0\). In this methodology the solution is considered as an infinite sum of a series of functional terms which usually converges fast to the exact desired solution. Then we investigate perturbative solutions for kernel perturbed integral equations and prove the convergence in an appropriate ranges of the perturbation series. Next we investigate perturbation series solutions for nonlinear perturbations of integral equations of Hammerstein type and formulate conditions for their convergence. Finally we prove the existence of a maximal perturbation range for non linear integral equations. **Keywords:** perturbation, Fredholm, Hammerstein integral equation Introduction Integral equations arise in many fields in mathematical physics, biology, chemical kinetics, mechanics, etc.. In recent years there is a literature dealing with homotopy perturbation methods in integral equations ([1, 3, 2, 4, 5, 6, 9, 8, 10, 13, 14, 18]). However despite its fast numerical convergence, the convergence of the OHAM (optimal homotopy asymptotic method) is not proved formally in ([1, 2, 4, 5, 3, 8, 9, 14, 18]). In this work we deal mainly with existence theorems for nonlinear integral equations for solutions in \(C[0,1]\) or \(L^{2}[0,1]\) spaces. Existence theorems are given in ([12, 16, 17]) but here we give versions of these theorems for perturbed integral equations by formulating and proving basic existence theorems for non linear integral equations considering perturbative series solution with respect to a parameter \(\varepsilon>0\). In this methodology the solution is considered as an infinite sum of a series of functional terms which usually converges fast to the exact desired solution. Particularly we investigate perturbative solutions for kernel perturbed integral equations and prove the convergence in appropriate ranges of the perturbation series. In the next section 3 we investigate perturbation series solutions for nonlinear perturbations of non linear integral equations of Hammerstein type and formulate conditions for their convergence. Finally we prove the existence of a maximal perturbation range for non linear integral equations. The methods used here can be exploited to implement numerical procedures for perturbation series solutions. ## 2 Series Solutions for perturbed kernels Here we consider'small' in a sense as we shall see perturbations in a non linear sense of the kernel \(K\) of the integral equation and we prove that the perturbative series solution converges absolutely in an appropriate range for \(\varepsilon\), the perturbation parameter. In theorem (2.1) we deal with perturbations of the kernel in the linear Fredholm equation of the second kind and prove that if the unperturbed original equation has a \(C[0,1]\) or \(L^{2}[0,1]\) solution then the perturbed equation has a corresponding \(C[0,1]\) or \(L^{2}[0,1]\) solution. Next in theorem (2.2) we prove the corresponding result dropping the boundedness assumption for the perturbed kernel. Continuing we prove in theorem (2.3) that the corresponding result holds under suitable as sumptions for a non linear Hammerstein integral equation. Following we prove the corresponding result in theorem (2.4) for a linear Fredholm equation of the second kind. with boundedly differentiable kernel. 
Next we prove the theorem (2.5) stating the result for \(L^{2}\) integrable perturbed kernel in a Hammerstein equation. We prove the same kind of result for equations with boundedly differentiable kernel in theorem (2.6). We proceed with theorem (2.7) stating the corresponding result for \(L^{1}\) integrable kernel. We denote by \(\phi(\varepsilon,x)=\phi_{(0,0)}(x)\) the unknown sought function and by \(\phi_{(0,\nu)}(x)=\frac{\partial^{\nu}\phi(\varepsilon,x)}{\partial \varepsilon^{\nu}}|_{\varepsilon=0},\nu=0,1,2,...\) its various order derivatives at \(\varepsilon=0\). **Theorem 2.1**: _Let's consider the perturbed integral equation_ \[\phi(\varepsilon,x)-\omega\int_{0}^{1}\Gamma(\varepsilon,x,y)\phi(\varepsilon, y)dy=f(x),\quad x\in[0,1],\varepsilon\geq 0 \tag{2.1}\] _where \(\Gamma(\varepsilon,x,y)=\Gamma_{0}(x,y)+\varepsilon\Gamma_{1}(x,y)\) is continuous and \(L^{2}\) integrable respectively functions of their variables and \(\int_{0}^{1}\int_{0}^{1}|\Gamma(\varepsilon,x,y|dxdy<\infty\). Suppose that_ \[|\Gamma_{0}(\varepsilon,x,y)|\leq C,\quad\ 0\leq x,y\leq 1,\varepsilon\geq 0. \tag{2.2}\] _Assuming that the unperturbed equation for \(\varepsilon=0\) has a solution in \(C([0,1])\) or \(L^{2}([0,1])\) respectively then the perturbation series_ \[\phi_{(0,0)}(x)=\omega\int_{0}^{1}\Gamma_{0},x,y)\phi(0,y)dy+f(x) \tag{2.3}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma_{1}(x,y)\phi(0,y)dy+ \omega\int_{0}^{1}\Gamma_{0}(x,y)\phi_{(0,1)}(y)dy\] (2.4) \[\cdots\] \[\phi_{(0,\nu)}(x)=\frac{\partial^{\nu}\phi(\varepsilon,x)}{ \partial\varepsilon^{\nu}}|_{\varepsilon=0}=\omega\int_{0}^{1}\Gamma_{1}((x,y )\phi_{(0,\nu-1)}(y)dy\] \[+\omega\int_{0}^{1}\Gamma(0,x,y)\phi_{(0,\nu)}(y)dy,\nu=1,2,.... \tag{2.5}\] _converges absolutely in \(||\cdot(x)||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{2}\) norm to a solution_ \[\phi(\varepsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{ \varepsilon^{j}}{j!} \tag{2.6}\] _which is continuous with respect to \((\omega,\epsilon,x)\)._ **Proof:** As \(\Gamma_{0}(x,y)\leq C\) and \(\Gamma_{1}(x,y)\leq C\) we have that \(||\Gamma_{0}||_{1}\leq C,||\Gamma_{1}||_{1}\leq C\), which implies \[||\phi_{0}^{(\nu)}||_{0}=||\phi(0,x)||_{0}\leq|\omega|||\Gamma_{1}||_{1}\cdot|| \phi_{0}^{(\nu-1)}||_{0}+|\omega|||\Gamma_{0}||_{1}\cdot||\phi_{0}^{(\nu)}||_{0} \tag{2.7}\] leading to \[||\phi_{0}^{(\nu)}||_{0}\leq\frac{|\omega|||\Gamma_{1}||_{1}}{1-|\omega||| \Gamma_{0}||_{1}}||\phi^{(\nu-1)}||_{2} \tag{2.8}\] which implies that for \[\frac{|\omega|||\Gamma_{1}||_{1}}{1-|\omega|||\Gamma_{0}||_{1}}=\rho_{0},\] we have that \[||\phi(\epsilon,x)||_{0}=\sum_{j=0}^{\infty}||\phi_{(0,j)}(x)||_ {0}\frac{\epsilon^{j}}{j!} \tag{2.9}\] \[\leq\sum_{j=0}^{\infty}\rho_{0}^{j}\frac{\epsilon^{j}}{j!}||\phi _{0}||_{0} \tag{2.10}\] the solution converges in \(||\cdot||_{0}\) for any value of \(\omega,\rho_{0},\epsilon\geq 0\). 
Also we observe that as \(||\Gamma_{j}||_{2}<\infty\) for \(j=0,1\), \[||\phi_{0}^{(\nu)}||_{2} = ||\phi_{(0,0)}(x)||_{2}\leq|\omega|||\Gamma_{1}||_{2}\cdot||\phi_{ (0,\nu-1)}||_{2} \tag{2.11}\] \[+ |\omega|||\Gamma_{0}||_{2}\cdot||\phi_{(0,\nu)}||_{2}\] leading to \[||\phi_{(0,\nu)}||_{2}\leq\frac{|\omega|||\Gamma_{1}||_{2}}{1-|\omega||| \Gamma_{0}||_{2}}||\phi_{(0,\nu-1)}||_{2} \tag{2.12}\] which implies that for \[\frac{|\omega|||\Gamma_{1}||_{2}}{1-|\omega|||\Gamma_{0}||_{2}}=\rho,\] we have that \[||\phi(\epsilon,x)||_{2}\leq\sum_{j=0}^{\infty}||\phi_{(0.j)}(x)||_{2 }\left[\frac{\epsilon^{j}}{j!}\right] \tag{2.13}\] \[\leq\sum_{j=0}^{\infty}|\rho^{j}\left[\frac{\epsilon^{j}}{j!} \right]||\phi_{0}||_{2}\leq||\phi_{0}||_{2}e^{|\rho\epsilon|} \tag{2.14}\] the solution converges for any value of \(\omega,\rho,\epsilon\geq 0\). As a convergent power series everywhere it is continuous with respect to \((x,\epsilon)\) as one can see immediately from uniform convergence. Also we have that is continuous with respect to \(\omega\). **Theorem 2.2**: _Let's consider the perturbed integral equation_ \[\phi_{(\epsilon)}(x)-\omega\int_{0}^{1}\Gamma(\epsilon,x,y)\phi_{(\epsilon)}( y)dy=f(x),\quad x\in[0,1],\epsilon\geq 0 \tag{2.15}\] _where \(\Gamma(\epsilon,x,y)=\Gamma_{0}(x,y)+\epsilon\Gamma_{1}(x,y)\) is \(L^{2}\) integrable respectively functions of their variables and \(\int_{0}^{1}\int_{0}^{1}|\Gamma(\epsilon,x,y)|dxdy<\infty\). Assuming that the unperturbed equation for \(\epsilon=0\) has a solution in \(C([0,1])\) then the perturbation series_ \[\phi_{(0,0)}(x)=\omega\int_{0}^{1}\Gamma_{0},x,y)\phi_{(0,0)}(y) dy+f(x) \tag{2.16}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma_{1}(x,y)\phi_{(0,0)}(y) dy+\omega\int_{0}^{1}\Gamma_{0}(x,y)\phi_{(0,1)}(y)dy\] (2.17) \[\cdots\] \[\phi_{(0,\nu)}(x)=\frac{\partial^{\nu}\phi(x)}{\partial x^{\nu} }|_{x=0}=\omega\int_{0}^{1}\Gamma_{1}((x,y)\phi_{(0,\nu-1)}(y)dy\] \[+\omega\int_{0}^{1}\Gamma(0,x,y)\phi_{(0,\nu)}(y)dy,\nu=1,2,.... \tag{2.18}\] _converges absolutely in \(||\cdot||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{2}\) norm to a solution_ \[\phi(\epsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{\epsilon^{j}}{j!} \tag{2.19}\] _which is continuous with respect to \((\omega,\varepsilon,x)\)._ **Proof:** As a result of the \(L^{2}\) integrability we have that we have converging in \(L^{2}\) sequences of kernels to \(\Gamma_{0},\Gamma_{1}\) respectively such that \(\Gamma_{0,n}(x,y)\leq C_{n}\to C_{|infty}\leq\infty\) and \(\Gamma_{1,n}(x,y)\leq C_{n}\to C_{\infty,1}\leq\infty\) and we have that \(||\Gamma_{0,n}||_{1}\leq C_{n},||\Gamma_{1,n}||_{1}\leq C_{n}\), which implies corresponding inequalities to (2.7,2.8), and therefore there exists a solution sequence \[||\phi_{(n,\varepsilon)}(x)||_{0}=\sum_{j=0}^{\infty}||\phi_{(n,0,j)}(x)||_{0}\frac{\varepsilon^{j}}{j!} \tag{2.20}\] \[\leq\sum_{j=0}^{\infty}\rho_{0,n}^{j}\frac{\varepsilon^{j}}{j!} ||\phi_{0,n}||_{0},n=0,1,2,... \tag{2.21}\] the solution converges in \(||\cdot||_{0}\) for any value of \(\omega,\rho_{0},\varepsilon\geq 0\). As \(\rho_{0,n}\to\rho_{0}<\infty\) we have that the limit solution \(||\phi(\varepsilon,x)||_{0}<\infty\) is bounded everywhere for \((\omega,x,\varepsilon)\). As the convergence is uniform we have that \(\phi(\varepsilon,x)\) is continuous. In the same manner we have as a result that \(\phi(\varepsilon,x)\) belongs to \(L_{2}[0,1]\). 
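Before turning to nonlinear perturbations, we note that the series (2.19) is straightforward to implement numerically. The sketch below, which is not taken from the text, uses a Nyström (trapezoidal) discretization of \([0,1]\) and works with the Taylor coefficients \(c_{\nu}=\phi_{(0,\nu)}/\nu!\), which satisfy \((I-K_{0})c_{0}=f\) and \((I-K_{0})c_{\nu}=K_{1}c_{\nu-1}\) for the discretized operators \(K_{0}\), \(K_{1}\); the kernels \(\Gamma_{0}(x,y)=xy\), \(\Gamma_{1}(x,y)=x+y\), the free term \(f(x)=1\) and the values of \(\omega\) and \(\varepsilon\) are illustrative choices only.

```python
# Numerical sketch of the perturbation series for the kernel-perturbed linear
# Fredholm equation  phi - omega * int (Gamma_0 + eps*Gamma_1) phi dy = f,
# using an N-point Nystrom (trapezoidal) discretization of [0, 1].
import numpy as np

N, omega, eps, n_terms = 200, 0.3, 0.2, 15
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights
X, Y = np.meshgrid(x, x, indexing="ij")

K0 = omega * (X * Y) * w        # omega * Gamma_0(x_i, y_j) * w_j
K1 = omega * (X + Y) * w        # omega * Gamma_1(x_i, y_j) * w_j
f = np.ones(N)
I = np.eye(N)

# Taylor coefficients c_nu = phi_(0,nu)/nu! of the series (2.19):
#   (I - K0) c_0 = f,   (I - K0) c_nu = K1 c_{nu-1},   nu = 1, 2, ...
c = [np.linalg.solve(I - K0, f)]
for nu in range(1, n_terms):
    c.append(np.linalg.solve(I - K0, K1 @ c[-1]))

# Partial sum of the perturbation series  phi(eps, x) = sum_nu c_nu * eps**nu.
phi_series = sum(cn * eps**nu for nu, cn in enumerate(c))

# Direct Nystrom solution of the perturbed equation, for comparison.
phi_direct = np.linalg.solve(I - (K0 + eps * K1), f)
print("max |series - direct| =", np.abs(phi_series - phi_direct).max())
```

For this choice of kernels and parameters the partial sums converge rapidly, and the printed discrepancy reflects only the truncation of the series, since both solutions are computed on the same Nyström grid.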
**Theorem 2.3**: _Let's consider the perturbed integral equation_ \[\phi(\varepsilon,x)-\omega\int_{0}^{1}\Gamma(\varepsilon,x,y)\psi(y,\phi( \varepsilon,y))dy=f(x),\ \ \ x\in[0,1],\varepsilon\geq 0 \tag{2.22}\] _where \(\Gamma(\varepsilon,x,y)=\Gamma_{0}(x,y)+\varepsilon\Gamma_{1}(x,y)\) is \(L^{2}\) integrable respectively function of its variables and \(\int_{0}^{1}\int_{0}^{1}|\Gamma(\varepsilon,x,y)|^{2}dxdy<\infty\). Suppose that_ \[|\Gamma_{j}(\varepsilon,x,y)|\leq C_{j},\ \ \ \ \ 0\leq x,y\leq 1,\varepsilon\geq 0,j=0,1. \tag{2.23}\] _and \(\psi^{(0,\nu)}(y,s)=\frac{\partial^{\nu}\psi(y,s)}{\partial s^{\nu}}\leq\frac{b ^{\nu}}{E(\nu,\nu)},0\leq b\leq 1,\ \ \ \ \nu=0,1,2,...\) where \(E(n,k)\leq\binom{2n-1}{n-1}\) is the number of integer solutions of the equation_ \[\sum_{j=1}^{k}r_{j}\cdot s_{j}=n,0\leq r_{j},s_{j}\leq n,j=1,2,..,k. \tag{2.24}\] _Assuming that the unperturbed equation for \(\varepsilon=0\) has a solution in \(C([0,1])\) then the perturbation series_ \[\phi_{(0,0)}(x)=\phi(0,x)=\omega\int_{0}^{1}\Gamma_{0}(x,y)\psi(y, \phi_{(0,0)}(y))dy \tag{2.25}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma_{1}(x,y)\psi(y,\phi_{(0,0 )}(y))dy\] \[+ \omega\int_{0}^{1}\Gamma_{0}(x,y)\psi^{(0,1)}(y,\phi_{(0,0)}(y)) \phi_{(0,1)}(y)dy\] \[\cdots\] \[\phi_{(0,\nu)}(x)=\omega\int_{0}^{1}\Gamma_{0}(x,y)\psi^{(0,\nu)} (y,\phi_{(0,0)}(y))P_{\nu+1}(y)dy\] \[+ \omega\int_{0}^{1}\Gamma_{1}(x,y)\psi^{(0,\nu-1)}(y,\phi_{(0,0)} (y))P_{\nu}(y)dy\] \[P_{\nu}(y)=\] \[\sum_{\begin{subarray}{c}\Sigma_{j=1}^{\nu-1}r_{j}\cdot s_{j}= \nu-1\\ r_{j}\cdot s_{j}=0,j=1,\ldots,\nu-1\end{subarray}}^{\nu-1} \left(\begin{subarray}{c}\nu-1\\ r_{1},s_{1};\cdots;r_{\nu-1},s_{\nu-1}\end{subarray}\right)\prod_{j=1}^{\nu-1 }\phi_{(0,s_{j})}^{r_{j}}(y),\;\;\;\nu=2,3,...\] (2.29) \[\left(\begin{subarray}{c}m\\ r_{1},s_{1};\cdots;r_{m},s_{m}\end{subarray}\right)=\frac{m!}{\prod_{j=1}^{m}( s_{j}!)^{r_{j}}}\] (2.30) \[\psi^{(k,\nu)}(s,\phi(\varepsilon,y))=\frac{\partial^{k}\partial ^{\nu}\psi(s,\phi(\varepsilon,y))}{\partial s^{k}\partial\varepsilon^{\nu}}, \;\;\;\;k,\nu=0,1,2, \tag{2.31}\] _converges absolutely in \(||\cdot||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{2}\) norm to a solution_ \[\phi(\varepsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{\varepsilon^{j}}{ j!} \tag{2.32}\] _which is continuous with respect to \((\omega,\varepsilon,x)\) for suitable ranges._ **Proof:** As a result of the \(L^{2}\) integrability we have that we have converging in \(L^{2}\) sequences of kernels to \(\Gamma_{0},\Gamma_{1}\) respectively such that \(\Gamma_{0,n}(x,y)\leq C_{n}^{\prime}\to C_{0},\leq\infty\) and \(\Gamma_{1,n}(x,y)\leq C_{n}^{\prime}\to C_{1}\leq\infty\) and we have that \(||\Gamma_{0,n}||_{1}\leq C_{n},||\Gamma_{1,n}||_{1}\leq C_{n}^{\prime}\), which implies that inductively we can show that \(||\phi_{n}(0,\cdot)||_{0}\leq\left[\frac{\omega(C_{0}}{1-\omega C_{0}}+\frac{ \omega C_{1}}{1-\omega C_{0}}\right](n)!D^{n+1},n=1,2,...\), so for \(\omega\leq g_{\pm}(C_{0},C_{1},D)\) we have the bound \(||\phi_{n}(0,\cdot)||_{0}\leq n!D^{n},n=1,2,...\), where if \(\Delta(C_{0},C_{1},D)=[2DC_{0}+C_{0}+C_{1}]^{2}-4(DC_{1}+2C_{0}^{2})<0\) we have for any \(\omega\) the bound and otherwise for \(\Delta(C_{0},C_{1},D)>0\) we have for \[\omega<g_{\pm}(C_{0},C_{1},D)=\frac{2DC_{0}+C_{0}+C_{1}\pm\sqrt{\Delta(C_{0},C_{ 1},D)}}{2(DC_{1}+2C_{0}^{2})}>0 \tag{2.33}\] The induction is based on the inequality \[|P_{\nu}(y)|\leq E(\nu-1,\nu-1)(\nu-1)! 
\tag{2.34}\] and \[||\phi_{0,\nu}(\cdot)||_{0}\leq|\omega|\int_{0}^{1}|\Gamma_{0}(x,y)||\psi^{(0,\nu)}(y,\phi_{0,0}(y))||P_{\nu+1}(y)|dy \tag{2.35}\] \[+ |\omega|\int_{0}^{1}|\Gamma_{1}(x,y)||\psi^{(0,\nu-1)}(y,\phi_{0, 0}(y))||P_{\nu}(y)|dy\] \[\leq |\omega|C_{0}||\phi_{\nu}(0,\cdot)||_{0}\frac{b^{\nu}}{E(\nu,\nu )}+|\omega|C_{0}|\frac{b^{\nu}}{E(\nu,\nu)}E(\nu-1,\nu-1)\nu!D^{\nu}\] \[+ |\omega|C_{1}||_{0}\frac{b^{\nu}}{E(\nu,\nu)}(\nu-1)!D^{\nu-1}. \tag{2.36}\] and which leads to the convergent series \[||\phi_{(n,\varepsilon)}(x)||_{0}=\sum_{j=0}^{\infty}||\phi_{n, 0,j}(x)||_{0}\frac{\varepsilon^{j}}{j!} \tag{2.37}\] \[\leq\sum_{j=0}^{\infty}kj!D^{j}\frac{\varepsilon^{j}}{j!}||\phi_ {0,n}||_{0},n=0,1,2,... \tag{2.38}\] converging in \(||\cdot||_{0}\) for any value of \(\omega\leq g_{\pm}(C_{0},C_{1},D),1/D>\varepsilon\geq 0\). So we have that the limit solution \(||\phi(\varepsilon,x)||_{0}<\infty\) is bounded everywhere for \((\omega,x,\varepsilon)\) in the allowed ranges. As the convergence is uniform we have that \(\phi(\varepsilon,x)\) is continuous on the specified ranges. In the same manner we have as a result that \(\phi(\varepsilon,x)\) belongs to \(L_{2}[0,1]\). **Corrolary 2.1**: _Let \(f\), \(\Gamma_{0}\) be continuous and \(\Gamma_{1}\) to be \(L^{2}\) integrable. Then \(\Gamma(\varepsilon,x,y)\) defines a continuous, bounded solution for \(\varepsilon<\varepsilon_{0}>0\)._ **Proof:** If \(\Gamma_{0}\) is continuous the solution \(\phi_{(0)}(x)\) as in \(L^{2}[0,1]\) it implies that \(\phi_{(0)}(x)=f(x)+\omega\int_{0}^{1}\Gamma_{0}(x,y)\phi_{(0)}(y)dy\) as a sum of a continuous function and an integral with a continuous kernel is continuous. Applying theorem (2.3) we have the sought implication. **Corrolary 2.2**: _Let \(f\) continuous and \(\Gamma_{1}\) to be \(L^{2}\) integrable. Then \(\Gamma(\varepsilon,x,y)=\varepsilon I+\varepsilon(\Gamma_{1}-I)\) defines a continuous, bounded solution for \(\varepsilon<\varepsilon_{0}>0\)._ **Proof:** Immediate from the above corollary (2.1). Next we consider the behavior of the solution of the perturbed integral equation when the kernel has bounded uniformly derivative with respect to \(x\). Then the perturbation series has a bounded derivative and uniformly convergent solution. **Theorem 2.4**: _Let's consider the perturbed integral equation_ \[\phi(\varepsilon,x)-\omega\int_{0}^{1}\Gamma(\varepsilon,x,y)\phi( \varepsilon,y)dy=f(x),\ \ \ x\in[0,1],\varepsilon\geq 0 \tag{2.39}\] _where \(\Gamma(\varepsilon,x,y)=\Gamma_{0}(x,y)+\varepsilon\Gamma_{1}(x,y)\) is continuous and \(L^{2}\) integrable respectively functions of their variables and \(\int_{0}^{1}\int_{0}^{1}\Gamma(\varepsilon,x,y)dxdy<\infty\). Let_ \[\sup_{0\leq x\leq 1}\frac{\partial\Gamma_{j}(x,y)}{\partial x}\leq C _{1}>0,\ \ j=0,1.\] _Suppose that_ \[|\Gamma_{0}(\varepsilon,x,y)|\leq C,\ \ \ \ \ 0\leq x,y\leq 1, \varepsilon\geq 0. \tag{2.40}\] _Assuming that the unperturbed equation for \(\varepsilon=0\) has a solution in \(C^{1}([0,1])\) or \(L^{2}([0,1])\) respectively then the perturbation series_ \[\phi_{(0,0)}(x)=\omega\int_{0}^{1}\Gamma_{0},x,y)\phi_{(0,0)}(y)dy+f(x) \tag{2.41}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma_{1}(x,y)\phi_{(0,0)}(y)dy+ \omega\int_{0}^{1}\Gamma_{0}(x,y)\phi_{(0,1)}(y)dy\] (2.42) \[\cdots\] \[\phi_{(0,\nu)}(x)=\frac{\partial^{\nu}\phi(x)}{\partial x^{\nu}}| _{x=0}=\omega\int_{0}^{1}\Gamma_{1}((x,y)\phi_{(0,\nu-1)}(y)dy\] \[+\omega\int_{0}^{1}\Gamma(0,x,y)\phi_{(0,\nu)}(y)dy,\nu=1,2,.... 
\tag{2.43}\] _converges absolutely in \(||\cdot||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{2}\) norm to a solution_ \[\phi(\epsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{\epsilon^{j}}{j!} \tag{2.44}\] _which is continuous with respect to \((\omega,\epsilon,x)\) and has a convergent series of derivatives._ **Proof:** As \(\Gamma_{0}(x,y)\leq C\) and \(\Gamma_{1}(x,y)\leq C\) we have that \(||\Gamma_{0}||_{1}\leq C,||\Gamma_{1}||_{1}\leq C\), which implies \[||\phi_{(0,\nu)}||_{0} = ||\phi^{\prime}_{(0,0)}(x)||_{0}\leq|\omega|||\Gamma^{\prime}_{1 }||_{1}\cdot||\phi_{(0,\nu-1)}||_{0} \tag{2.45}\] \[+ |\omega|||\Gamma^{\prime}_{0}||_{1}\cdot||\phi_{(0,\nu)}||_{0}\] leading to \[||\phi^{\prime}_{(0,\nu)}||_{0}\leq\frac{|\omega|||\Gamma^{\prime}_{1}||_{1}} {1-|\omega|||\Gamma^{\prime}_{0}||_{1}}||\phi^{\prime}_{(0,\nu-1)}||_{2} \tag{2.46}\] which implies that for \[\frac{|\omega|||\Gamma^{\prime}_{1}||_{1}}{1-|\omega|||\Gamma^{\prime}_{0}||_{ 1}}=\rho_{0},\] we have that \[||\phi^{\prime}_{(\epsilon)}(x)||_{0} = \sum_{j=0}^{\infty}||\phi^{\prime}_{(0,j)}(x)||_{0}\frac{\epsilon ^{j}}{j!} \tag{2.47}\] \[\leq\sum_{j=0}^{\infty}\rho_{0}^{j}\frac{\epsilon^{j}}{j!}||\phi ^{\prime}_{0}||_{0} \tag{2.48}\] the solution converges in \(||\cdot||_{0}\) for any value of \(\omega,\rho_{0},\epsilon\geq 0\). Also we observe that as \(||\Gamma_{j}^{\prime}||_{2}<\infty\) for \(j=0,1\), \[||\phi_{(0,\nu)}^{\prime}||_{2} = ||\phi_{(0,0)}^{\prime}(x)||_{2}\leq|\omega|||\Gamma_{1}^{\prime}|| _{2}\cdot||\phi_{(0,\nu-1)}^{\prime}||_{2} \tag{2.49}\] \[+ |\omega|||\Gamma_{0}^{\prime}||_{2}\cdot||\phi_{(0,\nu)}^{\prime }||_{2}\] leading to \[||\phi_{(0,\nu)}^{\prime}||_{2}\leq\frac{|\omega|||\Gamma_{1}^{\prime}||_{2}}{ 1-|\omega|||\Gamma_{0}^{\prime}||_{2}}||\phi_{(0,\nu-1)}^{\prime}||_{2} \tag{2.50}\] which implies that for \[\frac{|\omega|||\Gamma_{1}^{\prime}||_{2}}{1-|\omega|||\Gamma_{0}^{\prime}|| _{2}}=\rho,\] we have that \[||\phi_{(\epsilon,0)}^{\prime}(x)||_{2}\leq\sum_{j=0}^{\infty}|| \phi_{(0,\nu)}^{\prime}(x)||_{2}\left[\frac{\epsilon^{j}}{j!}\right]\] \[\leq\sum_{j=0}^{\infty}|\rho^{j}\left[\frac{\epsilon^{j}}{j!} \right]||\phi_{0}^{\prime}||_{2}\leq||\phi_{0}^{\prime}||_{2}e^{|\rho\epsilon|} \tag{2.51}\] the derivative of the solution converges for any value of \(\omega,\rho,\epsilon\geq 0\). As a convergent power series everywhere it is continuous with respect to \((x,\epsilon)\) as one can see immediately from uniform convergence. Also we have that is continuous with respect to \(\omega\). Next we continue with the following theorem. **Theorem 2.5**: _Let's consider the perturbed integral equation_ \[\phi(\epsilon,x)-\omega\int_{0}^{1}\Gamma(\epsilon,x,y)\phi(\epsilon,y)dy=f(x),\ \ \ x\in[0,1],\epsilon\geq 0 \tag{2.52}\] _where \(\Gamma(\varepsilon,x,y)=\Gamma_{0}(x,y)+\varepsilon\Gamma_{1}(x,y)\) is \(L^{2}\) integrable respectively functions of their variables and \(\int_{0}^{1}\int_{0}^{1}\Gamma(\varepsilon,x,y)dxdy<\infty\). Suppose that_ \[|\Gamma_{j}(x,y)|\leq C,\ \ \ \ \ 0\leq x,y\leq 1,\varepsilon\geq 0,j=0,1. 
\tag{2.53}\] _Assuming that the unperturbed equation for \(\varepsilon=0\) has a solution in \(C^{1}([0,1])\) then the perturbation series_ \[\phi_{(0,0)}(x)=\omega\int_{0}^{1}\Gamma_{0},x,y)\phi_{(0,0)}(y) dy+f(x) \tag{2.54}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma_{1}(x,y)\phi_{(0,0)}(y) dy+\omega\int_{0}^{1}\Gamma_{0}(x,y)\phi_{(0,1)}(y)dy\] (2.55) \[\cdots\] \[\phi_{(0,\nu)}(x)=\frac{\partial^{\nu}\phi(x)}{\partial x^{\nu} }|_{x=0}=\omega\int_{0}^{1}\Gamma_{1}((x,y)\phi_{(0,\nu-1)}(y)dy\] \[+\omega\int_{0}^{1}\Gamma(0,x,y)\phi_{(0,\nu)}(y)dy,\nu=1,2,.... \tag{2.56}\] _converges absolutely in \(||\cdot(x)||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{2}\) norm to a solution_ \[\phi(\varepsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{ \varepsilon^{j}}{j!} \tag{2.57}\] _which is continuous with respect to \((\omega,\varepsilon,x)\)._ **Proof:** As we have that \(||\Gamma_{0}||_{1}\leq C,||\Gamma_{1}||_{1}\leq C\), which implies corresponding inequalities to (2.45,2.46), and therefore there exists a solution sequence \[||\phi_{(n,\varepsilon)}(x)||_{0}=\sum_{j=0}^{\infty}||\phi_{(n, 0,j)}(x)||_{0}\frac{\varepsilon^{j}}{j!} \tag{2.58}\] \[\leq\sum_{j=0}^{\infty}\rho_{0,n}^{j}\frac{\varepsilon^{j}}{j!}|| \phi_{0,n}||_{0}\leq||\phi_{0,n}||_{0}e^{|\rho_{0,n}\varepsilon},n=0,1,2,... \tag{2.59}\] the solution converges in \(||\cdot||_{0}\) for any value of \(\omega,\rho_{0},\varepsilon\geq 0\). As \(\rho_{0,n}\to\rho_{0}<\infty\) we have that the limit solution \(||\phi(\varepsilon,x)||_{0}<\infty\) is bounded everywhere for \((\omega,x,\varepsilon)\). As the convergence is uniform we have that \(\phi(\varepsilon,x)\) is \(C^{1}[0,1]\) continuous. In the same manner we have as a result that \(\phi^{\prime}_{(\varepsilon)}(x)\) belongs to \(L_{2}[0,1]\). We give the respective theorem to (2.3) for kernels with uniformly bounded derivative with respect to \(x\). **Theorem 2.6**: _Let's consider the perturbed integral equation_ \[\phi(\varepsilon,x)-\omega\int_{0}^{1}\Gamma(\varepsilon,x,y) \psi(y,\phi(\varepsilon,y))dy=f(x),\quad x\in[0,1],\varepsilon\geq 0 \tag{2.60}\] _where \(\Gamma(\varepsilon,x,y)=\Gamma_{0}(x,y)+\varepsilon\Gamma_{1}(x,y)\) is \(L^{2}\) integrable respectively function of its variables and \(\int_{0}^{1}\int_{0}^{1}|\Gamma(\varepsilon,x,y)|^{2}dxdy<\infty\). Let_ \[\sup_{0\leq x\leq 1}\frac{\partial\Gamma_{j}(x,y)}{ \partial x}\leq C^{\prime}_{j}>0,\;\;j=0,1.\] _Suppose that_ \[|\Gamma_{j}(x,y)|\leq C_{j},\;\;\;\;\;0\leq x,y\leq 1, \varepsilon\geq 0,j=0,1. \tag{2.61}\] _and and \(\psi^{(0,\nu)}(y,s)=\frac{\partial^{\nu}\psi(y,s)}{\partial s^{\nu}}\leq\frac {b^{\nu}}{E(\nu,\nu)},0\leq b\leq 1,\;\;\;\nu=0,1,2,...\) where \(E(n,k)\leq\binom{2n-1}{n-1}\) is the number of integer solutions of the equation_ \[\sum_{j=1}^{k}r_{j}\cdot s_{j}=n,0\leq r_{j},s_{j}\leq n,j=1,2,...k. 
\tag{2.63}\] _Assuming that the unperturbed equation for \(\varepsilon=0\) has a solution in \(C([0,1])\) then_ the perturbation series_ \[\phi_{(0,0)}(x)=\omega\int_{0}^{1}\Gamma_{0}(x,y)\psi(y,\phi_{(0,0)} (y))dy \tag{2.64}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma_{1}(x,y)\psi(y,\phi_{(0,0) }(y))dy\] \[+ \omega\int_{0}^{1}\Gamma_{0}(x,y)\psi^{(0,1)}(y,\phi_{(0,0)}(y)) \phi_{(0,1)}(y)dy\] \[.....\] \[\phi_{(0,\nu)}(x)\] \[=\omega\int_{0}^{1}\Gamma_{0}(x,y)\psi^{(0,\nu)}(y,\phi_{(0,0)}( y))P_{\nu+1}(y)dy\] \[+ \omega\int_{0}^{1}\Gamma_{1}(x,y)\psi^{(0,\nu-1)}(y,\phi_{(0,0)} (y))P_{\nu}(y)dy\] (2.66) \[P_{\nu}(y)=\] \[\sum_{\begin{subarray}{c}\Sigma_{j=1}^{\nu-1}r_{j}s_{j}=v-1\\ r_{j},s_{j}=0,j=1,...,\nu-1\end{subarray}}^{\nu-1} \left(\begin{subarray}{c}\nu-1\\ r_{1},s_{1};\cdots;r_{\nu-1},s_{\nu-1}\end{subarray}\right)\prod_{j=1}^{\nu-1 }\phi_{(0,s_{j})}(x)^{r_{j}}(y),\nu=2,3,...\] (2.67) \[\left(\begin{subarray}{c}m\\ r_{1},s_{1};\cdots;r_{m},s_{m}\end{subarray}\right)=\frac{m!}{\prod_{j=1}^{m}( s_{j}!)^{r_{j}}}\] (2.68) \[\psi^{(k,\nu)}(s,\phi(\varepsilon,y))=\frac{\partial^{k}\partial ^{\nu}\psi(s,\phi(\varepsilon,y))}{\partial s^{k}\partial\varepsilon^{\nu}}k, \nu=0,1,2,... \tag{2.69}\] _converges absolutely in \(||\cdot||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{2}\) norm to a solution_ \[\phi(\varepsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{\varepsilon^{j}}{j!} \tag{2.70}\] _which is continuous with respect to \((\omega,\varepsilon,x)\) for suitable ranges._ **Proof:** As \(\Gamma_{0}(x,y)\leq C\) and \(\Gamma_{1}(x,y)\leq C\) we have that \(||\Gamma_{0}||_{1}\leq C,||\Gamma_{1}||_{1}\leq C_{1}\), which implies that inductively we can show as in theorem (2.3) that \(||\phi_{(0,n)}(\cdot)||_{0}\leq\left[\frac{\omega C_{0}}{1-\omega C_{0}}+\frac {\omega C_{1}}{1-\omega C_{0}}\right](n)!D^{n},n=1,2,...\), so for \(\omega\leq g_{\pm}(C_{0},C_{1},D)\) we have the bound \(||\phi_{(0,n)}(\cdot)||_{0}\leq n!D^{n+1},n=1,2,...\), where if \(\Delta(C_{0},C_{1},D)=[2DC_{0}+C_{0}+C_{1}]^{2}-4(DC_{1}+2C_{0}^{2})<0\) we have for any \(\omega\) the bound and otherwise for \(\Delta(C_{0},C_{1},D)>0\) we have for \[\omega<g_{\pm}(C_{0},C_{1},D)=\frac{2DC_{0}+C_{0}+C_{1}\pm\sqrt{\Delta(C_{0},C_{1},D)}}{2(DC_{1}+2C_{0}^{2})}>0 \tag{2.71}\] This leads to \[||\phi_{(\varepsilon,n)}(x)||_{0}\leq\sum_{j=0}^{\infty}||\phi_{ (0,j,n)}(x)||_{0}\frac{\varepsilon^{j}}{j!} \tag{2.72}\] \[\leq\sum_{j=0}^{\infty}kj!D^{j}\frac{\varepsilon^{j}}{j!}||\phi_{ 0,n}||_{0},n=0,1,2,... \tag{2.73}\] converging in \(||\cdot||_{0}\) for any value of \(\omega\leq g_{\pm}(C_{0},C_{1},D),1/D>\varepsilon\geq 0\). So we have that the limit solution \(||\phi(\varepsilon,x)||_{0}<\infty\) is bounded everywhere for \((\omega,x,\varepsilon)\) in the allowed ranges. As the convergence is uniform we have that \(\phi(\varepsilon,x)\) is continuous on the specified ranges. In the same manner we have as a result that \(\phi(\varepsilon,x)\) belongs to \(L_{2}[0,1]\). We also have in the same manner that \(\sup_{0\leq x\leq 1}|\frac{\partial\phi_{0,n}}{\partial x}|\leq n!D_{1}^{n},n=2,3,...\) for \[\omega\leq g_{\pm}(C_{0}^{\prime},C_{1}^{\prime},D_{1})\] holds and which leads to the uniformly convergent series \[||\phi_{(0,n)}^{\prime}(x)||_{0}\leq\sum_{j=0}^{\infty}||\phi_{(0,j,n)}(x)||_{0}\frac{\varepsilon^{j}}{j!}\] \[\leq\sum_{j=0}^{\infty}k_{1}j!D_{1}^{j}\frac{\varepsilon^{j}}{j! }||\phi_{(0,n)}^{\prime}||_{0},n=0,1,2,... 
\tag{2.74}\] converging in \(||\cdot||_{0}\) for any value of \(\omega\leq g_{\pm}(C_{0}^{\prime},C_{1}^{\prime},D_{1}),1/D_{1}>\varepsilon\geq 0\). So we have that the limit solution \(||\phi^{\prime}(\varepsilon,x)||_{0}<\infty\) is bounded everywhere for \((\omega,x,\varepsilon)\) in the allowed ranges. As the convergence is uniform we have that \(\phi^{\prime}(\varepsilon,x)\) is continuous on the specified ranges. In the same manner we have as a result that \(\phi^{\prime}_{(\varepsilon,0)}(x)\) belongs to \(L_{2}[0,1]\). **Corrolary 2.3**: _Let \(f\), \(\Gamma_{0}\) be continuous and \(\Gamma_{1}\) to be \(L^{2}\) integrable. Let_ \[\sup_{0\leq x\leq 1}\frac{\partial\Gamma_{j}(x,y)}{\partial x}\leq C_{1}>0,\ \ j=0,1.\] _Then \(\Gamma(\varepsilon,x,y)\) defines a continuous, bounded \(C^{1}[0,1]\) solution for \(\varepsilon<\varepsilon_{0}>0\)._ **Proof:** If \(\Gamma_{0}\) is continuous the solution \(\phi(0,x)\) as in \(L^{2}[0,1]\) it implies that \(\phi(0,x)=f(x)+\omega\int_{0}^{1}\Gamma_{0}(x,y)\phi(0,y)dy\) as a sum of a continuous function and an integral with a continuous kernel is continuous. Applying theorem (2.6) we have the sought implication. **Corrolary 2.4**: _Let \(f\) continuous and \(\Gamma_{1}\) to be \(L^{2}\) integrable. Let_ \[\sup_{0\leq x\leq 1}\frac{\partial\Gamma_{j}(x,y)}{\partial x}\leq C_{1}>0,\ \ j=0,1.\] _Then \(\Gamma(\varepsilon,x,y)=\varepsilon I+\varepsilon(\Gamma_{1}-I)\) defines a continuous, \(C^{1}[0,1]\) bounded solution for \(\varepsilon<\varepsilon_{0}>0\)._ **Proof:** Immediate from the above corollary (2.3). We have the respective theorem as well. **Theorem 2.7**: _Let's consider the perturbed integral equation_ \[\phi(\varepsilon,x)-\omega\int_{0}^{1}\Gamma(\varepsilon,x,y)\psi(y,\phi( \varepsilon,y))dy=f(x),\ \ \ x\in[0,1],\varepsilon\geq 0. \tag{2.75}\] _where \(\Gamma(\varepsilon,x,y)=\Gamma_{0}(x,y)+\varepsilon\Gamma_{1}(x,y)\) is \(L^{1}\) integrable respectively functions of their variables and \(\int_{0}^{1}\int_{0}^{1}\Gamma(\varepsilon,x,y)dxdy<\infty\). Let_ \[\sup_{0\leq x\leq 1}\frac{\partial\Gamma_{j}(x,y)}{\partial x}\leq C_{2}>0,\ \ j=0,1.\] _Suppose that_ \[|\Gamma_{j}(\varepsilon,x,y)|\leq C_{j},\ \ \ \ \ 0\leq x,y\leq 1,\varepsilon\geq 0,j=0,1. \tag{2.76}\] _Assuming that the unperturbed equation for \(\varepsilon=0\) has a solution in \(C([0,1])\) then the perturbation series_ \[\phi_{(0,0)}(x)=\phi(0,x)=\omega\int_{0}^{1}\Gamma_{0}(x,y)\psi(y,\phi_{(0,0)}(y))dy \tag{2.77}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma_{1}(x,y)\psi(y,\phi_{(0, 0)}(y))dy\] \[+ \omega\int_{0}^{1}\Gamma_{0}(x,y)\psi^{(0,1)}(y,\phi_{(0,0)}(y)) \phi_{(0,1)}(y)dy\] \[.....\] \[\phi_{(0,\nu)}(x)\] \[=\omega\int_{0}^{1}\Gamma_{0}(x,y)\psi^{(0,\nu)}(y,\phi_{(0,0)}( y))P_{\nu+1}(y)dy\] \[+ \omega\int_{0}^{1}\Gamma_{1}(x,y)\psi^{(0,\nu-1)}(y,\phi_{(0,0)} (y))P_{\nu}(y)dy\] (2.79) \[P_{\nu}(y)=\] \[\sum_{\begin{subarray}{c}\Sigma_{j=1}^{\nu-1}\,r_{j}{}^{*}j=\nu- 1\\ r_{j}{}^{*}j=0,j=1,...,\nu-1\end{subarray}}^{\nu-1} \left(\begin{subarray}{c}\nu-1\\ r_{1},s_{1};\cdots;r_{\nu-1},s_{\nu-1}\end{subarray}\right)\prod_{j=1}^{\nu- 1}\phi_{(0,s_{j})}(x)^{r_{j}}(y),\nu=2,3,...\] (2.80) \[\left(\begin{subarray}{c}m\\ r_{1},s_{1};\cdots;r_{m},s_{m}\end{subarray}\right)=\frac{m!}{\prod_{j=1}^{m}( s_{j}!)^{r_{j}}}\] (2.81) \[\psi^{(k,\nu)}(s,\phi(\varepsilon,y))=\frac{\partial^{k}\partial ^{\nu}\psi(s,\phi(\varepsilon,y))}{\partial s^{k}\partial\varepsilon^{\nu}}, \ k,\nu=0,1,2,.. 
\tag{2.82}\] _converges absolutely and uniformly in \(||\cdot||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{1}\) norm to a \(C^{1}[0,1]\) solution_ \[\phi(\varepsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{\varepsilon^{j}}{j!} \tag{2.83}\] _which is continuous with respect to \((\omega,\varepsilon,x)\) for suitable ranges._ **Proof:** Along the same lines as in theorem (2.6). **Note:** We have corresponding results if \[\sup_{0\leq x\leq 1}\frac{\partial^{m}\Gamma_{j}(x,y)}{\partial x^{m}}\leq C_{m}>0,\ \ j=0,1,\ m=1,2,...,p.\] Then the respective nonlinear Fredholm equations have a \(C^{m}[0,1]\) solution for \(m=1,2,...,p\). ## 3 Solution procedure for nonlinear integral equations Here we investigate 'small' perturbations applied to the nonlinear term in which the unknown function appears, rather than to the kernel of the integral equation. Again we show, in theorem (3.1), that the perturbative series converges to a continuous solution. Then we show in theorem (3.2) below the existence of a maximal perturbation range for the perturbation parameter, using Zorn's lemma on an appropriate partially ordered set of partitions. **Theorem 3.1**: _Let's consider the nonlinear integral equation_ \[\phi(\varepsilon,x)-\omega\int_{0}^{1}\Gamma(x,y)\psi(\varepsilon,y,\phi(\varepsilon,y))dy=0,\ \ \ x\in[0,1],\varepsilon\geq 0 \tag{3.1}\] _where \(\Gamma(x,y)\) is an \(L^{2}\) integrable function of its variables and \(\int_{0}^{1}\int_{0}^{1}\Gamma(x,y)dxdy<\infty\). We have \(\psi(\varepsilon,y,z)=z+\varepsilon\Psi(y,z)\). Suppose that_ \[|\Gamma(x,y)|\leq C,\ \ \ \ \ 0\leq x,y\leq 1,\varepsilon\geq 0. \tag{3.2}\] _and \(\Psi^{(0,\nu)}(y,s)=\frac{\partial^{\nu}\Psi(y,s)}{\partial s^{\nu}}\leq\frac{b^{\nu}}{E(\nu,\nu)},0\leq b\leq 1,\ \ \ \ \nu=0,1,2,...\) where \(E(n,k)\leq\binom{2n-1}{n-1}\) is the number of integer solutions of the equation_ \[\sum_{j=1}^{k}r_{j}\cdot s_{j}=n,\qquad 0\leq r_{j},s_{j}\leq n,\ j=1,2,..,k. \tag{3.3}\] _If the integral equation_ \[\phi_{0}(0,x)=\phi(0,x)=\omega\int_{0}^{1}\Gamma(x,y)\phi_{0}(0,y)dy \tag{3.4}\] _has continuous solutions in \(C[0,1]\), then the integral equation (3.1) has a solution in \(C[0,1]\)._ **Proof:** Differentiating eq.(3.1) with respect to \(\varepsilon\) at \(0\), we get the series of integral equations \[\phi_{(0,0)}(x)=\phi(0,x)=\omega\int_{0}^{1}\Gamma(x,y)\phi_{(0,0)}(y)dy \tag{3.6}\] \[\phi_{(0,1)}(x)=\omega\int_{0}^{1}\Gamma(x,y)\phi_{(0,1)}(y)dy+\omega\int_{0}^{1}\Gamma(x,y)\Psi(y,\phi_{(0,0)}(y))dy\] \[.....\] \[\phi_{(0,\nu)}(x)=\omega\int_{0}^{1}\Gamma(x,y)\phi_{(0,\nu)}(y)dy+\omega\int_{0}^{1}\Gamma(x,y)\Psi^{(0,\nu-1)}(y,\phi_{(0,0)}(y))P_{\nu}(y)dy \tag{3.8}\] \[P_{\nu}(y)=\sum_{\begin{subarray}{c}\sum_{j=1}^{\nu-1}r_{j}s_{j}=\nu-1\\ r_{j},s_{j}\geq 0,\ j=1,...,\nu-1\end{subarray}}\left(\begin{subarray}{c}\nu-1\\ r_{1},s_{1};\cdots;r_{\nu-1},s_{\nu-1}\end{subarray}\right)\prod_{j=1}^{\nu-1}\phi_{(0,s_{j})}^{r_{j}}(y),\qquad\nu=2,3,... \tag{3.9}\] \[\left(\begin{subarray}{c}m\\ r_{1},s_{1};\cdots;r_{m},s_{m}\end{subarray}\right)=\frac{m!}{\prod_{j=1}^{m}(s_{j}!)^{r_{j}}} \tag{3.10}\] which are linear Fredholm integral equations of the second kind after the first. The first has as solutions the eigenfunctions \(\phi_{n},n=1,2,...\) of \(\Gamma(x,y)\) with associated eigenvalues \(\mu_{n},n=1,2,...\), under suitable assumptions on the kernel \(\Gamma\).
Equation (3.8) has solution as a second kind Fredholm integral equation of the form \[\phi_{n}(0,x)=f_{n}(x)+\omega\sum_{j=1,\omega\mu_{j}\neq 1} \frac{\mu_{j}}{1-\omega\mu_{j}}(f_{n},\phi_{j})\phi_{j}(x) \tag{3.11}\] \[f_{n}(x)=\omega\int_{0}^{1}\Gamma(x,y)\Psi^{(0,n-1)}(y,\phi_{0}( 0,y))P_{n}(y)dy,n=1,2,... \tag{3.12}\] Inductively we can show as in theorem (2.3) that \(||\phi_{(0,n)}(\cdot)||_{0}\leq\frac{\omega C}{1-\omega C}(n-1)!D^{n-1},n=1,2,...\), so for \(\omega\leq\frac{D}{C+CD}\) we have the bound \(||\phi_{(0,n)}(\cdot)||_{0}\leq n!D^{n},n=1,2,...\), This implies that under the assumption for continuity of solution in eq.(3.4) then the perturbation series solution converges absolutely and uniformly in \(||\cdot||_{0}=\sup_{x\in[0,1]}|\cdot(x)|\) norm and \(||\cdot||_{1}\) norm to a \(C^{1}[0,1]\) solution \[\phi(\epsilon,x)=\sum_{j=0}^{\infty}\phi_{(0,j)}(x)\frac{\epsilon^{j}}{j!} \tag{3.13}\] which is continuous with respect to \((\omega,\epsilon,x)\) for suitable ranges. More precisely it holds for \(\epsilon<\frac{1}{D}\) and then the solution is absolutely bounded by \(\frac{1}{1-D\epsilon}\) for all \(x\in[0,1]\). The question arises that as given a perturbative series solution to a non linear integral equation if the region of perturbation can be expanded further so as to extend the region of convergence of the series. The following theorem establishes the existence of a maximal perturbation along a univariate family of perturbation functions. **Theorem 3.2**: _Let's consider the nonlinear integral equation_ \[\phi(\epsilon,x)-\omega\int_{0}^{1}\Gamma(x,y)\psi(\epsilon,y,\phi(\epsilon,y ))dy=0,\ \ \ x\in[0,1],\epsilon\geq 0 \tag{3.14}\] _where \(\Gamma(x,y)\) is \(L^{2}\) integrable respectively functions of their variables and \(\int_{0}^{1}\int_{0}^{1}\Gamma(\epsilon,x,y)dxdy<\infty\). We have \(\psi(\epsilon,y,z)=z+\epsilon\Psi(y,z)\). Suppose that_ \[|\Gamma_{0}(\epsilon,x,y)|\leq C,\ \ \ \ \ 0\leq x,y\leq 1,\epsilon\geq 0. \tag{3.15}\] _and \(\Psi^{(0,\nu)}(y,s)=\frac{\partial^{\nu}\Psi(y,s)}{\partial s^{\nu}}\leq\frac {b^{\nu}}{E(\nu,\nu)},0\leq b\leq 1,\ \ \ \ \nu=0,1,2,...\) where \(E(n,k)\leq\binom{2n-1}{n-1}\) is the number of integer solutions of the equation_ \[\sum_{j=1}^{k}r_{j}\cdot s_{j}=n,0\leq r_{j},s_{j}\leq n,j=1,2,..,k. \tag{3.16}\] _If the integral equation_ \[\phi_{0}(0,x)=\phi(0,x)=\omega\int_{0}^{1}\Gamma(x,y)\phi_{0}(0,y)dy \tag{3.17}\] _has continuous solutions in \(C[0,1]\), then the integral equation (3.14) has a maximal perturbation solution in \(C[0,1]\)._ **Proof:** Let \(\frac{\partial^{j}\psi(\varepsilon,y,\phi(\varepsilon,y))}{\partial\varepsilon^ {j}}\leq B(j,\varepsilon))\) where \(B(j,\varepsilon)\leq Qb(\varepsilon)^{j})/E(j,j),Q>0,j=1,2,...\). Let a convex, respectively concave function \(f_{1},f_{2}:[a,b]\rightarrow\mathbf{R}\) on its domain of definition. Then we define the functional on its domain \[V_{1}(f_{1},P,a,b) = \sum_{j=0}^{n-1}\left|\frac{f_{1}(x_{j+1})-f_{1}(x_{j})}{x_{j+1} -x_{j}}\right| \tag{3.19}\] \[V_{2}(f_{2},P,a,b) = \sum_{j=1}^{n}\left|\frac{f_{2}(x_{j+1})-f_{2}(x_{j})}{x_{j+1}-x_ {j}}\right|\] (3.20) \[P = \{a=x_{0},x_{1},...,x_{k},..x_{n}=b\} \tag{3.21}\] We observe that as \(||P||=\max_{0\leq j\leq n-1}|x_{j+1}-x_{j}|\to 0\) as \(n\rightarrow\infty\) we have that \(\lim_{n\rightarrow\infty}V_{j}(f_{j},P,a,b)=\int_{a}^{b}|f_{j}^{\prime}(x)|dx, \;\;j=1,2\). 
So constructing the functional \(V(f,P,a,b)\) for a function \(f\) which is convex or concave on successive subintervals of its domain of definition \([a,b]\) as \[V(f,P,a,b)=\sum_{j=0}^{k}V_{i_{j}}(f_{i_{j}},P_{j},a_{j},b_{j}) \tag{3.22}\] \[\bigcup_{j=0}^{k}[a_{j},b_{j})=[a,b) \tag{3.23}\] \[f_{i_{j}}\;\;\;\mbox{convex on }[a_{j},b_{j}),\;\;\;\;\mbox{mod}(j,2)=0 \tag{3.24}\] \[f_{i_{j}}\;\;\;\mbox{concave on }[a_{j},b_{j}),\;\;\;\;\mbox{mod}(j,2)=1 \tag{3.25}\] \[\mbox{or reversely,}\;\;\;j=0,1,...,k. \tag{3.26}\] \[P=\bigcup_{j=0}^{k}P_{j},\qquad\mbox{span}(P_{j})=b_{j}-a_{j},\ j=0,...,k. \tag{3.27}\] We have that \(\lim_{n\rightarrow\infty}V(f,P,a,b)=\int_{a}^{b}|f^{\prime}(x)|dx\). Moreover, from the construction of the discrete functionals, the convergence is monotone decreasing. Additionally this allows monotonicity and convergence with respect to refinement of partitions of the interval \([a,b)\), and shall be used in the following construction. Given a function \(f:[0,\infty)\rightarrow\mathbf{R}\) we impose an order on the partitions of \(\mathbf{R}^{+}\), considering \(P\preceq Q\) if \(\mbox{span}(P)<\mbox{span}(Q)\), or if \(\mbox{span}(P)=\mbox{span}(Q)\) and \(V(f,P)>V(f,Q)\). Thus, as each chain of ascending partitions \(P_{0}\preceq P_{1}\preceq\cdots\preceq P_{m}\) has an upper bound with span \(\sup_{j=0}^{\infty}\operatorname{span}(P_{j})\), or, if the spans are finally equal, with value of the functional \(\liminf_{j\to\infty}V(f,P_{j})\), by Zorn's lemma the partially ordered set of chains has a maximal element \(P^{*}\). So we have a maximal perturbation range \(\operatorname{span}(P^{*})\). Suppose we achieve a perturbation of magnitude \(\varepsilon_{0}>0\), and from this value an additional perturbation of magnitude \(\varepsilon_{1}-\varepsilon_{0}\) with nonlinear term \(\psi(\varepsilon_{1},y,\phi(\varepsilon_{1},y))\). Continuing in this manner we achieve perturbations with magnitudes \(\varepsilon_{0}<\varepsilon_{1}<\cdots<\varepsilon_{m},m\in\mathbf{N}\). This defines a partition \(P=\{\varepsilon_{0},\varepsilon_{1},\cdots,\varepsilon_{m}\},m\in\mathbf{N},\) of \([0,\infty)\). We order these partitions, defined by successive perturbations of the integral equation (3.14), in the same way as above, using \(\operatorname{span}(P_{m})\) and \(V(\psi(\varepsilon,\cdot,\cdot),P)\), and we obtain by Zorn's lemma a maximal element \(P_{\psi}\) of this set of partitions, with maximal span and therefore maximal range. ## 4 Conclusions We investigate perturbative solutions for kernel-perturbed integral equations and prove convergence of the perturbation series in appropriate ranges. Next we investigate perturbation series solutions for nonlinear perturbations of integral equations of Hammerstein type and formulate conditions for their convergence. Then, applying Zorn's lemma to a construction for the successive 'small' perturbations of the perturbing function \(\psi(\varepsilon,y,z)\) with respect to the variable \(\varepsilon\), we obtain that there is a maximal span over which \(\varepsilon\) can vary and give finite solutions to the integral equation (3.14), which are useful for the practitioner. Furthermore we note that these methods can be extended to more general equations arising in mathematical physics and mechanics.
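To make the kernel-perturbation procedure concrete, the following is a small numerical sketch, not part of the original analysis, for the simplest case of theorem (2.7) with \(\psi(y,z)=z\): a linear Fredholm equation of the second kind \(\phi=f+\omega\int_{0}^{1}(\Gamma_{0}+\varepsilon\Gamma_{1})\phi\,dy\). In this linear case a short calculation reduces the recursion for the series terms to \(\phi_{(0,0)}=(I-\omega\Gamma_{0})^{-1}f\) and \(\phi_{(0,\nu)}=\nu\,(I-\omega\Gamma_{0})^{-1}\omega\Gamma_{1}\,\phi_{(0,\nu-1)}\). The kernels, the inhomogeneity \(f\), the value of \(\omega\) and the quadrature below are illustrative choices only.

```python
# A minimal numerical sketch of the kernel-perturbation series for the
# simplest (linear) case psi(y, z) = z.  The kernels, f, omega and the grid
# are illustrative choices, not objects taken from the paper.
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)                       # trapezoidal quadrature weights

Gamma0 = np.exp(-np.abs(x[:, None] - x[None, :]))        # unperturbed kernel
Gamma1 = np.outer(np.sin(np.pi * x), np.cos(np.pi * x))  # perturbing kernel
f = 1.0 + x**2                                           # inhomogeneity f(x)
omega, eps = 0.3, 0.2

K0 = Gamma0 * w[None, :]                           # Nystrom matrices
K1 = Gamma1 * w[None, :]
Id = np.eye(n)

# phi_(0,0): solution of the unperturbed equation phi = f + omega * K0 * phi.
phi0 = np.linalg.solve(Id - omega * K0, f)

# phi_(0,nu) = nu * (I - omega*K0)^{-1} * omega*K1 * phi_(0,nu-1), so the term
# phi_(0,nu) * eps^nu / nu! is obtained by repeatedly applying eps * M.
M = np.linalg.solve(Id - omega * K0, omega * K1)
series, term = phi0.copy(), phi0.copy()
for nu in range(1, 15):
    term = eps * (M @ term)
    series += term

# Direct Nystrom solution with the perturbed kernel, for comparison.
phi_direct = np.linalg.solve(Id - omega * (K0 + eps * K1), f)
print("max deviation of truncated series from direct solve:",
      np.max(np.abs(series - phi_direct)))
```

With kernels and \(\omega\) respecting the smallness conditions above, the truncated series agrees with the direct solve to high accuracy, mirroring the convergence statements of the theorems.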
2306.14242
Universality of satellites in the breakup of a stretched fluid bridge
As a fluid object breaks, it often leaves behind satellite fragments. Here we show that satellite formation can follow universal dynamics, leading to robust satellite sizes. Specifically, we consider the breakup of a slowly stretched fluid bridge, which we realize experimentally using a soap-film bubble suspended between two plates. Combining experiments and one-dimensional simulations, we show that a main satellite bubble always forms as the bridge breaks. We discover that the size of the bubble is highly reproducible and can be dramatically increased by stretching the bridge faster or increasing its volume. The satellite size is a simple function of two non-dimensional parameters: the normalized volume of the bridge and the Weber number, measuring inertia due to stretching as compared to surface tension. These observations can be explained by tracing the bridge evolution over a series of dynamical stages in which the bridge: (i) closely follows a sequence of equilibrium bridge configurations; (ii) stretches as it begins to breakup after reaching an unstable equilibrium; and (iii) follows a universal breakup solution. The last stage takes place over a finite region, the corresponding length scale determined by stretching during the previous stage. This length scale controls the satellite size, and the universality of the dynamics makes the system highly reproducible. This work suggests universal satellite formation dynamics may provide a route for understanding satellite bubble sizes in turbulent flows.
Anna Frishman, Daniel Lecoanet
2023-06-25T13:28:26Z
http://arxiv.org/abs/2306.14242v2
# Universality of satellites in the breakup of a stretched fluid bridge ###### Abstract The breakup of a fluid object is a singular process where the topology changes in finite time. As pinch-off is approached, its singular nature is expected to produce universal dynamics. Here we show that universality can emerge at a finite scale during breakup and control the size of satellites. Specifically, we consider the breakup of a slowly stretched fluid bridge, which we realize experimentally using a soap-film bubble suspended between two plates. Combining experiments and one-dimensional simulations, we show that a single satellite bubble always forms as the bridge breaks. We discover that the size of the bubble is highly reproducible and can be dramatically increased by stretching the bridge faster or increasing its volume. The satellite size is a simple function of two non-dimensional parameters: the normalized volume of the bridge and the Weber number, measuring inertia due to stretching as compared to surface tension. These observations can be explained by tracing the bridge evolution over a series of dynamical stages in which the bridge: (i) closely follows a sequence of equilibrium bridge configurations; (ii) stretches as it begins to breakup after reaching an unstable equilibrium; and (iii) follows a universal breakup solution. The last stage takes place over a finite region, the corresponding length scale determined by stretching during the previous stage. This length scale controls the satellite size, and the universality of the dynamics makes the system highly reproducible. This work suggests universal breakup dynamics may provide a route for understanding satellite bubble sizes in turbulent flows. ## I Introduction The fragmentation of fluid jets, drops and bubbles is central to many industrial and natural processes; from ocean-atmosphere exchange in breaking waves [1], to atomization processes as in ink-jet printers or the generation of emulsions [2]. Small satellite bubbles are often generated, either in individual breakup events or through a sequence of such events. Predicting the size and number distribution of satellites in fragmentation is thus a fundamental problem of practical importance. For breakup where viscous effects dominate over inertia, a rather detailed understanding of the sizes of satellites based on individual breakup events exists [3; 4], and their formation can be controlled to a high degree [5]. The situation is more complex when viscous forces are negligible compared to inertia (high Reynolds number flow) [6; 7; 8], especially for breakup driven by a turbulent flow [9; 10], which occurs on many length and time-scales and whose flow realizations are characterized statistically. The singular nature of breakup often gives rise to universality: the dynamics at the pinch-off point follow the same route, regardless of initial conditions and spatial details away from the pinch-off point.1 There are different classes of universal routes depending on which physical effects are at play. Here we focus on inertia-surface-tension driven breakup [14] (in which viscosity becomes important only at the late pinch-off stages). Satellites are known to form for such a breakup if the system is reflection-symmetric around the initial pinch-off point. In that case, the inertia of the breaking fluid builds up near the pinch-off point and leads to the splitting of the single pinch-off point into two, sealing a satellite bubble (or drop) upon breakup [15; 16; 17], see Figs. 1\((a)(b)\). 
At the very last stages of pinch-off, the localized dynamics around either one of the pinch-off points are expects to follow a universal route. However, a comparable universal picture for the satellite formation, taking place at an earlier stage and over a finite length scale as in the third panel in Fig. 1\((b)\), does not exist. Such universality could provide an important link for the characterization of breakup in turbulent flows, as the exact flow conditions are then unknown. Footnote 1: Exceptions to universal behaviour have also been observed [11; 12], e.g. due to asymmetries [13]. Here, we reveal that satellite formation from a stretched fluid element is controlled by an attracting universal dynamical trajectory, which is not a similarity solution. Specifically, we consider the classic problem of a fluid bridge held between two solid flat plates, a problem first studied by Plateau [18]. Since then, a vast literature on fluid bridges has developed, due to their ubiquity in nature and practical and fundamental importance, see [19] and references therein. Our focus here will be the breakup of a fluid bridge as it is slowly stretched by the movement of its end plates, previously considered in e.g. [16; 20; 21]. The breakup of the bridge always results in the formation of a satellite drop at its center, as in Fig. 1\((a)(b)\). Experimentally, we realize this set-up using a soap bubble placed between two plates, forming a soap-film bridge. The evolution is controlled by the dynamics of the air inside the bridge, which we model numerically using the slender-jet approximation [22]. Combining experiments and simulations, we discover that the volume of the forming satellite bubble can be dramatically increased by increasing the rate with which the plates are separated. Moreover, we demonstrate that the universality of the dynamics gives rise to highly reproducible satellite bubble sizes. ### The breakup of a soap-film bridge Each experiment is initiated with a cylindrical soap-film bridge which is suspended between two solid plates, as shown in Fig. 1\((a)\). The plates are pulled apart as described in Appendix A, increasing their separation, while the volume of air inside the bridge, \(V\), remains fixed (Fig. 1\((b)\)). Eventually, the bridge breaks, always leaving behind two spherical caps at the plates and one central satellite bubble, as in Fig. 1\((a)\) and Fig. 1\((b)\). The bridge remains axisymmetric throughout the dynamics, so its shape can be described by the radius along the shape, equivalent to the height \(h(z)\) as measured from the axis of symmetry in the images, see Fig. 1\((b)\). At the plates, the bridge remains practically pinned to its initial height, denoted by \(h_{0}\). In the following we non-dimensionalize all length-scales using \(h_{0}\). The breakup is controlled by the dynamics of the air inside the bridge, rather than by the solution within the soap-film, as shown in previous soap-film bridge experiments [23]. The only important property of the soap-film is the surface tension \(\sigma\) that acts between the air outside the bridge and inside it. Thus, the system can be characterized by two non-dimensional parameters. The first is geometrical: the initial length, or aspect ratio of the bridge, which can be written as \(L_{0}=V/(\pi h_{0}^{3})\). It encodes the conserved volume of the bridge. The second parameter measures the relative importance of inertia and surface tension, the Weber number \(\text{We}=\rho v_{p}^{2}h_{0}/\sigma\). 
Here \(2v_{p}\) is the speed with which the plates separate and \(\rho\) is the density of air inside the bridge. For each experiment, we measure \(v_{p}^{2}\) averaged over time to compute the corresponding We-number. While the definition of the We-number in experiments is not unique, any choice which is representative of the plate velocity at late stages of the dynamics will produce the scaling with We-number demonstrated below. Further discussion of this point can be found in Appendix A. Our experiments lie in the regime \(\text{We}\ll 1\), where surface tension dominates over the inertia imposed by the moving boundaries. We find that the volume of the satellite bubble, formed as a by-product of the breakup, increases with We and \(L_{0}\), as can be seen in Fig. 1\((a)\). Increase in the size of a satellite drop due to external stretching was previously observed in [7], though in a different context. We quantify the increase in size of the satellite in Fig. 1\((c)\) where the central bubble volume normalized by the initial volume, \(V_{b}/V\), is plotted as a function of We for (roughly) 3 values of \(L_{0}\). Note that for a given We (pulling protocol) and \(L_{0}\) the volume of the final bubble is extremely reproducible and robust. In particular, special care to control the airflow around the bridge or have the same precise velocity profile for the plates is not required to reproduce the same values. Furthermore, the final fractional volume, \(V_{b}/V\), is a function of a single parameter \(\text{We}\,L_{0}^{4/3}\), instead of two, as seen in Fig. 1\((d)\). Below we will describe the dynamical stages leading to the breakup (not including the final pinch-off stage itself, which was treated in detail previously [14]), then we will show that the dynamics at the center of the bridge become universal as pinch-off is approached, explaining the observed collapse with \(\text{We}\,L_{0}^{4/3}\). Under this universal behavior, \(V_{b}/V\sim(\text{We}\,L_{0}^{4/3})^{3/2}\) for sufficiently large We, consistent with Fig. 1\((d)\). ### Modelling We model the dynamics of the air inside the bridge using the slender jet approximation [22], which here can be justified both by the axisymmetry of the bridge, and by the fact that it becomes increasingly slender as the dynamics proceeds (Appendix B). In non-dimensional form, normalizing time by the surface-tension time-scale \(t_{\sigma}=\sqrt{\rho h_{0}^{3}/\sigma}\), the inviscid equations read \[\partial_{t}v+vv_{z}=-p_{z} p=\frac{1}{h\sqrt{1+h_{z}^{2}}}-\frac{h_{zz}}{(1+h_{z}^{2})^{3/2}} \tag{1}\] \[\partial_{t}h+vh_{z}=-v_{z}\frac{h}{2}\] (2) \[h\left(t,\frac{\pm L(t)}{2}\right)=1 v\left(t,\frac{\pm L(t)}{2}\right)=\pm\sqrt{\text{We}} \tag{3}\] where \(h(z)\) is the free surface height, see also Fig. 1\((b)\), \(v(z)\) is the axial velocity of the air inside the bridge, the non-dimensional bridge length is \(L(t)=L_{0}+2t\sqrt{\text{We}}\), and the non-dimensional speed of the boundary is \(v_{p}=\sqrt{\text{We}}\). Partial derivatives with respect to \(z\) are denoted by a subscript. The momentum balance appears in equation (1), with inertia on the one hand and the pressure difference due to surface tension, proportional to the mean curvature, on the other. Viscous effects are neglected (Appendix A). Equation (2) represents conservation of mass \(\propto h^{2}\) inside the bridge. While the soap film is free to slip on the plates in our experiments, slippage is typically slower than stretching, and is hence negligible in most of our experiments. 
This justifies using pinned boundary conditions for the height in our simulations, Eq. 3. More details can be found in Appendix A. We solve Eqs. 1-3 numerically (Appendix C). The fractional volume of the satellite bubble in the simulations is presented in Fig. 1\((e)\). As was the case in experiments, the data collapses to a single curve when plotted as a function of \(\text{We}L_{0}^{4/3}\). It can also be seen that, when presented as a function of \(\text{We}L_{0}^{4/3}\), a dependence on \(L_{0}\) emerges at very small We-numbers (we do not have data from several \(L_{0}\) at these values of \(\text{We}L_{0}^{4/3}\) in the experiments). This feature will be explained in the following. A quantitative comparison between experiments and simulations, shown in the inset of Fig. 1\((e)\), reveals that the numerical curves are shifted downward compared to the experimental results. Such a difference could be due to several physical aspects in which the detailed dynamics in experiments and simulations are distinct. First, the stretching protocol is not the same: while in simulations the edges are pulled with a constant speed throughout, in experiments that speed invariably changes in time (making the We-number definition dependent to an extent), potentially with asymmetries between the two plates, see the Appendix. In addition, the boundary conditions are not quite the same, as there is some slippage of the edges of the bridge in experiments while those are pinned in simulations (Appendix A). Lastly, we neglect the inertia of the soap film, viscous effects and gravity in our numerical modeling, effects expected to be small until pinch-off is reached (Appendix A), but which could still contribute to quantitative differences. While it may be interesting to explore these differences in future work, here we would like to emphasize that the robust dynamical features of the breakup, such as the scaling of various quantities with \(L_{0}\) and \(\text{We}\), are shared between experiments and simulations, reflecting a shared universal breakup route. In particular, as shown in the Appendix, the functional dependence of \(V_{b}/V_{i}\) on \(\text{We}L_{0}^{4/3}\) is identical. The correspondence between the numerical and experimental results found here suggests the applicability of our findings to a general stretching fluid bridge, including a capillary liquid bridge. This is provided that the liquid bridge is in the regime where viscosity and gravity effects can be neglected (\(\text{Re}\gg 1\) and \(\text{Bo}\ll 1\)), but with low stretching speeds (\(\text{We}\ll 1\)). While satisfying Figure 1: \((a)\) Soap-film bridge experiments. Left panels show initial conditions with different \(L_{0}\), the ratio of the initial length to the radius \(h_{0}\) of the cylindrical bridge. Right panels show the final state with a central satellite bubble for experiments with different \(L_{0}\) and Weber number We. We study bridges with typical radii \(\sim 1\,\text{cm}\). \((b)\) Snapshots of a bridge with \(L_{0}=2.4\) pulled apart with \(\text{We}=0.02\) to form a central satellite bubble. The snapshots are evenly spaced in time. \((c)\) The volume of the satellite bubble normalized to the initial bubble volume as a function of \(\text{We}\); different colors show different \(L_{0}\). \((d)\) The normalized satellite bubble volume from experiments plotted as a function of \(\text{We}L_{0}^{4/3}\). At large \(\text{We}\), we find \(V_{b}/V\sim(\text{We}\,L_{0}^{4/3})^{3/2}\) (dashed line). 
\((e)\) The normalized satellite bubble volume from simulations plotted as a function of \(\text{We}L_{0}^{4/3}\). In both experimental and simulation data, the normalized volume of the satellite bubble are functions of the single parameter \(\text{We}L_{0}^{4/3}\). The inset shows both experiment and simulation results; bubbles in simulations are systematically smaller than in experiments. We \(\ll 1\) for a liquid bridge requires a smaller set-up and slower speeds than those used in our experiments, it is still possible to keep \(\mathrm{Re}\gg 1\),2 though this regime has not been explored in depth [15]. Footnote 2: For example, for a water bridge with plate velocities of the order of \(v_{p}\sim 10\)cm/s and boundary radius \(h_{0}=1\,\)mm, one finds \(\mathrm{We}\sim 10^{-1}\) and \(\mathrm{Re}\sim 10^{2}\). ## II Stages of the dynamics The dynamics of the bridge can be divided into three distinct stages, as summarized in Fig. 2. The initial stage is quasi-static: the shape of the bridge follows a sequence of static equilibrium shapes (minimal surfaces) dictated by the volume and momentary length \(L(t)\). This stage ends once the bridge reaches the critical length \(L_{u}\), at which the corresponding static equilibrium shape is unstable. The bridge then enters the second stage, where the instability develops: a pressure imbalance develops in a localized region around the center, and drives the breakup of the bridge. This stage occurs during a time \(\Delta t_{b}=O(1)\) (i.e. of order the surface tension time \(t_{\sigma}\)), independent of the stretching, and proportional to the normalized volume \(L_{0}\). During this stage, the main effect of the pulling of the bridge edges is to stretch the localized breakup region. Dynamically, this stage marks the onset of instability, and by the end of it the central breakup region is attracted towards a universal breakup trajectory. In the final stage of evolution, the central region evolves along a universal trajectory that leads to break-up. We find that all our experiments and simulations follow the same final evolution when properly rescaled by a single length scale. This universal breakup stage is very rapid compared to the previous stage. The size of the satellite bubble--the end result of the breakup as seen in Fig. 1--is then determined through a combination of the first two stages. As the dynamics in the last stage are universal, the bubble size is controlled by a single length scale. That length scale is related to the unstable equilibrium length, \(L_{u}\), plus the stretching of the breakup region during the time \(\Delta t_{b}\) in the second stage. The added length varies with We-number and \(L_{0}\): as the We-number is increased the stretching rate is increased, while increasing \(L_{0}\) prolongs the stretching time \(\Delta t_{b}\), i.e., it slows down the approach to the universal dynamics. ### Stage I: Quasi-static dynamics For \(\mathrm{We}\ll 1\), stresses from surface tension dominate over inertia. Thus, initially there is no leading order flow inside the bridge, and the bridge takes an equilibrium shape with a constant mean curvature (pressure), i.e., \(p_{z}=0\). Such shapes are minimal surfaces under the constraint of a fixed volume \(V\), here parametrized by \(L_{0}\), and aspect ratio (normalized length) \(L(t)\). Generally these are not zero mean curvature shapes. 
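These constant-mean-curvature shapes are straightforward to generate numerically: setting \(p_{z}=0\) in the pressure expression of Eq. (1) gives the profile equation \(h_{zz}=(1+h_{z}^{2})/h-p\,(1+h_{z}^{2})^{3/2}\) with constant \(p\), and a shooting method fixes the two unknowns \(h(0)\) and \(p\) from the pinning condition and the volume constraint. The sketch below is ours and only illustrative; the aspect ratio, volume and initial guesses are not values taken from the paper.

```python
# Sketch: a constant-pressure (constant mean curvature) bridge equilibrium,
# found by shooting on h_zz = (1 + h_z^2)/h - p (1 + h_z^2)^(3/2) from the
# symmetric midpoint.  Lengths are in units of h0; L, L0 and the initial
# guesses are illustrative values only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

L, L0 = 1.6, 1.2            # current length and normalized volume (V = pi * L0)

def rhs(z, y, p):
    h, hp, vol2 = y          # vol2 accumulates 2 * integral of h^2 dz from 0 to z
    hpp = (1.0 + hp**2) / h - p * (1.0 + hp**2) ** 1.5
    return [hp, hpp, 2.0 * h**2]

def residual(unknowns):
    h_mid, p = unknowns      # height at the center and the constant pressure
    sol = solve_ivp(rhs, [0.0, L / 2], [h_mid, 0.0, 0.0], args=(p,),
                    rtol=1e-10, atol=1e-12)
    h_end, _, vol2 = sol.y[:, -1]
    return [h_end - 1.0, vol2 - L0]   # pinned at the plate; fixed volume

h_mid, p = fsolve(residual, x0=[0.8, 1.0])
print(f"equilibrium shape: h(0) = {h_mid:.4f}, pressure p = {p:.4f}")
```

Following this family of solutions as \(L\) is increased, until no solution of the shooting problem remains, traces out the quasi-static stage and locates \(L_{u}(L_{0})\), in line with the discussion above.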
Such equilibria have been extensively studied in the past [24; 25], and it is known that there is a unique stable (connected shape) equilibrium for each pair \(V,L\) up to a critical aspect ratio \(L_{u}(L_{0})\) (Appendix B). At \(L_{u}\) the equilibrium solution becomes unstable, while for \(L>L_{u}\) a constant mean curvature solution with the given volume no longer exists. Thus, the first stage of the dynamics of the bridge is a transition between a sequence of equilibrium shapes, ending at the critical shape with \(L(t)=L_{u}\) which is the starting point for the instability and subsequent breakup. We show the close agreement between the simulations when \(L\lesssim L_{u}\) and the sequence of equilibrium bridge shapes in the Appendix. We also demonstrate below and in the Appendix that in the experiments and simulations the breakup dynamics indeed begins once \(L(t)\approx L_{u}\). ### Stage II: Stretching and instability onset When \(L\geq L_{u}\), an unstable narrow neck forms at the center of the fluid bridge. In our simulations we observe that the over-pressure at this point drives the fluid away from it, further decreasing the height of the neck at that point, which in turn increases the over-pressure and so on. However, when the flow away from the point of minimal height becomes fast enough, fluid starts to escape the entire central region, tending to flatten its height, an effect observed both in experiments (Fig. 1\((b)\)) and in simulations (Fig. 2). This is roughly when the second stage ends. We now wish to demonstrate, based on results from experiments and simulations, that the duration of the second stage, \(\Delta t_{b}\), is independent of the We-number, and increases with increasing volume \(L_{0}\). While the precise boundary between stage II and stage III is hard to determine, we will show in the next section that stage III is much faster than stage II. Thus, we can estimate \(\Delta t_{b}\) using the difference between the time at which the bridge achieves the critical length \(L_{u}\), and the pinch-off time \(t_{f}\), i.e., when the bridge reached its final length \(L(t)=L_{f}\). Assuming the bridge is stretched at a constant velocity equal to \(2\sqrt{\mathrm{We}}\)3, we can translate the time interval to a length increment, expecting that \(L_{f}=L_{f}^{0}+2\sqrt{\mathrm{We}}\Delta t_{b}\) where \(L_{f}^{0}\approx L_{u}\) and \(\Delta t_{b}\propto L_{0}\). Note that such a form implies that the breakup dynamics only set in when \(L(t)\sim L_{u}\), the bridge dynamics prior to this time being irrelevant. Footnote 3: While this is only roughly true in experiments, the plate velocity saturates by the beginning of the second stage. We demonstrate the expected linear dependence \(L_{f}=L_{f}^{0}+b\sqrt{\mathrm{We}}\,L_{0}\) in Fig. 3\((a)\) where we present \(L_{f}\) as a function of \(\sqrt{\mathrm{We}}\) for the experimental data, with the corresponding linear fit overlayed. For \(L_{0}=1.1\) and the lowest We-numbers, we observe some slippage of the soap film edges during the evolution. This effect would bias the above fit (see discussion in Appendix A), so we remove the three lowest We in our fit (for which slippage is most prominent, see the Appendix), but include the points in Fig. 3\((a)(b)\). As expected, experiments with different \(L_{0}\) have a different slope, and cross the y-axis at different points, corresponding to \(L_{f}^{0}\). Then, in Fig. 
3\((b)\) we present \(L_{f}-L_{f}^{0}\) as a function of \(\sqrt{\mathrm{We}}\,L_{0}\) for the experimental data, showing they all collapse, i.e. have the same slope \(b\). The same features are observed in simulations, presented alongside experiments in the inset of Fig. 3\((b)\). The data confirm that the time it takes for the bridge to break starting from \(L_{f}^{0}\) is independent of We-number, i.e. is not influenced by the stretching. Furthermore, it scales linearly with \(L_{0}\), being slower the larger the volume is--an observation we do not have an explanation for.

Figure 3: \((a)\) The final bridge length \(L_{f}\) as a function of \(\sqrt{\mathrm{We}}\). We find \(L_{f}\) is linearly proportional to \(\sqrt{\mathrm{We}}\), but with a slope and constant offset \(L_{f}^{0}\), both of which depend on \(L_{0}\). Dots show the ensemble average of experimental results with error bars representing one standard deviation. \((b)\) \(L_{f}-L_{f}^{0}\) as a function of \(\sqrt{\mathrm{We}}\,L_{0}\) for experiments with different initial aspect ratios \(L_{0}\). The inset includes simulation data, depicted with lines for different \(L_{0}\). While \(L_{f}-L_{f}^{0}\) is linearly proportional to \(\sqrt{\mathrm{We}}\,L_{0}\) in both experiments and simulations, the constant of proportionality is larger in experiments than in simulations.

Figure 2: Sketch of stages of the dynamics. Data is taken from the evolution of a simulated liquid bridge with \(\mathrm{We}=0.07\) and \(L_{0}=1.8\); all variables are non-dimensionalized with \(h_{0}\) and the surface-tension time-scale. The line shows \(h_{zz}\) at the center of the bridge; at representative points denoted with dots, we plot the full shape of the bridge. In the first stage of evolution (blue), the bridge follows a sequence of equilibrium shapes. The unstable equilibrium is reached once \(L(t)=L_{u}\). This begins the second stage of evolution (yellow) which lasts for \(\Delta t_{b}=\mathcal{O}(1)\). In the second stage, the central region of the bridge stretches as the breakup process begins. Finally, the central region approaches a universal breakup trajectory, which is the third stage of evolution (red). The universal trajectory is plotted as a dashed line.

While the linear dependence \(L_{f}=L_{f}^{0}+b\sqrt{\mathrm{We}}\,L_{0}\) is found both in experiments and simulations, the slopes \(b\) differ (Fig. 3\((b)\)). We obtain \(b=2.4\) in experiments and \(b=1.4\) in simulations (see Table 1 and the Appendix), implying that the second stage of the dynamics is roughly twice slower in experiments compared to simulations. This may be related to differences between experiments and simulations in the bridge dynamics near the boundaries. However, these details do not change the robust features of the dynamics, explained below, which give rise to the scaling of the satellite bubble with \(\sqrt{\mathrm{We}}\,L_{0}\). We verify that indeed \(L_{f}^{0}\sim L_{u}\), comparing \(L_{f}^{0}\) to the expected critical length \(L_{u}(L_{0})\) in the Appendix. Because the breakup time \(\Delta t_{b}\) is independent of We-number, the time it takes for the dynamics to converge to the third, universal, stage is unaffected by the stretching of the bridge and the externally induced inertia.
That the latter does not affect the dynamics can be understood in the following way: initially, we expect breakup to be driven by fluid acceleration due to the pressure imbalance, and both the advection term and the inertia induced by the stretching boundaries to be negligible. The fluid will then accelerate until advection becomes as important as pressure gradients, while inertia from stretching will remain subdominant. To see this, let us rewrite equations (1)-(3) in a coordinate system \((s,\tau)\) where the inertia from stretching of the bridge is more evident: \(z=\frac{L(t)}{2}s,\quad\tau=t\) so that \(\partial_{s}=\frac{L(t)}{2}\partial_{z},\quad\partial_{\tau}=\partial_{t}+s \sqrt{\mathrm{We}}\partial_{z}\). Using the modified velocity \(u=v-s\sqrt{\mathrm{We}}\), we arrive at \[l(\tau)\partial_{\tau}u+u(\partial_{s}u+\sqrt{\mathrm{We}})=-p_{s}\] \[p=\frac{l(\tau)}{h\sqrt{l(\tau)^{2}+h_{s}^{2}}}-\frac{l(\tau)h_{ ss}}{(l(\tau)^{2}+h_{s}^{2})^{3/2}}\] \[l(\tau)\partial_{\tau}h+u\partial_{s}h=-\frac{h}{2}(\partial_{s }u+\sqrt{\mathrm{We}})\] \[h(\tau,\pm 1)=1 u(\tau,\pm 1)=0 \tag{4}\] with \(l(\tau)\equiv\frac{L(\tau)}{2}=\frac{L_{0}}{2}+\tau\sqrt{\mathrm{We}}\). We see that if \(p_{s}\sim u\partial_{s}u\) while \(\partial_{s}u\gg\sqrt{\mathrm{We}}\) (equivalently \(\partial_{s}v=l(t)\partial_{z}v\gg\sqrt{\mathrm{We}}\)), the inertial term would no longer be negligible but the explicit dependence on We-number would still disappear from the equations. Indeed, this is what happens at the end of the second stage, as a fast outflow develops in the central region (note the flattening of the free surface of the bridge) implying \(\partial_{z}v\gg 1\) there. On the other hand, this condition is not satisfied close to the boundaries: from equation (2) and the boundary conditions (3), the condition \(\partial_{s}u=-\sqrt{\mathrm{We}}\) should be dynamically satisfied there (reflecting that the surface is pinned at the boundaries so that the radial velocity should remain zero there). Thus, there exists a region of finite extent around the bridge center, whose extent we denote by \(z_{c}\), where the breakup process takes place and is not directly affected by the external We-number dependent flow. Note however that the dynamics of this region does differ between the different We-numbers. Indeed, the stretching of the bridge influences the evolution due to the change in the bridges length, as is seen from the dependence of Eqs. 4 on \(l(\tau)\). Thus, it is surprising that the duration of this stage is roughly independent of the We-numbers, and is a feature that remains to be explained. ### Stage III: Universal breakup dynamics Stage III begins after advection becomes important, leading to the eventual breakup of the bridge. Advection causes fluid to escape the breakup region and flatten it, eventually decreasing the pressure in this region and making the surface overturn. Thus, while initially there is a single minimum point of the height, corresponding to a single pinch-off point, this point splits into two, which seal a satellite bubble in between them. Note that this relies on a reflection symmetry around the initial height minimum, so that \(v=0\) there (at least approximately) and the point is not advected by the flow, see further discussion in Appendix A. The formation of satellite bubbles due to such inertial effects has been previously discussed for the breakup of soap-film and liquid bridges, as well as gas bubbles [15; 16; 17]. 
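The volume sealed between these two pinch-off points is the satellite volume; from a simulated profile \(h(z)\) at the pinch-off time it can be read off by locating the two height minima and integrating the volume of revolution between them. The following is a minimal sketch of such a measurement (the stand-in profile and array names are illustrative, not the paper's analysis code):

```python
# Sketch: read off the satellite volume sealed between the two pinch-off
# points of a final, reflection-symmetric profile.  The arrays z and h are
# assumed to hold one simulated profile at the pinch-off time; the analytic
# test profile below only stands in for such data.
import numpy as np
from scipy.signal import argrelmin
from scipy.integrate import trapezoid

z = np.linspace(-1.0, 1.0, 2001)
h = 0.05 + 0.4 * np.exp(-((z / 0.15) ** 2)) + 0.5 * z**2   # stand-in profile

(minima,) = argrelmin(h)                 # indices of the local height minima
assert len(minima) >= 2, "no double pinch-off in this profile"
left, right = minima[0], minima[-1]

# Volume of revolution enclosed between the two pinch-off points.
V_b = np.pi * trapezoid(h[left:right + 1] ** 2, z[left:right + 1])
print(f"satellite volume V_b = {V_b:.4e} (in units of h0^3)")
```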
We now wish to demonstrate that the breakup dynamics follow a universal time-dependent trajectory. Namely, when (advective) inertial effects become important, a finite region forms near the center of the bridge where, if time and length scales are properly rescaled, the height and velocity dynamics follow a universal trajectory independent of We and \(L_{0}\). We begin by theoretical considerations which demonstrate that such a universal localized description can emerge for the system (4). We have argued above that there exists a region of finite extent, denoted by \(z_{c}\), where the explicit dependence on We-number drops from the dynamics in Eq. (4). We next assume stage III happens much faster than the surface tension time \(t_{\sigma}\) (e.g., see Fig. 2), so we can neglect the effect of stretching. In that case, \(l(\tau)\approx\mathrm{Const}\) appears as a parameter in the equations. Note that in the limit \(\partial_{s}u\gg\sqrt{\mathrm{We}}\), also \(u\approx v\) so that at leading order we can use \(u=v\) in the breakup region. We expect universal behaviour to emerge locally in the breakup region in the limit where the height there becomes vanishingly small. We shall thus assume that there is a single scale, \(h_{c}\ll 1\), controlling the dynamics. To be concrete, we take \(h_{c}=h(t_{c},0)\) the height at the center of the liquid bridge at the time \(t_{c}\) when the surface flattens and \(h_{zz}(t_{c},0)=0\), as shown in Fig. 2. Indeed, by the time \(t_{c}\) we expect the dynamics to have entered stage III for all We and \(L_{0}\). Note that here we normalize by a scale \(h_{c}\) which is fixed in time, so the resulting solution will not be a similarity solution. We thus use the ansatz \(h=h_{c}H\left[\frac{t_{f}-t}{t_{c}},\frac{s}{s_{c}}\right]\) and \(u=u_{c}U\left[\frac{t_{f}-t}{t_{c}},\frac{s}{s_{c}}\right]\). Plugging it into equation (4) with \(\text{We}=0\), and requiring that all terms are of the same order in the limit \(h_{c}\to 0\), we get that \(h_{c}=ls_{c}\), \(u_{c}=ls_{c}/t_{c}\) and \(t_{c}=h_{c}^{3/2}\). Although the We-number based on the plate velocity is assumed to be small, here inertia and surface tension terms are comparable. That is because the unstable pressure imbalance drives a fast flow with a local We-number \(\mathcal{O}(1)\). The scaling we find indeed corresponds to the surface tension-inertia scaling [14] based on the length scale \(h_{c}\). Also note that this temporal scaling is consistent with our assumption that bridge is hardly stretched during stage III, compared to the change in its length during stage II: since \((t_{f}-t)\propto h_{c}^{3/2}\ll\Delta t_{b}=O(1)\) the dynamics in stage III occur on a fast time-scale (compared with \(t_{\sigma}\)) and the change in \(l\) is negligible. We thus obtain the following universal form \[\begin{split}& h(t,z)=h_{c}H\left[\frac{t_{f}-t}{h_{c}^{3/2}}, \frac{ls}{h_{c}}\right]=h_{c}H\left[\frac{t_{f}-t}{h_{c}^{3/2}},\frac{z}{h_{c} }\right]\\ & v(t,z)=u(z,t)=h_{c}^{-1/2}U\left[\frac{t_{f}-t}{h_{c}^{3/2}}, \frac{z}{h_{c}}\right]\end{split} \tag{5}\] with the functions \(H[T,Z]\) and \(U[T,Z]\) satisfying equations (1)-(2), where the scale \(l\) does not appear explicitly. Recall that this calculation is for the breakup region, and so we must supplement these equations with boundary conditions at \(z=z_{c}\). To determine those we assume that \(z_{c}/h_{c}\gg 1\) so that in terms of the variable \(Z\) the boundary is at infinity. 
The solution at the breakup region should match the external solution, where we assume that the velocity is finite, i.e. that \(u(t,z_{c})\) does not scale with \(h_{c}\). We thus get the boundary condition \(U\left[T,Z\right]=u(t,z_{c})/u_{c}=h_{c}^{1/2}u(t,z)\to 0\) as \(Z\rightarrow\infty\). Next, assuming that the height of the external solution has a finite curvature, i.e. that \(\partial_{zz}h(t,z_{c})\) is independent of \(h_{c}\), we get the condition \(\partial_{ZZ}H[T,Z]\to h_{c}\partial_{zz}h(t,z_{c})=0\) as \(Z\rightarrow\infty\). The matching is thus to be done at the inflection point of the external solution. Equivalently, this implies the boundary condition \(H[T,Z]/Z\to O(1)\) as \(Z\rightarrow\infty\). To summarize, under these assumptions, the boundary conditions for \(H,U\) which replace (3) are \[H[T,Z\rightarrow\infty]\propto Z\hskip 28.452756ptU\left[T,Z\rightarrow\infty \right]=0 \tag{6}\] Figure 4: \((a)\) Bridge shape at the pinch-off time in experiments (solid lines), and simulations (dashed lines), for \(L_{0}\approx 1.1\) and various We. When normalized by \(h_{b}\), the bridge shapes of the experiments and simulations at different We collapse near the central region. \((b)\) Bridge shapes in simulations at the pinch-off time, with different \(L_{0}\) (colors) and We (high curves correspond to lower We). The central region of the bridges collapses to a universal shape independent of \(L_{0}\) and We. \((c)\) Bridge shapes in simulations at \(t_{c}\), the time at which \(h_{zz}(z=0)=0\), with different \(L_{0}\) and We. \((d)\)\(h_{zz}(z=0)\) as a function of time, both normalized using \(h_{c}\). Dots represent the trajectories of different simulations with \(L_{0}=1.5\), colored by their We. For the highest We, we plot a black cross at the time of the last stable equilibrium when \(L(t)=L_{u}\). While simulations with different We initially have different curvatures, they are attracted to a universal breakup solution prior to \(t_{c}\) where \(h_{zz}=0\). Equations (1)-(2) which \(H,Z\) satisfy together with the boundary conditions (6) are seen to be universal, independent of external parameters. We argue that in stage III the dynamics in the breakup region are captured by the attracting dynamical solution of this system. We first check the universality of the shape of the bridge in the central, breakup, region at the pinch-off time \(t_{f}\). In Fig. 4\((a)(b)\), we present results from experiments and simulations, where the height \(h\) of the bridge and space \(z\) are both rescaled using \(h_{b}\)--the height at the center of the bridge at this time. We see a very good collapse of the shape in the central region, both when we fix the volume \(L_{0}\) but vary the We-number, Fig. 4\((a)\), and when both \(L_{0}\) and We-number are varied, Fig.4\((b)\) (see the Appendix for the corresponding experimental results). The collapse is only seen in the central region, not near the boundaries. While there is good agreement between experiments and simulations for the shape of the bridge in the central region in Fig. 4\((a)\), the bridges take quite different shapes outside this region. To show that the collapse is not simply dictated by properties of the shape at the final time, in Fig. 4\((c)\) we present the shape of the bridge in our numerical simulations at the time \(t_{c}\) when the surface flattens. This is the earliest time when we can easily determine that the bridge is in stage III. 
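In practice, testing for this collapse amounts to rescaling each profile by its central height and overlaying the results. A minimal sketch of the operation follows; the self-similar test family below only stands in for simulated profiles at the corresponding times.

```python
# Sketch: the collapse test behind Fig. 4 amounts to rescaling each profile by
# the height at the bridge center.  The test family below only stands in for
# simulated profiles taken at the corresponding times.
import numpy as np

def rescale(z, h):
    """Return (z/h_b, h/h_b), with h_b the height at the center z = 0."""
    h_b = h[np.argmin(np.abs(z))]
    return z / h_b, h / h_b

z = np.linspace(-1.0, 1.0, 2001)
Z_common = np.linspace(-2.0, 2.0, 101)
rescaled = []
for h_b in (0.2, 0.1, 0.05):                   # stand-in central heights
    h = h_b * (1.0 + 0.5 * (z / h_b) ** 2)     # profiles of the form h_b * F(z/h_b)
    Z, H = rescale(z, h)
    rescaled.append(np.interp(Z_common, Z, H))

# Profiles sharing a universal shape fall onto a single curve after rescaling.
print("max spread between rescaled profiles:", np.max(np.ptp(rescaled, axis=0)))
```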
We again rescale both axes by \(h_{b}\); the collapse of the central region is evident. In particular, as expected \(h_{b}\propto h_{c}\), which we also observe in the experiments, see Appendix. Finally, to demonstrate the attracting nature of the universal trajectory, and that its dynamics is indeed universal, in Fig. 4\((d)\) we plot the curvature at the center of the bridge in rescaled coordinates based on Eq. (5), with \(h_{c}=h(t_{c},z=0)\) as above. Under this rescaling, trajectories with different We all asymptotically converge to the same universal curve, corresponding to stage III. We observe that simulations with a higher We-number approach this curve later in (rescaled) time, but that by the time \(t_{c}\) all our simulations have entered stage III. ## III Scaling of the satellite bubble volume Here we combine our insights from stage II and stage III to explain the observations for the volume of the satellite bubble in Fig. 1. First, from the universality of the breakup region in stage III, we expect that the volume of the satellite bubble is determined by a single scale, i.e. \(V_{b}\propto h_{c}^{3}\) with a universal constant of proportionality. In Fig. 5\((a)\) we show this is indeed the case. This ratio is independent of We and \(L_{0}\), and is similar for experiments and simulations. This further confirms that the experiments and simulations follow the same universal dynamics, controlled by a single scale which is set at least as early as \(t_{c}\). Given \(h_{c}\) the volume of the final bubble can be predicted, so the scaling of the volume with We and \(L_{0}\) must be inherited from that of \(h_{c}\). We present \(h_{c}\) as a function of the We number and \(L_{0}\) in Fig. 5\((b)\), revealing a linear dependence, \[h_{c}=h_{c}^{0}+a\sqrt{\mbox{We}}\,L_{0} \tag{7}\] of the same type as we had demonstrated for \(L_{f}\). The slope \(a\), tabulated in Table 1, differs slightly between simulations and experiments. This dependence of \(h_{c}\) can now be used to explain the collapse observed in Fig. 1\((c)\): \(V_{b}/V_{i}\propto h_{c}^{3}/L_{0}=(h_{c}^{0}+a\sqrt{\mbox{We}}\,L_{0})^{3}/L_ {0}\), which for \(\sqrt{\mbox{We}}\,L_{0}\gg h_{c}^{0}/a\) gives \(V_{b}/V_{i}\sim\mbox{We}^{3/2}L_{0}^{2}\), consistent with the scaling we find in experiments and simulations for the highest We-numbers. For small values of \(\sqrt{\mbox{We}}\,L_{0}\), on the other hand, \(h_{c}^{0}\) dominates and \(V_{b}/V_{i}\sim h_{c}^{0}L_{0}^{-1}\) which explains the split between simulation curves with different \(L_{0}\) seen in Fig. 1\((e)\). The final missing piece of the puzzle is to explain what determines \(h_{c}\), and in particular its scaling with We and \(L_{0}\), equation (7). Recall that \(h_{c}\) is a characteristic scale of the universal dynamics in the breakup region (during stage III). As described above, \(h_{c}\) and the lateral lengthscale of this universal solution are proportional to each other. We argue that the dependence of this lateral scale on We and \(L_{0}\) originates from the stretching of the breakup region during stage II. 
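As an aside, the relations collected so far already combine into a compact prediction for the satellite fraction; the sketch below uses placeholder constants (the actual values of \(V_{b}/h_{c}^{3}\), \(h_{c}^{0}\) and \(a\) are those read off Fig. 5 and Table 1, not the defaults written here).

```python
# Sketch: the satellite-volume prediction assembled from Eq. (7) and V_b ~ h_c^3.
# The constants c3 (the universal ratio V_b / h_c^3), hc0 and a should be read
# off Fig. 5 and Table 1; the defaults below are placeholders, not paper values.
import numpy as np

def satellite_fraction(We, L0, c3=1.0, hc0=0.1, a=1.0):
    """Predicted V_b / V for a bridge of normalized volume L0 stretched at We."""
    h_c = hc0 + a * np.sqrt(We) * L0      # Eq. (7)
    V_b = c3 * h_c**3                     # universal proportionality (Fig. 5a)
    V = np.pi * L0                        # initial bridge volume in units of h0^3
    return V_b / V

# For sqrt(We)*L0 >> hc0/a this reduces to V_b/V ~ We^(3/2) * L0^2, i.e. a
# function of the single combination We * L0^(4/3), as in Fig. 1(d) and 1(e).
print(satellite_fraction(We=0.05, L0=1.8))
```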
Without stretching, starting with a bridge of the critical length \(L_{u}\), we expect that the spatial extent of the breakup region which is attracted to the final universal dynamics would be proportional to \(L_{u}\), so \[h_{c}^{0}\propto L_{u} \tag{8}\] with a universal constant of proportionality, independent of initial conditions and \(L_{0}\).4 Then, for finite We, the attracted region would be extended by the stretching during stage II which occurs over the We independent time interval \(\Delta t_{b}\). Indeed, the stretching of the edges of the bridge should induce a flow at the boundaries of the central region, with an axial velocity \(v_{b}\), which we expect to be equal to the plate velocity \(v_{b}=v_{p}=\sqrt{\mbox{We}}\). Taking \(l_{b}\), the lateral distance between the pinch-off points at the final time (see Fig. 2), as a proxy to the extent of the breakup region, we expect \(l_{b}=l_{b}^{0}+2v_{b}\Delta t_{b}=l_{b}^{0}+c\sqrt{\mbox{We}}\,L_{0}\) (with \(c=v_{b}/v_{p}b\)). This would indeed explain the dependence in equation (7), since from the universal solution Eq. (4) \(h_{c}\propto l_{b}\). Here, as we have before, we are assuming that stage III is very rapid, so that the breakup region is negligibly extended during that time. Footnote 4: While this is true to leading order, in the Appendix we show than \(h_{c}^{0},\ a\), and \(l_{b}^{0}\), the distance between pinch-off points, all have a weak \(L_{0}\) dependence, with \(\sim 10\%\) deviations, over the range \(L_{0}=1\) to \(2.5\). Testing the proposed picture, we find that \(l_{b}^{0}/L_{f}^{0}\approx 0.1\) in experiments (\(l_{b}^{0}/L_{f}^{0}\approx 0.16\) in simulations) see Table 2 and the Appendix, which we expect to be a universal property of the quasi-static breakup at \(L_{u}\). Moreover, we can extract the velocity \(v_{b}\) from the coefficient \(c\): \(v_{b}/v_{p}=(l_{b}-l_{b}^{0})/(L_{f}-L_{f}^{0})=c/b\). We find that in the experiments \(v_{b}\sim v_{p}\) see Table 1, so that the breakup region is stretched with the same velocity as the entire bridge. However, in the simulations \(v_{b}\sim 1.3v_{p}\), see the Appendix, i.e. the breakup region is stretched faster. While we do not have an explanation for this difference, it does not alter the properties of the universal dynamics or the scaling with \(\sqrt{\mathrm{We}}\,L_{0}\). Note that in the simulations, the breakup happens on a shorter time-scale, but the breakup region is stretched with a larger velocity. These two effects partially compensate each other in the total stretching of the central region, so that the breakup region is eventually stretched slightly less (\(\sim 80\%\)) in the simulations compared to experiments. Thus, the difference in the bridges length at breakup, \(L_{f}\), between simulations and experiments is more pronounced than the difference in \(h_{c}\). Note however that the volume of the satellite bubble scales as \(h_{c}^{3}\) so that the \(\sim 80\%\) difference in \(h_{c}\) translates to a factor of \(\sim 0.5\) in the volume, responsible for the shift between experiments and simulations observed in Fig. 1(c). ## IV Summary and discussion A fluid bridge that is held between two plates and slowly stretched will eventually break, undergoing a dramatic topological change. We find that, when the bridge is stretched at an approximately constant speed, the outcome of the breakup is highly predictable. 
A small satellite bubble always forms, its volume uniquely determined by two non-dimensional parameters: the (normalized) volume \(L_{0}\) of the bridge and the normalized average stretching speed \(\sqrt{\mathrm{We}}\). By varying these two parameters, the volume of the satellite bubble can be made to vary over two orders of magnitude, reaching up to \(10\%\) of the total bridge volume. The high reproducibility of the bubble size can be explained by the universality of the breakup dynamics, revealed by our simulations and experiments. In the slow stretching regime \(\mathrm{We}\ll 1\) considered here, the stretching has a negligible effect on the breakup dynamics itself, which are spatially localized to the central part of the bridge. Instead, the dramatic dependence of the satellite bubble size on \(\mathrm{We}\) is a consequence of the stretching of the breakup region, occurring prior to the universal breakup solution. Correspondingly, we find a simple dependence of the bubble size on the combination \(\sqrt{\mathrm{We}}\,L_{0}\) which arises due to two robust features. First, the universal breakup solution depends on a single length scale determining the bubble size. Second, it takes a fixed time, independent of the stretching and proportional to \(L_{0}\), to approach the universal solution (starting from a bridge with the unstable equilibrium length \(L_{u}(L_{0})\)). It is mainly during this time-interval that the breakup region is stretched. The bubble size can thus be increased either by increasing \(L_{0}\), which prolongs the time over which the breakup region is stretched prior to the convergence to the universal solution, or by increasing the We-number, increasing the stretching rate of that region during the approach. Beyond the understanding and control of the size of satellite bubbles, our results shed light on the universal features of static breakup, i.e., the breakup dynamics of a fluid bridge of a fixed critical length \(L_{u}(L_{0})\). In particular, while the universality of pinch-off dynamics is a well established phenomena, e.g. [14], the novelty here is that the attracting universal solution spans a finite region in space, remaining finite even as pinch-off is approached. Thus, it is not a similarity solution. Instead, we find that the extent of the breakup region is proportional to \(L_{u}(L_{0})\), with a universal proportionality constant. What selects this constant, determining the disconnected solution that the universal dynamics converges to (and thus the satellite size), remains to be understood. The dependence of the duration of the breakup process on the liquid bridge volume \(L_{0}\), is another robust aspect of the breakup process which merits future investigation.

Figure 5: \((a)\) Volume of the satellite bubble normalized by \(h_{c}^{3}\), as a function of \(\mathrm{We}\) and \(L_{0}\), in experiments and simulations. \((b)\) \(h_{c}\) as a function of \(\sqrt{\mathrm{We}}\,L_{0}\), in experiments and simulations. In all cases, we find \(h_{c}\) is linearly proportional to \(\sqrt{\mathrm{We}}\,L_{0}\) plus a constant term. In both plots, dots show the ensemble average of experimental results with error bars representing one standard deviation; lines show results from simulations.

Finally, let us discuss the possible implications of our results to other flows, beyond the fluid bridge configuration.
Our results are obtained in the regime of high Reynolds number and low Weber number, with breakup driven by a geometric incompatibility rather than by an externally imposed flow. For turbulent flows, the regime \(Re\gg 1,\text{We}\ll 1\) is applicable for small scale (sub-Hinze) drops (or bubbles) which are however larger than the Kolmogorov scale. Such drops typically do not break. However, rare but large turbulent fluctuations could deform the drop sufficiently that it could proceed to break and generate a satellite drop in a universal fashion as described here. While this could be a mechanism for generating very small satellite drops of varied but predictable sizes, most likely it is not the dominant one even for small satellites, most of which probably form from drops in the high We regime. Nevertheless, our work demonstrates that the singular nature of breakup can lead to universality of satellite formation, going beyond the universality of pinch-off dynamics. This opens the possibility of universality of satellite formation driven by other force balances--as is the case for pinch-off--an exciting prospect for understanding satellite formation and drop-size distributions in turbulent flows. ## Appendix A Experiments ### Experimental setup We prepare the soap film bridge by blowing a bubble of fixed volume (using a syringe) between two **acrylic** solid plates (diameter of \(4cm\)), and then slowly pulling the plates apart to achieve the desired initial aspect ratio. For pinned boundaries (Dirichlet), the cylinder is a stable constant pressure solution up to the aspect ratio \(L_{0}=2\pi\). However, because we initialize experiments by slowly pulling the plates apart, the soap film slips on the plates. Therefore, the largest aspect ratio we can achieve for the initial cylinder is actually \(L_{0}=\pi\), the boundary of stability for perturbations with a fixed contact angle at the plates, equal to \(\pi/2\) (Neumann boundary condition). The acrylic plates are glued onto two pillars which are mounted onto a low friction rail. We control the speed with which the pillars are pulled by connecting them (via strings) to an electric motor through a pulley, converting the rotation of the motor to linear motion of the pillars. The different rotation rates of the motor then produce different pulling speeds which we measure directly from the images, to be used in the definition of the We-number. We use a high speed camera to capture the breakup process, using a frame-rate in the range of \(4000-6500\) frames per second, depending on the We-number (the entire breakup process taking between \(\sim 30ms\) for the fastest experiments and \(\sim 300ms\) for the slowest ones). Most of our experiments were performed rapidly enough so that the soap film bridge effectively remained pinned at the boundaries, slippage being a much slower process. However, for the smallest aspect ratio we used, \(L_{0}\sim 1.1\), and especially for the slowest experiments, We \(\sim 0.002\)), we did observe slippage of the soap film at the boundaries (of about \(80\%\) for the lowest We, see the Appendix). This indicates that the time separation was not as good for these experiments, probably due to the long time it takes to reach \(L_{u}\) for this \(L_{0}\) (\(L_{u}\) being more than twice larger than \(L_{0}\), see Table 1). The main effect is probably to increase the actual normalized volume \(L_{0}\) for these experiments (since we use the height at the plates to normalize the volume), pushing \(L_{u}\) to higher values. 
We therefore discard the three slowest We-number experiments for the inference of \(L_{f}^{0}\), presented in table 1. Despite this, we find the difference between the expected \(L_{u}\) and \(L_{f}^{0}\) to be the largest for this aspect ratio. ### Parameters For our soap solution the measured surface tension is \(25\,\text{mN}/\text{m}\) so that for the soap-film bridge \(\sigma=50\,\text{mN}/\text{m}\). Viscous stresses become comparable to surface tension stresses at the scale \(l_{\nu}=\rho\nu^{2}/\sigma=5\times 10^{-9}\,\text{m}\) where \(\nu\) is the kinematic viscosity of air \(\nu\approx 1.5\times 10^{-5}\,\text{m}^{2}/\text{s}\). The maximal Ohnesorge number based on the minimal height \(h_{c}\sim 1\,\text{mm}\) in the experiment is given by \(\text{Oh}=\sqrt{l_{\nu}/h_{c}}\) and is of the order of \(\text{Oh}\sim 10^{-3}\). The Ohnesorge number at the pinch-off points does eventually become \(O(1)\), so that viscous effects become important. This however does not influence the earlier breakup stages in the central region of the bridge on which we focus here. In addition, the minimum Reynolds number in the experiments is \(\text{Re}=\sqrt{\text{We}}/\text{Oh}\sim 10^{2}\) implying that viscous effects are sub-leading both to inertia and surface tension during the breakup and before pinch-off. The maximal Bond number is \(\text{Bo}=g\rho h^{2}/\sigma\approx 0.05\) so that the effects of gravity are negligible (as evidenced by the axisymmetry of the soap-film bridge up to breakup). ### Symmetric necking and acceleration Two important features of our pulling protocol are that the pulling of the two plates is roughly symmetric (but see the Appendix for comments on boundary asymmetries) and that accelerations are small. These two features cause the initial necking of the bridge (where over pressure initially builds up and the instability develops) to occur roughly at the mid point between the two plates. Generally, wherever the initial necking point forms, the stretching can be considered to be locally symmetric around that point if its location does not accelerate. Then, there is no preferred direction in the reference frame of that point. If that is not the case, asymmetric features in the height profile will have time to develop. The condition for symmetric breakup is thus that the externally imposed acceleration is slower than that induced by surface tension: \(\sigma/(\rho h^{2}\dot{v})\gg 1\), where \(h\) is a typical height and \(\dot{v}\) a typical acceleration scale related to the movement of the boundaries. In our experiments, while there is always some acceleration of the plates, that acceleration remains small (e.g., since we do not reach high We), and we thus remain in the symmetric regime and do not observe breakup features that are due to acceleration, as explored in [26]. ### Image analysis We analyze the images with the scikit-image Python package [27], using the canny edge detector to extract the soap-film height and the final bubble volume. The height is computed as the half-distance between the upper and lower free surface of the bridge. The final bubble volume is computed after the bridge breaks, once the bubble relaxes to a spherical form. Then, for a series of images, we compute the bubble area and extract its radius. The volume is then computed based on the average (over the series of frames) radius. We compute the velocity \(v_{p}(t)\) from the soap-film bridge length \(\frac{L(t)}{2}\). 
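A minimal sketch of this edge-detection and height-extraction step is given below (illustrative only: the synthetic frame, the Canny smoothing scale, and the pixel calibration are placeholders rather than the values used in our analysis; real frames are loaded with `skimage.io.imread`).

```python
import numpy as np
from skimage import feature

# Synthetic stand-in for one high-speed frame; the bridge appears as a bright
# band between two free surfaces.
ny, nx = 200, 300
y = np.arange(ny)[:, None]
z = np.arange(nx)[None, :]
h_true = 60 - 20 * np.cos(2 * np.pi * z / nx)      # placeholder half-height in pixels
frame = (np.abs(y - ny / 2) < h_true).astype(float)

# Canny edge detection picks out the two free surfaces of the film.
edges = feature.canny(frame, sigma=2.0)            # sigma: placeholder smoothing scale

# The bridge height at each axial position (image column) is half the distance
# between the uppermost and lowermost edge pixels.
height_px = np.full(nx, np.nan)
for j in range(nx):
    rows = np.flatnonzero(edges[:, j])
    if rows.size >= 2:
        height_px[j] = 0.5 * (rows.max() - rows.min())

px_to_m = 2e-5                                     # placeholder pixel calibration
h_profile = height_px * px_to_m                    # h(z) for this frame

# After breakup, the bubble radius follows from its projected area A,
# r = sqrt(A / pi), and the volume 4/3 pi r^3 is averaged over frames.
def bubble_volume(area_px2):
    r = np.sqrt(area_px2 / np.pi) * px_to_m
    return 4.0 / 3.0 * np.pi * r ** 3
```

The velocity \(v_{p}(t)\) is then obtained from the fitted bridge length, as described next.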
We fit \(\frac{L(t)}{2}\) using a cubic spline (through the UnivariateSpline scipy package [28]) and then take its derivative. Profiles of the velocity for different We-number and \(L_{0}\) are shown in the Appendix. ### Definition of We To compute We we compute the time-averaged squared velocity, \(\langle v_{p}(t)^{2}\rangle\), starting the average after the velocity reaches the threshold value \(2v_{p}(t)=0.15m/s\). This choice ensures that we do not take into account the early stages of the stretching of the bridge when the plate acceleration is non-negligible (and put more weight on the velocity at later stages, when it is larger). Introducing such a threshold is physically justified for our purposes in the main text since we expect the early stages to correspond to the quasi-static phase of the dynamics, which is irrelevant for the breakup dynamics in stage II and III. However, while the We-number plays a key role in the analysis in the main text it is not uniquely defined in the experiments (since the plate velocity does not quite plateau). We have tried different definitions (such as using the averaged velocity or the velocity when the bridge length increases by a particular, large enough, increment) which did not produce a significant difference for our results except for a slight multiplicative increase in the We-number. Also note that the discrepancy between experiments and simulations could not be resolved by such an increase, as the required shift in We-number would be too large, corresponding to plate velocities which are larger than the maximal plate velocity per experiment (also, a single such correction could not produce a match between experiments and simulations for both \(L_{f}\) and \(V_{b}/V_{i}\)). Finally, note the ratio of the plate velocity and the stretching velocity of the breakup region \(v_{b}/v_{p}\) as inferred for the experiments is independent of such multiplicative factors in the definition of the We-number. ## Appendix B Theoretical modeling and considerations ### The one dimensional approximation We model the dynamics by the equations of a slender jet, even when the aspect ratio \(L\) is not large. Initially, the justification for this approximation is that deviations from a radially constant axial velocity \(v_{z}(r,z,t)=v(z,t)\) and pressure \(p(r,z,t)=p(z,t)\) are small: the initial pressure is uniform (constant atmospheric pressure and constant mean curvature for equilibrium shapes), and the boundary conditions for the velocity (which drives the dynamics) are also uniform in the radial direction. Then, working in cylindrical coordinates with \(0\leq r\leq h(z,t)\), we can expand around a radially constant solution for \(v_{z}\approx v_{z}^{0}(z,t)+\tilde{v}_{z}(r,z,t)\) and the pressure \(p(r,z,t)=p_{0}(z,t)+\tilde{p}(r,z,t)\), assuming deviations \(\tilde{v}_{z},\tilde{p}\) to be small in amplitude. The radial velocity is then determined by incompressibility and is given in the leading order by \(v_{r}=-v_{r}^{0}r/2\), and in particular at the free surface \(v_{r}=-v_{r}^{0}h/2\). 
This is the same relation obtained within the slender jet approximation, which means that \(v_{z}^{0}\), \(p_{0}(z,t)\) and \(h(z,t)\) satisfy the equations used in the slender jet approximation [22] (i.e., the momentum equations for \(v_{r},v_{z}\) are self-consistent to leading order, so that radial perturbations do not grow). In addition, when the bridge is stretched, radial perturbations become more and more suppressed, as it enters into the regime of a slender jet. \begin{table} \begin{tabular}{c c c c} \hline \hline & \(L_{0}\) & \(L_{u}\) & \(l_{b}^{0}/L_{f}^{0}\) \\ \hline \hline exp & \(1<L_{0}<1.2\) & \(2.4<L_{u}<2.7\) & \(0.1\) \\ exp & \(1.6<L_{0}<1.9\) & \(3.2<L_{u}<3.6\) & \(0.1\) \\ exp & \(2.2<L_{0}<2.4\) & \(3.9<L_{u}<4.1\) & \(0.09\) \\ sims & \(1<L_{0}<2.5\) & \(2.4<L_{u}<4.2\) & \(\sim 0.16\) \\ \end{tabular} \end{table} Table 2: Range of \(L_{0}\) in experiments and simulations and the corresponding range of \(L_{u}\); the ratio between the extrapolated bubble length at breakup for We = 0, \(l_{b}^{0}\), and the extrapolated final bridge length \(L_{f}^{0}\). \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \(\langle L_{0}\rangle\) & \(L_{u}(\langle L_{0}\rangle)\) & \(L_{f}^{0}\) & \(\Delta t_{b}/L_{0}\) & \(a\) & \(c\) & \(v_{b}/v_{p}\) \\ \hline \hline exp & \(1.1\) & \(2.6\) & \(3.1\) & \(1.1\) & \(0.22\) & \(2.4\) & \(1\) \\ exp & \(1.8\) & \(3.5\) & \(3.7\) & \(1.3\) & \(0.2\) & \(2.3\) & \(0.9\) \\ exp & \(2.3\) & \(4\) & \(4.2\) & \(1.2\) & \(0.2\) & \(2.4\) & \(1\) \\ sims & \(-\) & \(-\) & \(-\) & \(0.7\) & \(\sim 0.15\) & \(1.9\) & \(1.3\) \\ \end{tabular} \end{table} Table 1: Predicted critical length \(L_{u}(L_{0})\) and inferred \(L_{f}^{0}\), taken from experiments; inferred time interval, \(\Delta t_{b}\), between the critical length \(L_{u}\) until breakup, normalized by the volume \(L_{0}\): \(\Delta t_{b}/L_{0}=0.5b\); ratio of stretching speeds of the breakup region \(v_{b}\) and of the plates \(v_{p}\), inferred from \(v_{b}/v_{p}=(l_{b}-l_{b}^{0})/(L_{f}-L_{f}^{0})\). ### Equilibrium shapes The equilibrium bridge shapes are completely characterized by two parameters, which are typically chosen to be \(L=2l/h_{0}\), where \(2l\) is the length of the bridge, and \(S=V/V_{cyl}\), where \(V\) is the rescaled (by \(\pi h_{0}^{3}\)) volume of the bridge: \(V=V(0)=h_{0}^{2}2l(0)/h_{0}^{3}=L_{0}\), and \(V_{cyl}=L\) is the rescaled volume that a cylindrical bridge with length \(L\) and height \(h_{0}\) would have had. The parameters \(S\) and \(L\) are not independent during the dynamics: \(S(t)=l(0)/l(t)=L_{0}/L(t)\). Note that we always start from an initial cylindrical bridge, i.e., \(S(0)=1\). We use the parameterization introduced in [24] to determine the equilibrium shape for each (\(L,S=L_{0}/L\)) pair (solving the corresponding equations numerically). For each initial aspect ratio \(L_{0}\), we determine the critical bridge length \(L=L_{u}\) as the length at which these equations no longer have a solution. ## Appendix C Numerical methods To solve Eqns. 1-3, we change coordinates to \((s,\tau)\), where \(s=z/(\frac{L(t)}{2})\) and \(\tau=\log(L(t)/L_{0})=\log(1+2t/L_{0})\) (note this is a different definition of \(\tau\) than used in the main text), and \(t\) is non-dimensionalized with the advection timescale.
Then the equations of motion become \[\partial_{\tau}v-s\partial_{s}v+\mathrm{We}^{-1}\partial_{s}p=-v \partial_{s}v\] \[\partial_{\tau}\log h-s\partial_{s}\log h+\frac{1}{2}\partial_{s }v=-v\partial_{s}\log h\] \[p=\frac{e^{\tau}}{h\sqrt{e^{2\tau}+\left(\frac{2}{L_{0}}\right) ^{2}h_{s}^{2}}}-\left(\frac{2}{L_{0}}\right)^{2}\frac{e^{\tau}h_{ss}}{\left[e^ {2\tau}+\left(\frac{2}{L_{0}}\right)^{2}h_{s}^{2}\right]^{3/2}}\] \[h(\tau,\pm 1)=1\qquad v(\tau,\pm 1)=\pm 1 \tag{10}\] We solve the equations of motion with the Dedalus pseudospectral framework [29]. We represent \(v\), \(\log h\), \(h_{s}\), and \(p\) as Chebyshev series with 1024 modes. Nonlinear terms are calculated using the 3/2 dealiasing rule, although the presence of more complicated nonlinear products introduces some aliasing errors. We initialize with \(p=h=1\) (so that \(\log h=0\)), \(h_{s}=0\), and \(v=\tanh\left[(2.2/\pi)\tan(\pi s/2)\right]\). We run simulations for \(2/L_{0}\) ranging from 0.8 to 2.0 in increments of 0.5, and for \(\mathrm{We}=10^{-4}\), \(10^{-3}\), and 30 values ranging from \(10^{-2.5}\) to \(10^{0.5}\), equally spaced in logarithm. To timestep the equations, we use implicit-explicit timestepping methods from [30]. In particular, we use the 2nd order, two-stage Runge-Kutta method for the \(\mathrm{We}=10^{-4}\) simulations, and the 3rd order, four-stage Runge-Kutta method for all other simulations. All terms linear in the perturbation variables are treated implicitly, and all nonlinear terms are treated explicitly. As the bridge approaches breakup, the equations become stiffer. To improve the stability of the timestepping scheme, we define a base state, and evolve the (fully non-linear) deviation of each variable away from this base state. This allows us to implicitly timestep more linear terms in the equations than evolving the variables directly. We set the base state to be given by the problem variables every 100 timesteps. We initially take timesteps of size \(10^{-4}\), and evolve the system until breakup. To get better time resolution near breakup, we then restart from the simulation state between 150 and 200 timesteps before the breakup, taking timesteps that are half as large. We continue to iteratively restart the simulations with smaller and smaller timesteps until the timestep size is \(6.25\times 10^{-8}\). For \(\mathrm{We}=10^{-4}\), we start with timesteps of size \(2\times 10^{-4}\), and iteratively restart the simulations until the timestep is \(1.25\times 10^{-7}\). ## Appendix D Inertia-driven double pinch-off in a symmetric set-up We briefly analyze the dynamics at the necking point, based on the local equations and our observations in the simulations. At the necking point the surface is symmetric with respect to reflection in \(z\), \(h(z)\approx h^{0}+\frac{z^{2}}{2}h_{zz}^{0}\), whereas the velocity is anti-symmetric, with \(v(z)=v_{z}^{0}z+\dots\). Also note that around this point \(p(z)=p^{0}+\frac{1}{2}p_{zz}^{0}z^{2}\), where initially \(p_{zz}^{0}<0\) since the neck is defined by a pressure maximum (with fluid driven away from it).
The equations read: \[\partial_{t}v_{z}^{0}+(v_{z}^{0})^{2}=-p_{zz}^{0}=\frac{h_{zz}^{0} }{(h^{0})^{2}}+\frac{(h_{zz}^{0})^{2}}{h^{0}}-3(h_{zz}^{0})^{3}+h_{zzzz}^{0}\] \[\partial_{t}h^{0}=-v_{z}^{0}h^{0}\qquad\qquad\partial_{t}h_{zz}^ {0}=-\frac{3}{2}v_{z}^{0}h_{zz}^{0}-v_{zzz}^{0}\frac{h^{0}}{2} \tag{10}\] Thus, fluid is driven away from the neck, with \(v_{z}^{0}\) increasing with time, which decreases the height \(h^{0}\), as well as increasing the curvature \(h_{zz}^{0}\) as fluid accumulates near this point (this is a more delicate balance which has to do with \(v_{zzz}^{0}<0\) initially, as the neck region connects to a region with smaller velocities, where there is not much flow). Thus, the pressure difference further increases (dominated by terms \(\propto h_{zz}^{0}/h^{0}\), i.e., driven by the thin neck), driving the flow through the neck \(v_{z}^{0}\) further. However, at some point the flow through the neck becomes strong enough and fluid does not accumulate as much near the neck: \(h_{zz}^{0}\) does not increase as much and eventually begins decreasing, i.e., the surface gets flatter (the term \(-\frac{3}{2}v_{z}^{0}h_{zz}^{0}\) becoming dominant in its dynamics). This means that the pressure difference no longer grows as much as previously, and the advection becomes important also for the velocity, with the \((v_{z}^{0})^{2}\) term becoming important. That in turn leads to the deceleration of the surface at the neck, and an eventual overturning (when \(v_{zzz}^{0}\) switches sign). ## Appendix E Experimental considerations and data ### Compressibility effects The compressibility of the air inside the bridge can be neglected. First, the highest Mach number reached in experiments is \(\mathrm{Ma}=v_{p}/c_{s}\sim 10^{-2}\) (based on the largest plate velocity; the flow velocity induced by surface tension will be at most an order of magnitude larger, since \(\mathrm{We}\sim 0.1\) in those experiments and \(h_{c}\sim 0.1h_{0}\)). Second, the pressure difference induced by surface tension is at most of the order of \(\sigma/h_{min}\sim 10^{2}N/m^{2}\) (taking \(h_{min}=0.05h_{0}\approx 10^{-4}m\)), while the atmospheric pressure is \(1\mathrm{atm}=10^{5}N/m^{2}\), so that the deviations in the density of the air are of the order \(10^{-3}\) (equivalently, we may compare the scaling of the pressure gradients in the leading order, \(\propto\mathrm{Ma}^{-2}\), to the stress induced by surface tension, \(\propto\mathrm{We}^{-1}\), in the momentum balance equation, giving \(\mathrm{Ma}^{-2}\gg\mathrm{We}^{-1}\), i.e., the flow is incompressible in the leading order). ### Neglect of the soap-film surrounding the bridge The inertia of the liquid in the soap film, compared to that of the air inside the bridge, may be neglected in our experiments for most of the breakup process. To compare the inertia of the two we assume that the acceleration of the two fluids is comparable, meaning that we only need to compare their masses. The thickness of the soap film is smaller than \(500nm\) (blue colors could be observed in the interference patterns on our soap films), so let us estimate it as \(h_{s}\sim 1\mu\mathrm{m}\). The density of the soap solution is approximately that of water, \(\rho_{s}\sim 10^{3}kg/m^{3}\). The condition on the height of the bridge such that the soap solution inertia is negligible is thus \(2\pi h_{min}h_{s}\rho_{s}\ll\pi h_{min}^{2}\rho_{a}\), giving \(h_{min}\gg 2(\rho_{s}/\rho_{a})h_{s}\sim 10^{-3}\,\mathrm{m}\approx 0.1h_{0}\).
In the experiments with the thinnest neck formed before breakup, corresponding to the lowest We number, the height of the neck was about \(0.05h_{0}\), so those experiments may be affected by the soap film inertia (but perhaps only the dynamics at the final stages of breakup). Figure 6: Top: Velocity profiles for the different experiments (left) and a zoom in on small velocities and the threshold used for the definition of the We-number (right). Bottom: Velocity profiles for a particular motor rotation speed, producing We-numbers in the range \(1.6-2.5\) (left) and a histogram of the initial aspect ratio of the soap-film bridge in the experiments (right). ### Details of image analysis The spline parameters for \(l(t)\): the weights are \(1/(\#\text{ points})\), degree \(k=3\) and \(s=0.1\). For \(h(z)\): the weights are \(1/(\#\text{points})\) (and some outlier points removed), degree \(k=3\) and \(s=5\times 10^{-4}\). ### Control parameters: We and \(L_{0}\) The definition of the We number requires the assignment of a velocity to each experiment. In the experiments the velocity changes with time, as the speed with which the bridge is stretched increases from zero until it roughly saturates. We present the corresponding velocity profiles (as extracted from \(l(t)\)) in Fig. 6. To define the We number for each experiment we compute the time-averaged squared velocity, beginning from a threshold speed equal to \(0.15m/s\), so that the acceleration phase is not included. In the upper-right panel of Fig. 6 we show how that threshold compares with the slowest experiments. In the bottom-left panel of Fig. 6 we also demonstrate the variance of our pulling protocol, presenting velocity profiles for experiments with the same motor speed (but some with different \(L_{0}\)). Finally, while the experiments can be broadly divided into three distinct aspect ratios \(L_{0}\), in practice there was some distribution around those values. We present the histogram of the different aspect ratios in Fig. 6 in the bottom panel (right). ### Determination of \(h_{c}\) in experiments We consider that the shape has flattened once there is a region where the curvature has turned negative for the first time. However, there are also inflection points connecting the central region with the sides of the bridge, where the curvature is also negative. Thus, we only consider the curvature at points which are sufficiently low (implying they are in the necking region). This height threshold needs to be altered between the high We number experiments (where the necking region is higher) and the low We number experiments. A more subtle issue is that there are sometimes oscillations of the location of the minimum before the curve overturns at the center, as shown in Fig. 7. That typically leads to an estimate of \(h_{c}\) that is larger than would otherwise be natural. To mitigate this we also require that in the next time step the curvature in the same region is negative as well. Still, we believe that these oscillations bias the determination of \(h_{c}\), leading to the lower values of \(V_{b}/h_{c}^{3}\) seen in Fig. 14 for We-numbers in the range \(0.0006-0.003\) (where we empirically observe such oscillations). ### Special behaviour at high and low We-numbers For low We-numbers and for the smallest aspect ratio considered, we observe an \(\sim 80\%\) slippage of the contact line of the soap film bridge at the plates. We observe this phenomenon for all the bridges with these parameters; an example is given in Fig.
8 (top), where the shapes of bridges with three different We-numbers (corresponding to those excluded from the fit for \(L_{f}\) in the main text) are shown at the final time. Both axes are normalized with \(h_{0}\), the initial height of the bridge at the plates. For these We-numbers and \(L_{0}\), the increase in length from the bridge's initial length up to \(L_{u}\) is the largest, and the bridges are pulled the slowest, meaning that reaching breakup stage II takes the longest. Thus, those bridges slip during the first stage of the dynamics (as the time of the first stage apparently becomes comparable to the slippage time), altering the corresponding normalized volume \(L_{0}\). For the highest We-number experiments we observe a systematic asymmetry in the breakup: the pinch-off always first occurs on the left, as seen in Fig. 8 (bottom) for a few examples with different We-number and \(L_{0}\). This asymmetry seems to originate from an asymmetry at the boundaries (there was some drainage of the soap film at the boundaries, which we absorbed by attaching tissue paper to the plate; this helped stabilize the bridge but led to an asymmetry between the two plates, with a small bulge on the left plate which can be seen in the images in the main text). Note however that this did not seem to have a significant effect on the earlier breakup dynamics during the universal dynamics, and in particular did not seem to alter the scaling of the satellite bubble size for these We-numbers. ## Appendix F Simulations and experiments: the three dynamical stages ### Quasi-static bridge evolution In Fig. 9 we plot equilibrium bridge shapes along with bridge shapes from simulations with \(\text{We}=0.07\). The simulated bridges are very close to the equilibrium bridges, until we approach \(t_{u}\), the time at which the equilibrium bridge becomes unstable. As We decreases, the quasi-static dynamics follow the equilibrium shapes more closely (not shown). ### Final bridge length \(L_{f}\) and time to breakup We show the bridge length at breakup \(L_{f}\) as a function of \(\sqrt{\mathrm{We}}\,L_{0}\) in Fig. 10 (top), where each point represents a single experiment and the lines show the linear fit. In Fig. 10 (bottom left) we show how the breakup time scale \(\Delta t_{b}/L_{0}\), extracted from the linear fit \(L_{f}=L_{f}^{0}+2\sqrt{\mathrm{We}}\,\Delta t_{b}\) for simulations of varying We-number and \(L_{0}\), behaves as a function of \(L_{0}\). A slight increase for lower \(L_{0}\) can be observed. Fig. 10 (bottom right) shows \(L_{f}^{0}\), extracted from this fit, as a function of \(L_{u}\), the unstable bridge length. Each point corresponds to a different initial aspect ratio \(L_{0}\). In the simulations, \(L_{f}^{0}\) is very close to \(L_{u}\). In the experiments, \(L_{f}^{0}\) is somewhat larger, likely due to slippage of the bridge contact line for low-We experiments. ### The characteristic scale in the universal regime: \(l_{b}\) and \(h_{c}\) In Fig. 11 we present the scaling of \(l_{b}\), the lateral distance between the pinch-off points at the breakup time of the bridge. In the top row, on the left, we show the ensemble-averaged data from experiments as a function of We-number, along with a linear fit to \(l_{b}=l_{b}^{0}+c\sqrt{\mathrm{We}}\,L_{0}\). It can be seen that the different lines indeed have different slopes corresponding to the different \(L_{0}\).
On the right we show that the data collapse when plotted as a function of \(h_{c}\) and that there is a linear dependence between \(l_{b}\) and \(h_{c}\). Curves correspond to data from simulations in this plot. In the middle row, on the left, we show that the constant of proportionality is about 12, for the full range of We-numbers and \(L_{0}\) in our simulations. On the right in the middle row we show the ratio \(l_{b}^{0}/L_{f}^{0}\) as a function of \(L_{0}\) for the simulations, showing it is roughly constant, with a slight increase at smaller \(L_{0}\). Figure 8: Left: The soap film bridge shape at breakup for \(L_{0}\approx 1.1\). Note that the bubble height at the boundaries is lower than 1, implying there was slippage at the boundaries. Right: Asymmetry in the breakup shape for the highest We-number for three different aspect ratios. The asymmetry is inherited from the asymmetry between the two plates. Figure 7: Choice of \(h_{c}\) in the experiments: the choice is sometimes complicated by the fact that there are oscillations of the height at the center which occur for some of the experiments. This seems to occur generically for We-numbers in the range \(0.006-0.03\), giving rise to a larger \(h_{c}\) evaluation for We-numbers in this range. Example from an experiment with \(\mathrm{We}=0.025\) and \(L_{0}=2.4\). The bottom figure shows \(v_{b}/v_{p}\equiv(l_{b}-l_{b}^{0})/(L_{f}-L_{f}^{0})=c/b\) as a function of \(L_{0}\) for our range of simulations using the linear fits. The ratio is very nearly constant, equal to \(1.3\). For completeness, the constant \(h_{c}^{0}\) and slope \(a\) extracted from the linear fit \(h_{c}=h_{c}^{0}+a\sqrt{\mathrm{We}}\,L_{0}\) for the simulation data are also shown as a function of \(L_{0}\). The same trend as observed above for \(l_{b}\) and \(L_{f}\) can be seen here: both are almost constant with a slight increase towards smaller \(L_{0}\). ### Bridge profiles at \(t_{c}\) and \(t_{f}\)--experiments In Fig. 13 we present the height of the soap-film bridge at the time \(t_{c}\) when it is flat (top row) and at the final breakup time \(t_{f}\) (bottom row). The range of We-number and \(L_{0}\) chosen is similar to those shown in the main text for the simulations. On the left we show the curves when both axes are normalized by the initial height of the bridge at the plates, \(h_{0}\), whereas on the right we normalize by the height at the center of the bridge. The collapse of the shapes is evident (though it is not as good at time \(t_{c}\)). ### Final bubble volume In Fig. 14 (top) we demonstrate that the functional form of \(V_{b}/V_{i}\) is indeed identical once the data on the x-axis for the simulations are multiplied by a factor of \(0.55\). In Fig. 14 (bottom) we show \(V_{b}/h_{c}^{3}\), where each point corresponds to a single experiment.
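The extrapolations to \(\mathrm{We}=0\) quoted in this appendix (\(L_{f}^{0}\), \(l_{b}^{0}\), \(h_{c}^{0}\)) all come from straight-line fits in the variable \(\sqrt{\mathrm{We}}\,L_{0}\). A minimal sketch of such a fit is given below; the numbers are made-up placeholders standing in for the per-experiment data.

```python
import numpy as np

# Placeholder data: one entry per experiment within a single L0 group; in
# practice these arrays come from the image analysis described above.
We = np.array([0.002, 0.01, 0.05, 0.2])
L0 = np.array([1.8, 1.8, 1.8, 1.8])
Lf = np.array([3.8, 3.9, 4.2, 4.7])

x = np.sqrt(We) * L0
b, Lf0 = np.polyfit(x, Lf, 1)     # fit  L_f = L_f^0 + b * sqrt(We) * L0
dt_b_over_L0 = 0.5 * b            # Delta t_b / L0, cf. Table 1
print(Lf0, dt_b_over_L0)
```

The same two-parameter fit, with slopes \(c\) and \(a\), yields \(l_{b}^{0}\) and \(h_{c}^{0}\).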
2303.06098
Experimental realization of nonunitary multi-qubit operations
We demonstrate a novel experimental toolset that enables irreversible multi-qubit operations on a quantum platform. To exemplify our approach, we realize two elementary nonunitary operations: the OR and NOR gates. The electronic states of two trapped $^{40}$Ca$^{+}$ ions encode the logical information, and a co-trapped $^{88}$Sr$^{+}$ ion provides the irreversibility of the gate by a dissipation channel through sideband cooling. We measure $87\%$ and $81\%$ success rates for the OR and NOR gates, respectively. The presented methods are a stepping stone towards other nonunitary operations such as in quantum error correction and quantum machine learning.
Martin W. van Mourik, Elias Zapusek, Pavel Hrmo, Lukas Gerster, Rainer Blatt, Thomas Monz, Philipp Schindler, Florentin Reiter
2023-03-10T17:33:50Z
http://arxiv.org/abs/2303.06098v1
# Experimental realization of nonunitary multi-qubit operations ###### Abstract We demonstrate a novel experimental toolset that enables irreversible multi-qubit operations on a quantum platform. To exemplify our approach, we realize two elementary nonunitary operations: the OR and NOR gates. The electronic states of two trapped \({}^{40}\)Ca\({}^{+}\)ions encode the logical information, and a co-trapped \({}^{88}\)Sr\({}^{+}\)ion provides the irreversibility of the gate by a dissipation channel through sideband cooling. We measure 87% and 81% success rates for the OR and NOR gates, respectively. The presented methods are a stepping stone towards other nonunitary operations such as in quantum error correction and quantum machine learning. _Introduction.--_ Classical computing is an immensely successful information processing paradigm. The success of computing can largely be explained by the rapid increase in computational power enabled by the miniaturization of the underlying circuits built from classical, irreversible gate operations (cf. Fig. 1(a)). Today, the exponential growth of gate count on classical processors is reaching fundamental physical limits [1]. In the continued pursuit of increasing computational power, a multitude of alternate technologies is being explored [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. As an approach orthogonal to classical information processing quantum computing has recently received considerable attention. Here, substantial advancements have been made, allowing for first demonstrations of essential ingredients such as quantum error correction [14; 15; 16; 17; 18; 19]. This can be attributed to novel and advanced proposals and the continued improvement of established techniques [20; 21; 22; 23; 24]. Such advancements in controllability bring quantum computation closer to the ideal of an entirely unitary evolution towards the output state. In certain algorithms, however, _nonunitary_ operations are required in combination with unitary quantum gates. Among these are algorithms for quantum machine learning, quantum optimization, and simulation, which are regarded as some of the most promising near-term applications for quantum information processing [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. Specifically, nonunitary operations are needed for the generation of low-temperature thermal states [27; 28; 29; 30], as a projective filter [31], for the simulation of open systems [32; 33; 34; 35; 36; 37], or in quantum neural networks [38; 18]. It has been suggested to implement nonunitary components through auxiliary qubits, randomized circuits, or mixed input states [39; 30; 31; 32; 33; 34; 35; 36]. Dissipation is inherently nonunitary, making it a natural choice for the creation of irreversible operations. The field of dissipation engineering, or reservoir engineering, uses the interaction of a quantum system with environmental degrees of freedom to achieve quantum information processing tasks [39; 18; 31; 32; 33; 34; 35; 36]. Applications include state preparation by optical pumping, squeezing [40; 41], entanglement generation [42; 43; 44; 45; 46; 47; 48; 49], quantum simulation [50], and quantum error correction [51]. Dissipation towards the environment lifts the requirement for classical measurement and feedback, and it holds scaling and robustness advantages over unitary approaches [43; 44; 45]. It has been formally shown that dissipation can be used to perform universal quantum computation [53]. 
Still, so far dissipation engineering has mostly been focused on quantum state preparation and subspace stabilization. We expand the possible set of applications taking a step towards a paradigm of general nonunitary quantum operations, by demonstrating the realization of irreversible classical gates (cf. Fig. 1(a)) by means of Figure 1: (a) Truth tables of the classical OR and NOR gates, with two-qubit output. The logical output is mapped on the left qubit. (b) Schematic representation of the OR gate acting on \(|01\rangle\): an engineered resonance process \(D_{f}\) is a combination of a global and single-ion laser pulse, which together allow transfer to the desired state, \(|11\rangle\), plus an increase in motional mode occupation. This action is made irreversible by dissipation \(\Gamma_{f}\) of this additional motion, by cooling a spectator ion species. (c) Overview of the relevant states in the data ions, \({}^{40}\)Ca\({}^{+}\), and cooling ion, \({}^{88}\)Sr\({}^{+}\). Logical bits \(|0\rangle\) and \(|1\rangle\) are stored in Calcium’s \(4S_{1/2}\) ground states. Auxiliary levels in the \(3D_{5/2}\) manifold are used for engineered resonance transfer, \(D_{f}\). Dissipation \(\Gamma_{e}\) and \(\Gamma_{f}\) occurs through spontaneous decay from \(P_{3/2}\) to \(S_{1/2}\), in Calcium and Strontium. engineered dissipation. Here we present a physical realization of nonunitary operations in a trapped-ion system by use of dissipation engineering. By utilizing techniques from dissipation engineering, one can create nonunitary quantum gates that operate deterministically and without the need for ancilla qubits [54]. To this end, from quantum-mechanical interactions, we engineer the desired projective dynamics effecting classical gate operations. We implement a classical OR and NOR gate, whose truth tables are shown in Fig. 1(a), where the output of the gate action is mapped onto the left qubit. We employ selective coherent couplings to conditionally excite electronic states, utilizing the ions' shared motional modes, schematically outlined in Fig. 1(b). Both sympathetic cooling and decay via an auxiliary level serve as the nonunitary components and complete the gate action. Experimentally, the desired dynamics can be implemented in a mixed-species trapped-ion system with single-qubit addressing capabilities [55]. Through our work we show that carefully engineered nonunitary quantum dynamics have the potential to enrich the quantum engineer's toolbox, by performing a broad class of operations. _Principle of operation.--_ Two co-trapped \({}^{40}\)Ca\({}^{+}\)ions serve as information carriers, with the logical states \(\ket{0}\) and \(\ket{1}\) encoded in the \(4S_{1/2}(m=-1/2)\) and \((m=+1/2)\) Zeeman sub-levels (see Fig. 1(c)). The ions are trapped in a harmonic potential, and share motional modes. We use the notation \(\ket{i}_{1}\otimes\ket{j}_{2}\otimes\ket{n}_{m}=\ket{ij}\ket{n}_{m}\), for electronic states \(i\) and \(j\) of ions \(1\) and \(2\), and mode occupation \(n\) of a specific motional mode \(m\). For brevity, the mode occupation \(n\) is often omitted in our notation when \(n=0\), i.e. \(\ket{ij}=\ket{ij}\ket{0}_{m}\). As seen in the truth-table in Fig. 1(a), the OR gate corresponds to the mapping of \(\ket{01}\) to \(\ket{11}\), which is analogous to the condition that the first qubit is flipped from \(\ket{0}\) to \(\ket{1}\) if and only if the second qubit is the state \(\ket{1}\). 
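At the logical level, the OR and NOR operations of Fig. 1(a) are many-to-one (and hence irreversible) maps on the four two-qubit basis states. The following short numpy sketch spells out these classical transfer matrices; it illustrates only the target action of the gates, not the physical mechanism described below.

```python
import numpy as np

basis = ["00", "01", "10", "11"]

def transfer_matrix(mapping):
    """Column-stochastic matrix T with T[out, in] = 1 for each input state."""
    T = np.zeros((4, 4))
    for inp, out in mapping.items():
        T[basis.index(out), basis.index(inp)] = 1.0
    return T

# OR gate: |01> -> |11>, all other basis states unchanged (output on left qubit).
T_or = transfer_matrix({"00": "00", "01": "11", "10": "10", "11": "11"})

# NOR gate: |00> -> |10>, |10> -> |00>, |11> -> |01>, |01> unchanged.
T_nor = transfer_matrix({"00": "10", "01": "01", "10": "00", "11": "01"})

# Irreversibility: two inputs share an output, so the matrices are singular.
assert np.linalg.matrix_rank(T_or) == 3 and np.linalg.matrix_rank(T_nor) == 3
```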
The desired conditional operation is augmented by making use of a specific motional mode to encode information about the parity of the system. Access to motional modes is enabled through the auxiliary state \(\ket{f}\), for which we use the metastable \(3D_{5/2}(m=+1/2)\) level. The \(4S_{1/2}\leftrightarrow 3D_{5/2}\) transition to this auxiliary state is coupled with coherent \(729\) nm light. The population transfer mechanism is summarized below (details in Supplemental material [56]). A drive with Rabi frequency \(\Omega_{f}\) and detuning \(\Delta\) is applied to the first ion's \(\ket{0}\leftrightarrow\ket{f}\) transition. We refer to this drive as the probe. Without any further couplings, the probe would excite the two states \(\ket{00}\) and \(\ket{01}\), and leave the states \(\ket{10}\) and \(\ket{11}\) unchanged. A second drive, which we refer to as the sideband drive, is applied to both ions on the \(\ket{f}\leftrightarrow\ket{1}\) transition, though red-detuned by the frequency of the motional mode. This drive therefore couples the states \(\ket{f}\ket{n}_{m}\leftrightarrow\ket{1}\ket{n+1}_{m}\). In particular, the transition between \(\ket{f}\ket{0}_{m}\leftrightarrow\ket{1}\ket{1}_{m}\) occurs with Rabi frequency \(\Omega_{\text{SB}}\). Under the condition that \(\Omega_{\text{SB}}\gg\Omega_{f}\), the states excited from \(\ket{01}\ket{0}_{m}\) form dressed states \((\ket{f1}\ket{0}_{m}+\ket{1f}\ket{0}_{m}\pm\sqrt{2}\ket{11}\ket{1}_{m})/2\). These dressed states have a frequency shift of \(\pm\Omega_{\text{SB}}/\sqrt{2}\), with respect to the bare \(\ket{f1}\ket{0}_{m}\) state, as shown in Fig. 2(a), left. In contrast, the initial state \(\ket{00}\ket{0}_{m}\) is excited to the dressed states (Fig. 2(a), right), which reside at frequencies \(\pm\Omega_{\text{SB}}/2\). The three-level dressed states have increased frequency shifts compared to the two-level dressed states, because of constructive interference of the couplings \(\{\ket{1f}\ket{0}_{m},\ket{f1}\ket{0}_{m}\}\leftrightarrow\ket{11}\ket{1}_{m}\). Choosing a Figure 2: Overview of the gate mechanism. (a) Desired (left) and undesired (right) process for the OR gate. State \(\ket{01}\ket{0}_{m}\) is off-resonantly driven to \(\ket{f1}\ket{0}_{m}\) with Rabi frequency \(\Omega_{f}\). Due to the coupling \(\Omega_{\text{SB}}\), \(\ket{f1}\ket{0}_{m}\) is hybridized with states \(\ket{11}\ket{1}_{m}\) and \(\ket{1f}\ket{0}\), resulting in a dressed state splitting. For \(\Delta=\Omega_{\text{SB}}/\sqrt{2}\), the carrier drive is on resonance with the dressed state, and is therefore excited. The gate action is completed by sympathetically cooling the motional mode. The dressed states \(\ket{f0}\ket{0}_{m}\) and \(\ket{10}\ket{1}_{m}\), accessible from \(\ket{00}\ket{0}_{m}\), are shifted by \(\pm\Omega_{\text{SB}}/2\), and are therefore not in resonance with the carrier drive. Excitation from \(\ket{00}\ket{0}_{m}\) is thus suppressed. (b) Pulse sequence of OR and NOR gates. Transitions used in the experiment are indicated in Fig. 1(c). \(\Omega_{f}\) and \(\Omega_{e}\) act on the first ion, coupling \(\ket{0}\) with \(\ket{f}\) and \(\ket{1}\) with \(\ket{e}\). \(\Omega_{\text{SB}}\) acts on both ions, coupling the red sideband of \(\ket{1}\) and \(\ket{f}\). probe pulse detuning \(\Delta=\Omega_{\mathrm{SB}}/\sqrt{2}\) therefore enables excitation from \(\ket{01}\), while excitation from \(\ket{00}\) is out of resonance and thus suppressed. 
This population transfer through engineered resonance is denoted by \(D_{f}\) in Fig. 1(b) and (c). The conditional excitation is made nonunitary by a decay process enabled by sideband cooling a co-trapped \({}^{88}\mathrm{Sr}^{+}\mathrm{ion}\), indicated in Fig. 1(c). In the OR gate, population that cycles through \(\ket{11}\ket{1}_{m}\) is dissipatively transferred to \(\ket{11}\ket{0}_{m}\) at a rate \(\Gamma_{f}\). Since \(\ket{11}\ket{0}_{m}\) does not couple with either the sideband drive nor the probe, population remains in this state, thus completing the transfer \(\ket{01}\rightarrow\ket{11}\). In order to avoid interfering with the excitation during the probe process, the dissipation is realized in a subsequent step [44; 48]. We expand this mechanism to the universal NOR gate, whose truth table is shown in Fig. 1(a). This gate can be constructed by concatenating a unitary NOT gate with the OR gate. However, we present a fully dissipative implementation where the mapping \(\ket{00}\rightarrow\ket{10}\) follows the same procedure as the OR gate. In contrast to the OR gate, for the NOR gate we use the detuning \(\Delta=\Omega_{\mathrm{SB}}/2\) to excite the initial state \(\ket{00}\). In addition, the transfers \(\ket{11}\rightarrow\ket{01}\) and \(\ket{10}\rightarrow\ket{00}\) are required for the gate action which can be implemented by a single-qubit dissipative process. Both mappings are achieved by optically pumping the first ion from \(\ket{1}\) to \(\ket{0}\) over another auxiliary level, \(3D_{5/2}(m=-3/2)\equiv\ket{e}\), and subsequently to \(4P_{3/2}(m=-3/2)\), from where spontaneous decay returns population to the \(\ket{0}\) state. _Experimental overview.--_ The experiments have been carried out with a segmented surface trap in a cryogenic environment [55]. Ions are stored in the Ca-Sr-Ca configuration. Collisions with particles in the background gas may disrupt this orientation. Therefore, we periodically apply a sequence of voltages to the trap electrodes that deterministically place the ions back in the desired configuration [57; 58]. We set the sideband drive to couple to the crystal's axial in-phase (ip) mode. Confining potentials are set so that the in-phase mode frequency is \(\omega_{\mathrm{ip}}/(2\pi)=550\,\mathrm{kHz}\). This value is chosen as a trade-off between ensuring a low motional mode heating rate (\(\hat{n}\propto\omega_{\mathrm{ip}}^{-\alpha}\) with \(\alpha\approx 2\)) and a sufficiently high coupling to the motional mode \(\Omega_{\mathrm{SB}}\) through laser interaction (\(\Omega_{\mathrm{SB}}\propto\omega_{\mathrm{ip}}^{-1/2}\)) [59]. At this frequency, we have measured an axial in-phase mode heating rate of \(106(20)\) phonons per second, and an initial mean mode occupation of \(0.14\) phonons after sideband cooling. Both the axial in- and out-of-phase modes of the ion crystal are sideband cooled. Ions are initialized in \(\ket{00}\) using optical pumping. We prepare the remaining possible initial states \(\ket{01}\), \(\ket{10}\), and \(\ket{11}\), using a combination of single-ion and collective \(\pi\)-pulses on the \(\ket{0}\leftrightarrow\ket{f}\) and \(\ket{f}\leftrightarrow\ket{1}\) transitions. The sequences of operations for the OR and NOR operations are schematically shown in Fig. 2(b), referring to the states shown in Fig. 1(c), with \(\Omega_{f}\) and \(\Omega_{e}\) acting only on the first ion, coupling \(\ket{0}\) with \(\ket{f}\) and \(\ket{1}\) with \(\ket{e}\), respectively. 
\(\Omega_{\mathrm{SB}}\) acts on both ions, and couples the red sideband of \(\ket{1}\) and \(\ket{f}\). For the OR operation, the initial state \(\ket{01}\) is to be transferred to \(\ket{11}\), while all other initial states remain unchanged. The dressed state splitting is produced with a sideband drive with Rabi frequency \(\Omega_{\mathrm{SB}}/(2\pi)\approx 8\,\mathrm{kHz}\). The probe beam, simultaneously applied to only the first ion, is detuned by \(\Delta=\Omega_{\mathrm{SB}}/\sqrt{2}\) from the \(\ket{0}\leftrightarrow\ket{f}\) carrier transition, with an on-resonance Rabi frequency \(\Omega_{f}/(2\pi)\approx 1.15\,\mathrm{kHz}\). These pulses are applied for a duration of \(2\pi/\Omega_{f}=900\,\mathrm{\SIUnitSymbolMicro s}\), which excites \(\ket{01}\) to the dressed state as shown in Fig. 2(a) left. Similar transfer from an initial state of \(\ket{00}\) is suppressed because the resonance condition, shown in Fig. 2(a) right, is not met. Following this state-dependent population transfer, the state \(\ket{11}\ket{1}_{m}\) is dissipatively transferred to \(\ket{11}\ket{0}_{m}\) by cooling the Sr ion. The sideband coupling \(\Omega_{\mathrm{SB}}\) is maintained during the cooling step, which fully depletes the populated dressed state. The NOR gate follows a similar procedure as above, shown in Fig. 2(b), though now population transfer from \(\ket{00}\) is enabled by choosing \(\Delta=\Omega_{\mathrm{SB}}/2\), which suppresses excitation from \(\ket{01}\). The additional channel of dissipation required by the NOR operation, \(\ket{1}\rightarrow\ket{0}\) for only the first ion, is performed in multiple steps since it would otherwise conflict with the simultaneously required \(\ket{00}\rightarrow\ket{10}\) operation. Preceding the engineered dissipation, population in the first ion's \(\ket{1}\) state is stored in \(\ket{e}\). After the engineered dissipation, \(\sigma^{-}\)-polarized light at \(854\) nm transfers population in \(\ket{e}\) to the \(4P_{3/2}\) level, favoring the \(m=-3/2\) Zeeman sublevel, from which spontaneous decay brings it to \(\ket{0}\). At the end of the sequence, the population is read out with state-dependent fluorescence detection using an EMCCD camera, which distinguishes excitation of the \(S\) and \(D\) manifolds. As the logical information is carried in the two \(S\)-levels (cf. Fig.1(c)), the population in \(\ket{1}\) needs to be transferred to \(\ket{e}\) before the measurement. This state-readout does not differentiate between between \(\ket{1}\) and \(\ket{f}\). We use the notation \(P_{ij}\) to indicate the population in state \(\ket{ij}\). We can separately measure the occupation of the motional mode by applying a pulse on resonance with either the red or blue sideband of one of the Strontium ion's \(5S_{1/2}\leftrightarrow 4D_{5/2}\) transitions, and reading out its state [59]. The difference of the excitation probability of the red and blue sideband excitations is used to infer the population in the motional ground state. _Results.--_ We first demonstrate the central building block of resonance engineering, the state-dependent population transfer, by showing its time-evolution, using a detuning of \(\Delta=\Omega_{\mathrm{SB}}/2\). The change in population is shown for all four initial states. The intended behavior, Rabi cycling from \(\ket{00}\) and no transfer from the other initial states, is apparent in Fig. 3(a). The solid lines denote simulated data. 
The simulations numerically solve the system's master equation [56], and use experimentally determined parameters described above, including the initial phonon number and heating rate. The gray dashed line marks the duration of the pulse with the maximum state transfer, \(600\,\mathrm{\SIUnitSymbolMicro s}\), where \(82\%\) of population has depleted from \(\ket{00}\), and only \(16\%\) from \(\ket{01}\). The deviation from a full population transfer is attributed to the non-zero initial phonon number and heating rate, corroborated by the simulated results. After this population transfer, the state \((\ket{10}\ket{1}_{m}+\ket{f0}\ket{0}_{m})/\sqrt{2}\) should be dissipatively transferred to \(\ket{10}\ket{0}_{m}\). We demonstrate this process by showing the evolution of the populations \(P_{f0}\) and \(P_{10}\) and the phonon ground state occupation over time in Fig. 3(b). The electronic states are differentiated by running the measurement twice: once with the transfer of \(\ket{1}\) to \(\ket{f}\), and once without. The latter measurement does not discriminate between \(\ket{0}\) and \(\ket{1}\). The population \(P_{10}\) is inferred from the difference between the first and second measurement. We additionally measure the phonon occupation. The lines are simulated results, using the same sideband coupling strength \(\Omega_{\mathrm{SB}}\) as in (a). A dissipation rate of \(\Gamma_{f}=4.5(6)\,\mathrm{kHz}\) is determined by a least-squares fit between the simulated and measured results. After \(1\,\mathrm{ms}\) of applying the dissipation pulse, approximately \(80\%\) of the population is in \(\ket{10}\ket{0}_{m}\). The population is trapped there because of the irreversible nature of the dissipation. Having demonstrated and characterized the engineered resonance and dissipation processes, we apply these steps within the full pulse sequences shown in Fig 2(b) to perform the OR and NOR gates. Figure 4 shows the measured population outcome for each of the four possible initial states for the OR and NOR gates. Populations are determined from \(50\) experimental repetitions for each input state. Both truth tables exhibit the intended gate behavior: for all input states, the majority of the population is transferred to the desired state, marked in the figure with dashed boxes. The initial states \(\ket{01}\) and \(\ket{00}\) are transferred following the engineered resonance scheme for the OR and NOR gates, and have success rates of \(84(5)\%\) and \(74(6)\%\). As confirmed by simulations and analytics [60], the primary source of error is attributed to a non-zero initial phonon number and the heating rate. Since the coupling strength to a motional sideband is dependent on the phonon number, the resonance condition of the engineered population transfer is not met for \(n\geq 1\). _Conclusion and Outlook.--_ We have implemented nonunitary multi-qubit operations in a trapped-ion system by use of engineered dissipation. The schemes for the OR and NOR gate performed the operations with average fidelities of \(87(5)\%\) and \(81(5)\%\), respectively. This constitutes the first realization of dissipative quantum gate operations. The leading source of error stems from heating and thus imperfect cooling of the mode over which the intended dissipation process occurs. 
This heating process results from electronic noise on the trap surface and from phonon transfer from other uncooled motional modes caused by mode-coupling, both of which are known challenges of microfabricated ion traps [61; 62], and is further exacerbated by complications involved in mixed-species operation [58]. Figure 4: Measured population truth tables of the OR and NOR gates, with the intended output states marked with dashed lines. Values are in percent, and are determined from \(50\) experimental shots for each input setting. The OR and NOR gates have an average population fidelity of \(87(5)\%\) and \(81(5)\%\). Figure 3: (a) Experimental demonstration of state-dependent population transfer with \(\Delta=\Omega_{\mathrm{SB}}/2\), shown for each possible initial state. The lines indicate simulated results, which include measured initial phonon number and heating rate as simulation parameters. (b) Demonstration of dissipation, after a maximal probe transfer from \(\ket{00}\) (at a time marked by the gray dashed line in (a)). We show the evolution of \(P_{f0}\) and \(P_{10}\), and use sideband thermometry on the Strontium ion to infer the ground-state phonon occupation. The lines show simulated results, in which the dissipation rate \(\Gamma_{f}\) is obtained through a least-squares fit between simulated and measured data. Such issues are technical, and do not pose fundamental limitations: future experiments could implement improved trap design and manufacturing to reduce heating due to technical noise, and improved cooling techniques such as polarization gradient cooling [63] and electromagnetically-induced-transparency (EIT) cooling [64]. Much like recent dissipative high-fidelity schemes for entangled state preparation [46; 47; 48; 49] improved upon the fidelities of their first-generation counterparts, we would expect future implementations to improve the fidelity. Nonunitary operations are of relevance in a wide range of quantum information algorithms. For example, in a NISQ context, quantum convolutional neural networks use measurements and conditional feedback operations to process information [18; 38]. These elements could be replaced by integrated nonunitary operations, thereby avoiding classical measurements and feedforward. Regarding universal fault-tolerant quantum computation, quantum error correction also constitutes a nonunitary process, as multiple erroneous processes are mapped to the same corrected state. Our work can be seen as a stepping stone towards an implementation of autonomous quantum error correction, in which erroneous states are coherently mapped to oscillator excitations and are then removed through dissipation [51]. We have demonstrated the required techniques, resonance engineering and sympathetic cooling, in the present experiment.
acknowledge funding from the Swiss National Science Foundation (Ambizione grant no. PfZ00P2 186040) and the ETH Research Grant ETH-49 20-2. _Author Contributions_ E.Z. developed, under the guidance of F.R., the theoretical protocol. M.v.M, F.R., P.S., and E.Z. designed the experiment. M.v.M. carried out the experiment and analyzed the data. M.v.M., P.H., and L.G. contributed to the experimental setup. R.B., T.M., F.R., P.S. supervised the project. All authors contributed to the manuscript.
2301.06839
Unconventional criticality, scaling breakdown, and diverse universality classes in the Wilson-Cowan model of neural dynamics
The Wilson-Cowan model constitutes a paradigmatic approach to understanding the collective dynamics of networks of excitatory and inhibitory units. It has been profusely used in the literature to analyze the possible phases of neural networks at a mean-field level, e.g., assuming large fully-connected networks. Moreover, its stochastic counterpart allows one to study fluctuation-induced phenomena, such as avalanches. Here, we revisit the stochastic Wilson-Cowan model paying special attention to the possible phase transitions between quiescent and active phases. We unveil eight possible types of phase transitions, including continuous ones with scaling behavior belonging to known universality classes -- such as directed percolation and tricritical directed percolation -- as well as novel ones. In particular, we show that under some special circumstances, at a so-called Hopf tricritical directed percolation transition, rather unconventional behavior including an anomalous breakdown of scaling emerges. These results broaden our knowledge of the possible types of critical behavior in networks of excitatory and inhibitory units and are of relevance to understanding avalanche dynamics in actual neuronal recordings. From a more general perspective, these results help extend the theory of non-equilibrium phase transitions into quiescent or absorbing states.
Helena Christina Piuvezam, Bóris Marin, Mauro Copelli, Miguel A. Muñoz
2023-01-17T12:36:25Z
http://arxiv.org/abs/2301.06839v1
# Unconventional criticality, scaling breakdown, and diverse universality classes ###### Abstract The Wilson-Cowan model constitutes a paradigmatic approach to understanding the collective dynamics of networks of excitatory and inhibitory units. It has been profusely used in the literature to analyze the possible phases of neural networks at a mean-field level, e.g., assuming large fully-connected networks. Moreover, its stochastic counterpart allows one to study fluctuation-induced phenomena, such as avalanches. Here, we revisit the stochastic Wilson-Cowan model paying special attention to the possible phase transitions between quiescent and active phases. We unveil eight possible types of phase transitions, including continuous ones with scaling behavior belonging to known universality classes --such as directed percolation and tricritical directed percolation-- as well as novel ones. In particular, we show that under some special circumstances, at a so-called Hopf tricritical directed percolation transition, rather unconventional behavior including an anomalous breakdown of scaling emerges. These results broaden our knowledge of the possible types of critical behavior in networks of excitatory and inhibitory units and are of relevance to understanding avalanche dynamics in actual neuronal recordings. From a more general perspective, these results help extend the theory of non-equilibrium phase transitions into quiescent or absorbing states. ## I Introduction A large variety of natural systems exhibit continuous (second-order) phase transitions between an active phase and a quiescent (or absorbing) one where all activity ceases [1; 2; 3; 4; 5]. These systems often exhibit scaling behavior around the phase-transition point and this is typically described by the _directed percolation_ universality class, as originally conjectured by Janssen and Grassberger [6; 7]. Actually, directed percolation (DP) is one of the most robust classes of universal critical behavior away from thermal equilibrium [1; 2; 3; 4; 5; 8], as it describes all possible phase transitions into an absorbing state --even for multi-component systems [9]-- in the absence of additional symmetries or conservation laws [1; 3; 4; 5; 10]. Moreover, some of the representative models of this class, such as the branching process and the contact process [11; 12], have been broadly studied in a large variety of contexts, including countless applications in materials science, turbulence, epidemics, theoretical ecology, social sciences, and neuroscience. Conversely, under some circumstances, phase transitions into quiescent states occur in a discontinuous (or first-order) rather than continuous manner. This is often the case when higher-order reactions are considered, where at least a pair of active units are required to activate the third one [13; 5; 14]. This situation usually involves a bistable regime (i.e., with phase coexistence), leading to hysteresis. There are also well-studied systems (see e.g., a modified contact process [13; 15]) that include both types of transitions, continuous and discontinuous, as well as a _tricritical_ point with a scaling behavior that differs from DP and is described by the so-called tricritical directed percolation (TDP) universality class [13]. In the context of neuronal systems, the experimental work by Beggs and Plenz reported on the existence of _neuronal avalanches_ (i.e., outbursts of neuronal activity between quiescent periods). 
These exhibited highly-variable sizes and durations, which were power-law distributed. Moreover, the associated exponents were found to be consistent with those of critical systems in the mean-field DP universality class [16], suggesting that brain dynamics could be poised near the edge of a phase transition [17; 18; 19; 20; 21; 22]. Further experimental works reported evidence of some scaling exponents that deviate from DP [23; 24], so that the interpretation of the scaling behavior in terms of universality classes remains a current matter of debate [19]. In particular, the possible departure from the standard DP class (together with the possible existence of discontinuous transitions in brain dynamics [25; 26; 27]) raises a number of questions from the theoretical point of view. For example, the fact that neuronal networks include inhibitory units -- which hinder activity propagation and are not usually included in simple models in the DP class, such as the standard branching process -- triggered a renewed interest in the scrutiny of alternative types of critical behavior (as well as discontinuous transitions and tricriticality) in networks of excitatory and inhibitory units [28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. Do different types of quiescent to active phase transitions emerge in simple models of activity propagation once inhibitory effects are considered? Here, to further advance our knowledge along these lines, we study one of the most broadly studied parsimonious models in neuroscience: the Wilson-Cowan model [38] as well as its stochastic counterpart [37; 28]. We systematically analyze the resulting phase diagram, the possible phases and phase transitions. In particular, we reveal that, depending on the relative strengths of excitatory and inhibitory couplings, there can be up to 8 different types of phase transitions into quiescence. Some of them exhibit well-known scaling behavior (such as DP or TDP), while others are discontinuous or show different types of anomalies in scaling or even mixed features of continuous and discontinuous transitions. Finally, we elucidate a novel type of phase transition that is highly non-trivial, exhibiting unconventional behavior and breakdown of scaling. Our results help rationalize and categorize the possible types of criticality in networks of excitatory and inhibitory units, contributing to the advance of the brain-criticality hypothesis and of the general theory of non-equilibrium phase transitions [4]. ## II The Wilson-Cowan model and its stochastic counterpart In its original formulation, the Wilson-Cowan model describes the collective deterministic (or "mean-field") behavior of a local population of both excitatory and inhibitory neurons by means of two coupled differential equations [38]. These equations reproduce --as a function of a set of coupling-strength parameters-- a variety of possible dynamical regimes, all of which with counterparts in actual neuronal systems [39; 40; 34; 41] that are delimited by phase transitions (bifurcation lines) [42; 43]. To go beyond this deterministic or mean-field picture, Benayoun _et al._[28] proposed a microscopic version of the Wilson-Cowan model in the form of a Markovian process for a population of coupled excitatory and inhibitory individual binary neurons that can be either active or inactive [44]. 
In the _stochastic Wilson-Cowan_ (SWC) model, the state of each unit \(\ell\) at a given time \(t\) -- which can be either excitatory (E) or inhibitory (I) -- is given by \(\sigma_{\ell}^{E/I}(t)=1\) for active neurons and \(\sigma_{\ell}^{E/I}(t)=0\) for inactive ones. These state variables change according to a master equation specified by transition rates defined as follows. Each active neuron, regardless of its type, shifts from the active to the quiescent state (decay), \(1\to 0\), at a constant rate \(\alpha\). The reverse transition (activation), \(0\to 1\), occurs at rate \(\Phi(s^{\ell})\), defined as \[\Phi(s^{\ell})=\left\{\begin{array}{ll}\tanh(s^{\ell}),&\mbox{if $s^{\ell}>0$}\\ 0,&\mbox{otherwise},\end{array}\right. \tag{1}\] where the _input_ \(s^{\ell}\) to neuron \(\ell\) is \[s^{\ell}=\sum_{m}w^{\ell m}\sigma_{m}+h, \tag{2}\] \(w^{\ell m}\) is the synaptic weight from neuron \(m\) to neuron \(\ell\), and \(h\) is a constant external input. Observe that the form of the _response function_, \(\Phi(s^{\ell})\), in Eq. (1) enforces the non-negativity of the transition rates. In what follows, the synaptic weights are chosen to depend only on the type (excitatory or inhibitory) of both the pre-synaptic and the post-synaptic neuron, leaving (as sketched in Fig. 1a) only four free parameters: \(w^{\ell m}\in\{w_{EE},w_{EI},w_{IE},w_{II}\}\ \ \forall\ell,m\), where, e.g., \(w_{IE}\) is the excitatory coupling strength to inhibitory neurons, and so forth. Figure 1: (a) Sketch of the Wilson-Cowan model, including both excitatory and inhibitory populations and synaptic couplings between them. (b) The piecewise-smooth nature of the response function \(\Phi\) in Eq. (1) generates three different regions in the state space, depending on the sign of the total input \(s\): in region I, \(s^{E}\) and \(s^{I}\) are both positive; in region II, \(s^{E}\) is negative and \(s^{I}\) is positive; and, finally, in region III, \(s^{E}\) and \(s^{I}\) are negative (the figure illustrates these regions for \(w_{EE}=2.5\), \(w_{EI}=1.5\), \(w_{IE}=1.5\), \(w_{II}=0.5\), and \(h=0\)). Observe that when \(w_{EE}/w_{EI}>w_{IE}/w_{II}\), lines \(s^{E}=0\) and \(s^{I}=0\) switch position and region II then shows \(s^{E}>0\) and \(s^{I}<0\). This condition is not explicitly explored because the analytical results are preserved under it. The trajectories, shown as red dashed arrows, illustrate how the system is attracted to the origin (quiescent state). Trajectories that start from initial conditions \(E(0)>I(0)\) typically decay to zero through region I. As discussed in Section IV, initial conditions in region I that are close to the _switching manifold_ \(s^{E}=0\) can cross over to region II. Furthermore, regions II and III are trapping: once trajectories cross the switching manifold, they are unable to return. Previous works on this model have often employed symmetric weights as a way to reduce the dimensionality of the phase diagram; e.g., common excitatory (\(w_{E}\equiv w_{EE}=w_{IE}\)) and inhibitory (\(w_{I}\equiv w_{II}=w_{EI}\)) inputs [28; 34; 45]. In order to systematically explore the full set of possible phase transitions, here, we remove such constraints. This stochastic process can be implemented on different types of networks, as specified by a _connectivity matrix_. As a first approach, one can assume a large fully-connected network of size \(N\).
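For concreteness, a minimal sketch of such a fully-connected SWC simulation is given below (Python with NumPy; the function and parameter names are ours and purely illustrative, not taken from any original implementation). Because the network is fully connected and the weights depend only on neuron type, the microstate reduces to the numbers of active excitatory and inhibitory units, and the master equation can be sampled exactly with Gillespie's algorithm.

```python
import numpy as np

def phi(s):
    """Response function of Eq. (1): tanh(s) for positive input, zero otherwise."""
    return np.tanh(s) if s > 0.0 else 0.0

def gillespie_swc(NE=5000, NI=5000, wEE=2.5, wEI=1.5, wIE=1.5, wII=0.5,
                  alpha=1.0, h=0.0, nE0=500, nI0=500, t_max=100.0, seed=0):
    """Exact stochastic simulation of the fully connected SWC model.

    nE and nI count the active excitatory/inhibitory units; the per-neuron input
    uses the densities E = nE/NE and I = nI/NI, so the large-N limit reproduces
    the mean-field equations given below. Returns sampled (t, E, I) triples."""
    rng = np.random.default_rng(seed)
    nE, nI, t = nE0, nI0, 0.0
    traj = [(t, nE / NE, nI / NI)]
    while t < t_max and (nE + nI) > 0:
        E, I = nE / NE, nI / NI
        rates = np.array([
            alpha * nE,                               # E deactivation: nE -> nE - 1
            alpha * nI,                               # I deactivation: nI -> nI - 1
            (NE - nE) * phi(wEE * E - wEI * I + h),   # E activation:   nE -> nE + 1
            (NI - nI) * phi(wIE * E - wII * I + h),   # I activation:   nI -> nI + 1
        ])
        total = rates.sum()
        if total <= 0.0:                              # defensive guard (e.g., alpha = 0)
            break
        t += rng.exponential(1.0 / total)
        channel = rng.choice(4, p=rates / total)
        nE += (-1, 0, 1, 0)[channel]
        nI += (0, -1, 0, 1)[channel]
        traj.append((t, nE / NE, nI / NI))
    return np.array(traj)
```

The default couplings are simply those quoted in the caption of Fig. 1b; any of the parameter sets of Fig. 3 can be substituted to explore the regimes discussed below, and starting from a single active excitatory unit (nE0 = 1, nI0 = 0) gives a natural starting point for the spreading and avalanche measurements described later.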
Indeed, performing a (network) size expansion [46; 47], one recovers -- up to leading-order-- the standard Wilson-Cowan equations [28], written as: \[\dot{E} = -\alpha E+(1-E)\,\Phi\left(w_{EE}E-w_{EI}I+h\right), \tag{3}\] \[\dot{I} = -\alpha I+(1\ -\ I)\,\Phi\left(w_{IE}E-w_{II}\ I+h\right)\, \tag{4}\] where \(E\) and \(I\) are the densities of active excitatory and inhibitory neurons, respectively [28] and \(\alpha\) is the decay rate. Similarly, by adding next-to-leading corrections, one obtains a set of two Langevin equations including square-root noise (similar to the ones for DP and TDP), which we do not write explicitly here. These stochastic equations allow one to describe fluctuation effects in finite-size (fully connected) networks [28; 46; 48] -- though the forthcoming computational analyses refer to simulations of the microscopic model-- as well as to perform a systematic scaling analysis of the full model. Observe that, owing to the discontinuous derivative of \(\Phi\) at zero, Eq. (3) and Eq. (4) are piecewise smooth differential equations [49; 50; 51] -- i.e., they are smooth everywhere except at _switching manifolds_, which are defined by the conditions of vanishing input in the response function: \(s^{E}\equiv w_{EE}E-w_{EI}I+h=0\) and \(s^{I}\equiv w_{IE}E-w_{II}\ I+h=0\). These two conditions divide the state space into three regions: I, II, and III, as illustrated in Fig. 1b: * In region II, Eq. (3) becomes \(\dot{E}=-\alpha E\) and trajectories in this region decay exponentially fast to the quiescent phase (either crossing to region III or not). * Similarly, in region III, \(\dot{E}=-\alpha E\) and also \(\dot{I}=-\alpha I\), leading to an even faster decay to quiescence. * Conversely, in region I, the total input does not vanish for either sub-population and the dynamics can be more complex, possibly reaching non-trivial (active) fixed points. Inspection of Eq. (3) and Eq. (4) readily reveals that trajectories starting in regions II and III do not cross over to region I (as excitation always diminishes in these regimes), but the opposite can happen (see, e.g., the central trajectory shown in Fig.1 as well as Appendix VI.3). In the next sections, we explore in detail, both analytically and numerically, the features of each of the possible phase diagrams as well as all the possible phase transitions between quiescent and active states. ## III Mean-field phase diagrams: general and specific features To avoid confusion, let us first underline that in what follows we refer indistinctly to _phase transitions_ or to _bifurcations_, as the present focus is on the description of fully-connected networks (i.e., mean-field systems). Thereby, DP transitions correspond to transcritical bifurcations, discontinuous transitions to saddle-node bifurcations, tricritical points to saddle-node-transcritical (codimension-2) bifurcations [52; 53], and so on. In the absence of any external driving force (\(h=0\)), the steady-state conditions for Eq. (3) and Eq.(4) always admit a trivial solution \(E^{*}=I^{*}=0\), which defines the _quiescent phase_ as well as, possibly, some non-trivial solutions (\(E^{*}>0\) and \(I^{*}>0\)) of the following equations, \[E^{*} = \frac{1}{w_{IE}}\left[w_{II}I^{*}+\Phi^{-1}\left(\frac{\alpha I^{ *}}{1-I^{*}}\right)\right], \tag{5}\] \[I^{*} = \frac{1}{w_{EI}}\left[w_{EE}E^{*}-\Phi^{-1}\left(\frac{\alpha E^ {*}}{1-E^{*}}\right)\right] \tag{6}\] and define the _active phase_. Observe that Eq. (5) and Eq. 
(6) are well-defined only as long as \(\Phi^{-1}\) exists, i.e., in region I (Fig. 1b), so that non-trivial solutions exist only inside said region. Let us now analyze the overall phase diagram, describing the stable phases as a function of the model parameters. In particular, without loss of generality, we keep the activity-decay rate \(\alpha\neq 0\) and the self-inhibition weight \(w_{II}\geq 0\) fixed. Choosing \(w_{EE}\) and \(w_{EI}\) as control parameters, depending on the value of the remaining free parameter, \(w_{IE}\), the system may display three qualitatively different types of phase diagrams in the \((w_{EE},w_{EI})\) plane. Other parameter choices are possible, but the system is always described by one of these three qualitatively different types of phase diagrams. ### Quiescent phase and its stability limits First of all, let us stress that the quiescent phase is always stable (and is the only stable state) with respect to the introduction of _inhibition-dominated_ perturbations, i.e., in regions II and III, so in what follows we focus on its stability and the resulting phase diagram as a result of _excitation-dominated_ perturbations. Importantly, there are two different types of quiescent phases: (i) The first one is a standard quiescent one, i.e., a regime in which the quiescent phase is locally stable to excitation-dominated perturbations (Fig. 2a). This occurs if the eigenvalues of the associated stability matrix, as specified by: \[\lambda_{\pm}=\frac{w_{EE}-2\alpha-w_{II}\pm\sqrt{(w_{EE}+w_{II})^{2}-4w_{EI}w _{IE}}}{2}\, \tag{7}\] have negative real parts (white zone in the diagrams of Fig. 3). (ii) Alternatively, if the eigenvalues have positive real parts and an imaginary component, then, in principle, one could expect oscillations away from quiescence to emerge. However, given the non-smooth piecewise dynamics, the resulting "curvy" trajectories end up crossing over to region II, where the dynamics follow the equation \(\dot{E}=-\alpha E\) and the quiescent phase is the only attractor. Therefore, in this regime, small excitatory perturbations to the quiescent phase may give rise to large trajectories in state space before returning to quiescence (see Fig. 2 and [28; 37]). This property is called "excitability" (or "reactivity" [54]) and the corresponding quiescent state is called "excitable quiescent". In both cases, either when the quiescent state is standard or excitable, it loses its stability when the real part of the largest eigenvalue becomes positive, which (from Eq.(7)) occurs at \[w_{EE}^{T}=\alpha+\frac{w_{EI}w_{IE}}{\alpha+w_{II}}\;. \tag{8}\] Not surprisingly, separating the previous two phases (standard quiescent and excitable quiescent), there is a line of (supercritical) Hopf bifurcations (dot-dashed vertical lines in Fig. 3) at \[w_{EE}^{H}=2\alpha+w_{II} \tag{9}\] with the additional constraint that there is a non-vanishing imaginary part, i.e., from Eq.(7): \[(w_{EE}+w_{II})^{2}-4w_{EI}w_{IE}<0 \tag{10}\] (so that the bifurcation is only defined above the Hopf-transcritical line). Summing up, there are two types of quiescent phases, separated by a line of Hopf bifurcations: * A **standard quiescent phase**, which is a locally stable node (upper left regions in Fig. 3); * An **excitable quiescent phase**, which is a locally stable focus (upper right regions in Fig. 3). 
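These linear-stability boundaries are straightforward to evaluate numerically. The short sketch below (our own illustration, with hypothetical function names) implements Eq. (7), Eq. (8), and Eq. (9) and locates a given coupling set relative to the transcritical and Hopf lines; it deliberately omits the saddle-node line discussed in the next subsection, so it only distinguishes the two quiescent regimes from the loss of linear stability.

```python
import numpy as np

def quiescent_eigenvalues(wEE, wEI, wIE, wII, alpha):
    """Eigenvalues of the linearization around E = I = 0, Eq. (7)."""
    tr = wEE - 2.0 * alpha - wII
    disc = (wEE + wII) ** 2 - 4.0 * wEI * wIE      # negative -> complex pair, cf. Eq. (10)
    root = np.sqrt(complex(disc))
    return (tr + root) / 2.0, (tr - root) / 2.0

def classify_quiescent(wEE, wEI, wIE, wII, alpha):
    """Position relative to the transcritical (Eq. 8) and Hopf (Eq. 9) lines."""
    wEE_T = alpha + wEI * wIE / (alpha + wII)      # transcritical line, Eq. (8)
    wEE_H = 2.0 * alpha + wII                      # Hopf line, Eq. (9)
    if wEE > wEE_T:
        return "quiescent state linearly unstable (active phase)"
    if wEE > wEE_H:
        return "excitable quiescent (right of the Hopf line)"
    return "standard quiescent (left of the Hopf line)"

# Illustrative points (our choice) with alpha = 1, wII = 0.2, wEI = 2.0, wIE = 1.0,
# for which wEE_H = 2.2 and wEE_T ~ 2.67
for wEE in (1.5, 2.4, 2.8):
    print(wEE, classify_quiescent(wEE, 2.0, 1.0, 0.2, 1.0),
          quiescent_eigenvalues(wEE, 2.0, 1.0, 0.2, 1.0))
```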
### Active phase and its stability limits The active phase becomes a stable solution either at (i) a transcritical bifurcation (i.e., it emerges continuously once the quiescent phase loses its stability in a DP transition), which occurs for Eq. (8) as represented by the dashed lines in Fig. 3; (ii) a saddle-node bifurcation, i.e, emerging discontinuously (solid line in Fig. 3) at \[w_{EE}^{SN}=\min_{E^{*}}\left[\frac{w_{EI}I^{*}(E^{*})}{E^{*}}+\frac{1}{E^{*}} \Phi^{-1}\left(\frac{\alpha E^{*}}{1-E^{*}}\right)\right]\;, \tag{11}\] where \(E^{*}\) and \(I^{*}\) are solutions of Eq. (5) and Eq.(6) that can be solved numerically; or (iii) at a tricritical point, where the previous two lines meet (black dot in Fig. 3), to which one can also refer as "saddle-node-transcritical" (SNT) point (its location \((w_{EE}^{SNT},w_{EI}^{SNT})\) in the phase diagram is explicitly derived in Appendix VI.2; see, in particular, Eq.(39) and Eq. (40)). ### Relative location of the line of Hopf bifurcations Observe that the line of Hopf bifurcations -- which as shown in Fig. 3a is always vertical in the \((w_{EE},w_{EI})\) plane -- collides with the line of transcritical bifurcations at a special point (here named Hopf-transcritical (HT) bifurcation, which is marked with an empty circle in the different panels of Fig. 3). From Eq.(9) and Eq.(8) one can easily derive the conditions for the HT point to occur: \[w_{EI}^{HT} =\frac{(\alpha+w_{II})^{2}}{w_{IE}}, \tag{12}\] \[w_{EE}^{HT} =2\alpha+w_{II}. \tag{13}\] The key aspect distinguishing the three possible topologies of the phase diagram is whether this HT point lies to the right (case A), left (case C), or on top of the tricritical (SNT) point (case B) in phase space, i.e.: * Case A: \(w_{EE}^{SNT}<w_{EE}^{HT}\) * Case B: \(w_{EE}^{SNT}=w_{EE}^{HT}\) * Case C: \(w_{EE}^{SNT}>w_{EE}^{HT}\). As already mentioned, these three possibilities are illustrated in Fig. 3, in which the value of \(w_{IE}\) changes to switch from one regime to the other. Also, note that case Figure 2: Two different types of quiescent phases. Time series towards the absorbing state of trajectories (a) on the standard quiescent phase (\(w_{EE}=1.2\) and \(w_{IE}=0.2\)) and (b) on the excitable quiescent phase (\(w_{EE}=2.2\) and \(w_{IE}=1.0\)). Observe the non-monotonicity in the second case, which is a manifestation of the excitability of the quiescent state: perturbations can be amplified before trajectories finally decay to quiescence. The insets show the phase space for these two cases, respectively, as well as the corresponding switching manifolds and some sample trajectories (arrows). In the second case, trajectories cross the switching manifold. Parameter values are \(\alpha=1.0\), \(w_{II}=0.2\), and \(w_{EI}=2.0\). B requires a higher level of fine-tuning than the other two cases, which appear in broad regions of parameter space. From here on, one needs to separately discuss the three aforementioned possible structures of the phase diagram. #### iii.2.1 Case A: Left panel in Fig. 3 In this case, the HT point lies to the right of the tricritical point. Visual inspection of Fig. 3A reveals that there are four different ways to go from a quiescent phase (either standard or excitable) to the active one. 
These are labeled as: \(T_{1}\), for the transcritical bifurcation from the standard quiescent phase; \(T_{2}\), for a standard tricritical transition; \(T_{3}\), for a transition through a bistable regime (saddle-node bifurcation with coexistence between the standard quiescent and the active phase); and \(T_{4}\), also for a discontinuous transition with bistability, although in this case, between the excitable quiescent phase and the active one. #### iii.2.2 Case B: Central panel in Fig. 3 Here, the HT point lies exactly on top of the tricritical point. This structure leads to only three possible types of transitions: \(T_{1}\) and \(T_{4}\) (as already described), and a new transition labeled \(T_{5}\), which occurs through the tricritical (SNT) point that coincides with the special HT point in a codimension 3 bifurcation. #### iii.2.3 Case C: Right panel in Fig. 3 In this last case, the HT point lies to the left of the tricritical point and there are five types of transitions, including the standard transcritical (\(T_{1}\)) and saddle-node (\(T_{4}\)) ones, as well as three novel ones: \(T_{6}\), a transition through the special HT point; \(T_{7}\), a transcritical bifurcation but into the excitable quiescent phase; and, finally, \(T_{8}\), a tricritical (or SNT) transition into the excitable quiescent phase. In the next section, we analyze these eight types of phase transitions (or bifurcations) -- from \(T_{1}\) to \(T_{8}\) -- scrutinizing the corresponding peculiarities for each of them. Figure 3: Different phase diagrams of the model (Eq. (3) and Eq. (4)), with excitation-dominated initial conditions and parameter values: \(\alpha=1\), \(w_{II}=0\), and \(w_{IE}=3\) (Case A), \(w_{IE}=1\) (Case B), and \(w_{IE}=0.8\) (Case C). In all cases, the (vertical dot-dashed) line of Hopf bifurcations separates the standard quiescent phase from the excitable quiescent one. (a) In case A, the Hopf line collides with the (diagonal and dashed) transcritical line to the right of the tricritical point (black dot), within a region of bistability. (c) In case C, the situation is reversed and the intersection of the Hopf line with the transcritical line occurs to the left of the tricritical point, defining a Hopf-transcritical bifurcation. Observe that a line of bifurcations, induced by the piecewise smooth nature of the system, unfolds from the Hopf-transcritical point (dashed black line). (b) Between the previous two cases, case B (for which fine-tuning a third parameter, \(w_{IE}=1\), is needed) the Hopf line collides with the transcritical one at a codimension 3 bifurcation point that we call Hopf-tricritical point (Hopf saddle-node-transcritical bifurcation). The horizontal black segments \(T_{1},\dots,T_{8}\) represent 8 qualitatively different ways to transition from a quiescent to an active state as the control parameter, \(w_{EE}\), increases. ## IV Scaling properties at the different types of transitions Standard linear stability analysis of the fixed points of the (mean-field) dynamics, Eq. (3) and Eq. (4), allows one to study the nature of bifurcations and make analytical predictions for the scaling behavior [1; 4; 55]. In particular, a linear approximation of Eq. (5) around the quiescent solution yields a value of \(I^{*}\) proportional to the density of active excitatory neurons \(E^{*}\), hence in what follows we employ indistinctly either the latter or the sum of both as an order parameter. In all cases and for all possible types of transitions, we compute the usual quantities and scaling exponents
customarily employed in the analysis of quiescent-active phase transitions (as long as they are well-defined). Even if, generally, three independent exponent values suffice to fully determine the universality class [55; 56], here, for the sake of completeness, we compute more, which also allows us to check for consistency. We compute "_static exponents_": such as **(i)** \(\beta\), the control-parameter one (\(E^{*}\propto\Delta^{\beta}\)), where \(\Delta=w_{EE}-w_{EE}^{*}\) is the distance to the transition point and \(w_{EE}^{*}\) stands, generically, for the value of \(w_{EE}\) at the specified bifurcation; **(ii)** \(\delta_{h}\), defined by \(E^{*}\propto h^{1/\delta_{h}}\), representing the response to a constant external field \(h\), at criticality. "_Correlation exponents_" (\(\nu\)) such as the one for **(iii)** the correlation length, \(\xi_{\perp}\), \(\xi_{\perp}\propto\Delta^{-\nu_{\perp}}\), and for **(iv)** the correlation time, \(\xi_{\parallel}\), \(\xi_{\parallel}\propto\Delta^{-\nu_{\parallel}}\). "_Dynamic exponents_": such as **(v)** \(\theta\), which governs the time decay of the order parameter \(E(t)\propto t^{-\theta}\). "_Spreading exponents_" such as those describing: **(vi)** the total number of active sites, \(N(t)\propto t^{\eta}\); **(vii)** the mean-squared radius in surviving runs, \(R^{2}(t)\propto t^{z}\); and **(viii)** the survival probability, \(P_{s}(t)\propto t^{-\delta}\) [1], as well as "_avalanche exponents_" defined by: **(ix)** \(P(S)\sim S^{-\tau}\), for the distribution of avalanche sizes, \(S\); **(x)** \(P(T)\sim T^{-\tau_{t}}\), for durations, \(T\); and **(xi)** \(\langle S\rangle\sim T^{\gamma}\), linking durations with averaged sizes, \(\langle S\rangle\). Note that these last exponents (spreading and avalanche ones) are not independent of each other, but related through scaling relations; e.g. [55]: \[\tau = \frac{1+\eta+2\delta}{1+\eta+\delta}\;, \tag{14}\] \[\tau_{t} = 1+\delta\;, \tag{15}\] \[\gamma = \frac{\tau_{t}-1}{\tau-1}=1+\delta+\eta\;, \tag{16}\] where the last one describes the "crackling noise" scaling relation [57]. Other scaling relations can be found in [55; 4; 5], in particular, \[\theta=\beta/\nu_{\parallel}\;, \tag{17}\] relates static and dynamic exponents. Associated with the crackling noise exponent, for standard processes with absorbing states (e.g., DP and TDP), the averaged shape of avalanches with different durations and sizes (or "mean temporal profile of avalanches") collapses onto a universal curve that typically has a symmetric parabolic form (see Sec. V) [58; 59]. It is noteworthy that there is a set of exponents that can be argued to remain unchanged across transition types (a fact that is also confirmed numerically). Due to the diffusive nature of the system in all continuous transitions, correlations (\(\xi\)) should diverge at the critical point with mean-field exponents (\(\nu\)) as follows: \(\xi_{\perp}\propto\Delta^{-\nu_{\perp}}\) with \(\nu_{\perp}=1/2\), for the correlation length; and \(\xi_{\parallel}\propto\Delta^{-\nu_{\parallel}}\) with \(\nu_{\parallel}=1\), for the time correlation. From this, given that [55] \(z=2\nu_{\perp}/\nu_{\parallel}\), one has \(z=1\) for all continuous transitions here.
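The bookkeeping implied by Eq. (14), Eq. (15), and Eq. (16) is easy to automate; the tiny snippet below (ours, purely illustrative) maps a pair \((\delta,\eta)\) onto the avalanche exponents and reproduces both the familiar mean-field values and the anomalous set encountered later at the \(T_{5}\) transition.

```python
def avalanche_exponents(delta, eta):
    """Avalanche exponents from the scaling relations Eqs. (14)-(16)."""
    tau = (1.0 + eta + 2.0 * delta) / (1.0 + eta + delta)    # Eq. (14)
    tau_t = 1.0 + delta                                       # Eq. (15)
    gamma = 1.0 + delta + eta                                 # Eq. (16)
    # crackling-noise consistency: gamma = (tau_t - 1)/(tau - 1)
    assert abs(gamma - (tau_t - 1.0) / (tau - 1.0)) < 1e-12
    return tau, tau_t, gamma

print(avalanche_exponents(1.0, 0.0))  # (1.5, 2.0, 2.0): mean-field DP and TDP values
print(avalanche_exponents(1.0, 2.0))  # (1.25, 2.0, 4.0): anomalous case with eta = 2
```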
Similarly, the survival probability exponent (whose scaling behavior was determined in [60]) always takes a value \(\delta=1\) for all the continuous transitions studied here, implying that \(\tau_{t}=2\) (see Eq. (15)) is conserved across transitions. Finally, the exponent \(\eta\) is expected to vanish for all mean-field transitions (for which there is no "anomalous dimension" [3]). However, remarkably, here we report on a possible exception to this general rule (\(\eta=2\)) for one of the "anomalous" transitions. ### \(T_{1}\): Directed Percolation (Transcritical bifurcation) \(T_{1}\) corresponds to a transcritical bifurcation, describing a continuous transition between the standard quiescent and active phases. As discussed in Section I, guided by universality principles, one expects it to lie in the usual (mean-field) directed percolation universality class (DP) [1; 3; 4; 6; 7]. Indeed, this is the case, as explicitly shown in what follows. Transcritical bifurcations occur when the quiescent steady state loses its local stability, i.e., Eq. (8). Expanding Eq. (3) and Eq. (4) in power series of \(E^{*}\) and \(I^{*}\), one finds: \[E^{*}(\Delta;h=0)=\frac{(\alpha+w_{II})^{3}}{(\alpha+w_{II})^{3}-w_{EI}w_{IE}^ {2}}\Delta+\mathcal{O}(\Delta^{2})\;, \tag{18}\] from which \(\beta=1\) follows. The introduction of an external field \(h\) smooths out the transition (as illustrated with dashed lines in Fig. 4a). Hence, expanding the fixed point in powers of \(h\), at \(\Delta=0\), yields: \[E^{*}(h;\Delta=0)=\sqrt{\frac{(\alpha+w_{II})^{2}(w_{EI}-\alpha)h}{\alpha[w_{ IE}w_{EI}^{2}-(w_{II}+\alpha)^{3}]}}+\mathcal{O}(h), \tag{19}\] so that \(\delta_{h}=2\) (see Fig. 5a). Similarly, one can derive the solution for \(I(t)\) and, by expanding it in a power series, obtain \(I(t)\approx[w_{IE}/(w_{II}+\alpha)]E(t)\). It is, thus, convenient to define two new variables: \(\Sigma\) and \(\Lambda\), as the weighted linear combinations: \[2\Sigma = w_{IE}\;E+(w_{II}+\alpha)I\;, \tag{20}\] \[2\Lambda = w_{IE}\;E-(w_{II}+\alpha)I\;, \tag{21}\] \begin{table} \begin{tabular}{l c c c} \hline \hline & DP & TDP & H+TDP \\ \hline Codim. & 1 & 2 & 3 \\ \hline \hline \(\mathbf{\beta}\) & 1 & 1/2 & 1/2 \\ \(\mathbf{\delta}_{h}\) & 2 & 3 & 2 \\ \(\mathbf{\theta}\) & 1 & 1/2 & 1 \\ \(\mathbf{\delta}\) & 1 & 1 & 1 \\ \(\mathbf{\eta}\) & 0 & 0 & 2 \\ \(\nu_{\parallel}\) & 1 & 1 & 1 \\ \(\mathbf{\tau}\) & 3/2 & 3/2 & 5/4 \\ \(\mathbf{\tau}_{t}\) & 2 & 2 & 2 \\ \(\mathbf{\gamma}\) & 2 & 2 & 4 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of mean-field exponents in terms of which the mean-field dynamics (Eq.(3) and Eq. (4)) is rewritten in a simpler form: \[2\dot{\Sigma}{\approx}\Delta\Sigma{+}(2w_{EE}{+}2w_{II}{+}\Delta) \Lambda{+}\mathcal{O} \tag{22}\] \[2\dot{\Lambda}{\approx}\Delta\Sigma{-}[2(w_{EE}{+}w_{II}{-}2 \alpha){+}\Delta]\Lambda{+}\mathcal{O}, \tag{23}\] where \(\mathcal{O}=\mathcal{O}(\Sigma^{2},\Lambda^{2},\Sigma\Lambda,...)\) stands for higher-order terms. Observe that, right at the transition (\(\Delta=0\)), the stability matrix around the origin is \[A=\left(\begin{array}{cc}0&w_{EE}^{T}+w_{II}\\ 0&w_{EE}^{T}-w_{II}-2\alpha\end{array}\right)\;, \tag{24}\] which has only one vanishing eigenvalue, while the second one is strictly negative at criticality for \(T_{1}\) transitions. This means that \(\Lambda\) decays exponentially fast, therefore, it is an _irrelevant_ field for scaling. 
Only one "slow mode" or "relevant field" exists, \(\Sigma\), and -- as theoretically predicted in Grinstein _et al._[61] for these conditions -- the scaling behavior should coincide with standard DP. In particular, at the transition point --where the linear term of Eq. (22) vanishes-- the quadratic term dominates and therefore \(\Sigma(t)\propto t^{-1}\), so that \(\theta=1\) (as numerically confirmed in Fig. 6a). Considering the previous three independent exponent values, one can already conclude that the \(T_{1}\) transition actually belongs in the DP universality class (see Table 1). Nevertheless, for the sake of completeness, the survival probability and the total number of particles, at \(\Delta=0\), are confirmed to scale with spreading-exponent values \(\delta=1\) (Fig. 7a) and \(\eta=0\) (Fig. 8a), respectively, as expected for the DP class. We have also confirmed the consistency with DP by numerically analyzing the statistics of avalanches at the transition, revealing exponent values compatible with the DP predictions \(\tau=3/2\), \(\tau_{t}=2\), and \(\gamma=2\) (see the distributions of sizes \(S\), durations \(T\), and average sizes as a function of durations in Figs. 9a, 9d, and 9g, respectively). Moreover, the averaged avalanche shape is approximately an inverted parabola throughout the \(T_{1}\) line (Figs. 10a and 10b), collapsing for different durations with \(\gamma=2\), even if with some asymmetry (see Section V for a more in-depth discussion on avalanche shapes). Figure 4: Order parameter as a function of the control parameter, \(\Delta=w_{EE}-w_{EE}^{*}\), across the eight possible types of transitions as represented in Fig 3. In all plots revealing continuous transitions, the dot-dashed blue curves correspond to asymptotic behavior \(E^{*}(\Delta;h=0)\sim\Delta^{\beta}\), with the corresponding values of the \(\beta\) exponent. The shaded grey areas correspond to bistability between active and standard quiescent phases and the pink shaded area between active and excitable quiescent phases. The green shaded areas represent the excitable quiescent phase (same colors as Fig. 3). (a) At \(T_{1}\) (\(w_{IE}=3\)), the system exhibits a second-order transition from the standard quiescent to the active phase, consistently with the directed-percolation (DP) universality class (\(E^{*}\sim\Delta^{1}\) for \(\Delta\geq 0\)). (b) \(T_{2}\) (\(w_{IE}=3\)) is also a continuous phase transition occurring through a tricritical point and is consistent with the tricritical directed percolation (TDP) universality class (\(E^{*}\sim\Delta^{1/2}\)). (c, d) In contrast, \(T_{3}\) and \(T_{4}\) are first-order or discontinuous phase transitions with coexistence between an active phase and one of two possible kinds of quiescence (\(w_{IE}=3\)): first, a standard quiescent state (grey shaded area) and second an excitable quiescent state (pink shaded areas). (e) Case B (\(w_{IE}=1\)) allows for a special tricritical transition (\(T_{5}\)) occurring through a Hopf-tricritical point (\(E^{*}\sim\Delta^{1/2}\)). (f, g, h) In case C (\(w_{IE}=0.8\)), both \(T_{6}\) and \(T_{7}\) are continuous phase transitions when \(h=0\), with \(E^{*}\sim\Delta^{1}\) and \(T_{8}\), \(E^{*}\sim\Delta^{1/2}\), respectively. However, once a non-vanishing external field \(h\neq 0\) is introduced (dash-dotted and dashed lines), there is bistability driven by the external field (for more details see grey shaded areas in Figs. 5f-h). Parameter values are set as in Fig 3. 
Thus, in summary, at the line of transcritical bifurcations (\(T_{1}\)) separating a standard quiescent from the active phase, the Wilson-Cowan stochastic model exhibits a genuine critical point in the DP class, a result that is consistent with recent analyses of de Candia _et al._[34] for their specific choice of parameter values. ### \(T_{2}\): Tricritical Directed Percolation (Saddle-node-transcritical bifurcation) The tricritical point in case A (see Fig. 3a) corresponds to a saddle-node transcritical (SNT) bifurcation -- i.e., where the lines of transcritical and saddle-node bifurcations intersect without further degeneracies [62; 63]. Thus, in order to tune to this transition point one needs to set two parameters in the phase diagram (\(w_{EE}\), \(w_{EI}\)), as explicitly calculated in Appendix VI.2. An analysis in terms of the fields \(\Sigma\) and \(\Lambda\) (similar to the previous case) shows that there is only one vanishing eigenvalue at the transition, and, thus, the second field is irrelevant for scaling. Therefore, \(T_{2}\) is expected to be described by the mean-field tricritical directed percolation universality class (TDP) [13]. Indeed, considering the leading-order term in a power expansion in both \(\Delta\) and \(h\), one has: \[E^{*}(\Delta,\!h\!=\!0)\!\approx\!\!\sqrt{\frac{\Delta}{w_{IE}}} \!+\!\mathcal{O}(\Delta), \tag{25}\] \[E^{*}(h,\!\Delta\!=\!0)\!\approx\!\!\left[\frac{3\big{[}w_{IE}^{ 2}\!-\!(\alpha\!+\!\!w_{II})^{2}\big{]}h}{w_{IE}^{2}[(\alpha^{2}\!-\!3)w_{IE}\! -\!(\alpha^{2}\!+\!3)\alpha]}\right]^{\frac{1}{3}}\!+\!\mathcal{O}\!\left(h^{ \frac{1}{2}}\right) \tag{26}\] from where \(\beta=1/2\) (Fig. 4b) and \(\delta_{h}=3\) (Fig. 5b), as expected for the TDP universality class. At the transition, the lowest order correction of Eq. (22) in \(\Sigma\) is \(\mathcal{O}(\Sigma^{3})\), so that asymptotically \(\Sigma\propto t^{-1/2}\) and, hence, \(\theta=1/2\), as numerically confirmed in Fig. 6b. Once again, considering the linear relationship between \(E(t)\) and \(I(t)\), both densities share this scaling. Finally, the exponent for the survival probability remains \(\delta=1\) (see Fig. 7b), \(\eta=0\) (see Fig. 8), \(\tau=3/2\) (Fig. 9b), \(\tau_{t}=2\) (Fig. 9e), and \(\gamma=2\) (Fig. 9h), all of which are consistent with the expected values in the TDP class (see Table 1). ### \(T_{3}\): Standard discontinuous transition (Saddle-node bifurcation) The line of saddle-node bifurcations (see Fig. 3a), Eq. (11), defines the third type of transition, \(T_{3}\), to go from a standard quiescent state to the active phase. This type of transition is characterized by a discontinuous Figure 5: Order parameter as a function of the external field right at the transition (\(\Delta=0\)). Observe that three out of the eight types of transitions described here exhibit power law scaling with the external field, i.e., \(E^{*}(h;\Delta=0)\sim h^{1/\delta_{h}}\). (a) For \(T_{1}\), \(\delta_{h}=2\), consistently with the DP universality class. (b) For \(T_{2}\), \(\delta_{h}=3\), consistently with TDP. (c, d) For \(T_{3}\) and \(T_{4}\), the order parameter’s response to an external field shows bistability (shaded area). (e) Transition \(T_{5}\) (H+TDP), differently from the usual tricritical transition (TDP), scales with \(\delta_{h}=2\). (f, g, h) Remarkably, for \(T_{6}\), \(T_{7}\), and \(T_{8}\), contrary to the behavior with \(h=0\), the order parameter becomes bistable (shaded area) as the external field increases. Parameter values as in Fig. 3. 
jump in the order parameter and includes an intermediate regime of bistability, where both the active and the standard quiescent state are stable (see Figs. 3a and 4c). The regime of coexistence lasts until, at a second bifurcation, the quiescent phase loses its local stability. Given that the transition is discontinuous, the exponents \(\beta\) and \(\delta_{h}\) are not properly defined (Fig 5c). Similarly, neither the activity nor the survival probability decay to 0 for initial conditions in the basin of attraction of the active phase (see Figs. 6c and 7c), so that the exponents \(\theta\) and \(\delta\) are not well-defined either. Thus, in summary, the \(T_{3}\) transition is just a standard first-order or discontinuous transition into a quiescent state [5; 13]. ### \(T_{4}\): Discontinuous transition from an excitable quiescent state (Saddle-node bifurcation) A scenario very similar to \(T_{3}\) occurs at \(T_{4}\), which appears in all three possible phase diagrams (A, B, and C; see Fig 3). Transition \(T_{4}\) is also discontinuous with phase coexistence, but it differs from \(T_{3}\) in the fact that -- as illustrated in Fig. 4d -- the quiescent phase that coexists with the active one in the regime of bistability is of the excitable type, rather than the standard one. For the same reasons as in \(T_{3}\), none of the critical exponents is well defined (see Figs. 5d, 6d, and 7d). Thus, in summary, \(T_{4}\) is a discontinuous transition with bistability, but with the peculiarity of having an excitable quiescent state coexisting with the active one. ### \(T_{5}\): Hopf Tricritical Directed Percolation (Hopf saddle-node-transcritical bifurcation) As illustrated in Fig. 3b, Case B exhibits a _codimension 3_ bifurcation point at which the tricritical point (codimension 2) and the line of Hopf bifurcations meet. This transition occurs only in case B, for the particular choice of parameters for which the vertical line of Hopf bifurcations ends up exactly at the tricritical point, \(w_{IE}^{*}=\alpha+w_{II}\), as derived from Eq. (12) and Eq (39) in Appendix B. Using this constraint, one can easily find that the location of the \(T_{5}\) point is specified by the following set of conditions (see Appendix B): \(w_{EE}=2\alpha+w_{II}\) and \(w_{EI}=w_{IE}^{*}\). Let us first write the stationary solutions of the dynamical equations, Eq. (3) and Eq. (4), up to leading order in \(\Delta\) at vanishing \(h\) and, also, up to leading order in \(h\) at vanishing \(\Delta\), i.e., \[E^{*}(\Delta;h=0) \approx\sqrt{\frac{\Delta}{\alpha+w_{II}}}+\mathcal{O}(\Delta)\;, \tag{27}\] \[E^{*}(h;\Delta=0) \approx\sqrt{\frac{h}{3(\alpha+w_{II})^{2}}}+\mathcal{O}(h)\;. \tag{28}\] These imply \(\beta=1/2\) (as illustrated in Fig. 4e) and \(\delta_{h}=2\) (see Fig. 5e). Note that \(\beta\) coincides with its counterpart Figure 6: Order parameter time series for the eight types of transitions. Observe that only three of them exhibit dynamical scaling \(E(t)\propto t^{-\theta}\). (a) \(T_{1}\) exhibits an asymptotic power law decay with the expected DP value \(\theta=1\). (b) \(T_{2}\) shows a slower asymptotic time decay in the TDP class, \(\theta=1/2\). (c) For \(T_{3}\), the saddle-node bifurcation gives rise to bistability between an active and a quiescent phase. (d) \(T_{4}\) behaves very similarly to \(T_{3}\) but frustrated oscillations drive the system more easily to regions II and III, so the bistability is between active and excitable quiescent phases. 
(e) \(T_{5}\) is a genuine second-order phase transition with \(\theta=1\). (f,g,h) \(T_{6}\), \(T_{7}\), and \(T_{8}\) are not genuine continuous transitions and show no signatures of dynamic scaling, but rather an exponential decay to quiescence. Parameter sets as in Fig. 3. for TDP (as expected for a tricritical-like point) but, curiously enough, \(\delta_{h}\) does not; it instead coincides with its value in the DP class. Therefore, the static exponents at \(T_{5}\) do not fully comply with either of the well-known universality classes. To make further progress, it is convenient to write the equations for \(\Sigma(t)\) and \(\Lambda(t)\) for case B, as defined in Eq. (22) and Eq. (23): \[2\dot{\Sigma}(t) =\Delta(\Sigma+\Lambda)+4(w_{II}+\alpha)\Lambda-\frac{2\alpha}{w_ {II}+\alpha}\Sigma^{2}-4\Sigma\Lambda\] \[\quad-\frac{2\alpha}{w_{II}+\alpha}\Lambda^{2}+\mathcal{O} \tag{29}\] \[2\dot{\Lambda}(t) =\Delta(\Sigma+\Lambda)-4\Lambda^{2}-\frac{4\alpha}{w_{II}+ \alpha}\Sigma\Lambda+\mathcal{O} \tag{30}\] where \(\mathcal{O}\equiv\mathcal{O}(\Sigma^{3},\Sigma^{2}\Lambda,\Sigma\Lambda^{2}, \Lambda^{3},\Delta\Sigma^{2}...)\) stands for higher-order terms and time dependences have been omitted for simplicity. Moreover, right at the transition point (\(\Delta=0\)), the dynamics simplifies to \[\dot{\Sigma}(t) = 2(w_{II}{+}\alpha)\Lambda{-}\frac{\alpha}{(w_{II}{+}\alpha)} \Sigma^{2}{+}\mathcal{O} \tag{31}\] \[\dot{\Lambda}(t) = -2\Lambda^{2}{-}\frac{2\alpha}{w_{II}{+}\alpha}\Lambda{\Sigma+ \mathcal{O}}. \tag{32}\] where the consistency of the truncation of higher-order terms will be justified a posteriori. In particular, observe that the stability matrix around the origin becomes \[A\propto\left(\begin{array}{cc}0&w_{II}+\alpha\\ 0&0\end{array}\right) \tag{33}\] so that the null eigenvalue is degenerate and, thus, an anomalous type of scaling is to be expected. Indeed, the previous matrix is characteristic of a Bogdanov-Takens bifurcation [52], which has been already discussed in the context of Wilson-Cowan models [41; 37] and, more in general, in the analysis of non-normal or _non-reciprocal phase transitions_[64]. It is important to observe that the only linear term in the first equation, Eq.(31), \(2(w_{II}+\alpha)\Lambda(t)\), has a positive coefficient. This implies that at criticality \(\Lambda(t)\) needs to decay to zero faster than \(\Sigma(t)\) as otherwise the overall right-hand-side would be positive asymptotically in time (which cannot possibly happen at criticality). Therefore, given that \(\Sigma(t)\) needs to decay slower than \(\Lambda\), the slowest-decaying non-linear term in Eq.(31) is the one proportional to \(-\Sigma^{2}\). Knowing that asymptotically, \(\dot{\Sigma}\propto-b\Sigma^{2}\), with \(b=\alpha/(w_{II}+\alpha)\) one readily finds that \(\Sigma\sim t^{-1}/b\) and, therefore, \(\theta=1\) (Fig. 6e). Finally, plugging this result into the second equation, Eq.(32), comparing constants and exponents, one readily finds that \(\Lambda\sim t^{-2}\). Observe that, indeed, as anticipated, \(\Lambda(t)\) decays faster than \(\Sigma(t)\): \(\Lambda\sim\Sigma^{2}\), which justifies the truncation of higher-order terms in the previous equations. Using these observations one concludes that, right at the transition point \(\Delta=0\), the dynamical scaling is con Figure 7: Survival probability as a function of time. The survival probability at second-order phase transitions scales as \(P_{s}(t)\propto t^{-\delta}\). 
Black dots stand for the numerical simulations (for the same parameters as Fig. 3 and \(N=10^{8}\)) and dashed lines show the corresponding exponent value. (a) \(T_{1}\) belongs to the DP universality class, i.e., \(\delta=1\). (b) \(T_{2}\) belongs to TDP so that \(\delta=1\) (same as DP). (c, d) For the first-order phase transitions, the system’s survival probability converges to a non-vanishing value as \(t\to\infty\) due to the possibility of being attracted to the active phase. (e) For \(T_{5}\), \(\delta=1\) as in the previous continuous transitions, but with stronger finite-size effects. (f) For \(T_{6}\), the system shows a behavior similar to \(T_{5}\): a decay with \(\delta=1\) and strong finite-size effects. (g, h) The survival probability shows a peculiar behavior of several sharp decays with some small plateaus, which stem from the excitability of the quiescent phase. sistent with DP because, since \(\Lambda\) decays faster, it does not influence the decay of \(\Sigma\) (dominated by a quadratic term). This result is surprising as we are dealing with a tricritical point so, a priori, one would expect TDP-like scaling. The situation is different away from the critical point (\(\Delta>0\)). In this case, it is convenient to focus on the equation for \(\dot{\Lambda}\), Eq. (30). At stationarity, the linear positive term (proportional to \(\Delta\)) needs to cancel with the leading non-linear one. A priori, the linear positive term is either the one proportional to \(\Delta\Sigma\) or the one proportional to \(\Delta\Lambda\), depending on the scaling dimensions of \(\Sigma\) and \(\Lambda\). Note that both yield that \(\Lambda\) scales as \(\Lambda\propto\Delta\). Now, focusing on the first equation (Eq. (29)), the leading positive term is \(4(w_{II}+\alpha)\Lambda\) (which scales as \(\Delta\), while \(\Delta(\Sigma+\Lambda)\) is a higher-order contribution). This leading term needs to be comparable with the leading negative term, which is the one proportional to \(-\Sigma^{2}\) (note that the other possibility, \(-4\Sigma\Lambda\), leads to a fixed value of \(\Sigma\) that does not change/scale with \(\Delta\) and, therefore, it is not a solution). The resulting scaling renders \(\Lambda\sim\Sigma^{2}\), which is consistent with the temporal scaling. And, then, one derives \(\Sigma\sim\Lambda^{1/2}\sim\Delta^{1/2}\), i.e., \(\beta=1/2\) (while the field \(\Lambda\) scales with an exponent \(\beta_{\Lambda}=1\)). Therefore, since (i) the order-parameter exponent is that of the TDP class, \(\beta=1/2\), (ii) the time-decay exponent \(\theta=1\) differs from its TDP value, and (iii) \(\nu_{\parallel}=1\) (as it is the case for all mean-field transitions), then it follows that \[\theta\neq\beta/\nu_{\parallel}\;, \tag{34}\] which violates one of the basic scaling relations in systems with quiescent states, i.e., Eq.(17). Let us remark that a similar violation of scaling was found by Noh and Park [65] in a model with quiescent states and two relevant fields: an "excitatory" and a "repressing" one. In both cases --here and [65]-- the breakdown of scaling stems from the non-trivial interplay between these two fields with opposing effects. Similarly to the other transitions, the survival probability decays with an exponent \(\delta=1\), albeit with a higher sensitivity to system size (see Fig. 7e). Also, consistently with the scaling relation \(\tau_{t}=\delta+1\)[55], the avalanche distribution of durations scales with the same exponent as in DP and TDP, \(\tau_{t}\approx 2\) (Fig. 
9f). In contrast with the rest of the second-order phase transitions (see Fig. 8a), the growth of the total activity in spreading experiments right at criticality is \(N(t)\sim t^{2}\), yielding \(\eta=2\) (Fig. 8b). This observation is rather surprising for a mean-field model as most mean-field universality classes are characterized by \(\eta=0\) (i.e., absence of an "anomalous dimension" [3; 8]). In order to shed some light on this result, let us observe that the linearized dynamics at criticality -- controlled by the normal form of a Bogdanov-Takens bifurcation, Eq. (33) -- is such that a small initial perturbation can be largely amplified before decaying back to quiescence, i.e., around the fixed point the system is excitable. In particular, if the perturbation consists of a single seed (as in spreading experiments), the number of active sites in surviving runs grows in a deterministic way until a maximum size is reached, and, then, the asymptotic decay toward the quiescent state (controlled by the exponent \(\theta\)) begins. This initial deterministic growth -- which does not occur in the DP nor TDP classes -- is expected to be responsible for the anomalous value of \(\eta\). More specifically, observe that the density \(\Sigma\) at first grows linearly -- as \(\dot{\Sigma}\propto\Lambda\) and \(\Lambda\) can be approximately taken as a constant because its negative eigenvalue vanishes [Eq. (31) and Eq. (32)]. Furthermore, the total number of active sites \(N(t)\) is equal to the density times an additional "volume factor", which, owing to the deterministic expansion, grows linearly in time. Therefore, one concludes that \(N(t)\sim\Sigma(t)t\sim t^{2}\), which yields \(\eta=2\). Given this anomalous value and using the general scaling relations described before, one can infer other exponent values. In particular, Eq. (14) predicts \(\tau=5/4\) and Eq. (16) leads to \(\gamma=4\), which are both unusual/anomalous exponents in mean-field theories. We numerically verified both of these results; scaling compatible with \(\tau\approx 5/4\) can be observed in Fig. 9c, and with \(\gamma\approx 4\) (see Fig. 9i). This latter value also gives an excellent data collapse for \(P(S|T)\) (Figs. 11a and 11b) and is consistent with the scaling relation between size and duration cutoffs (Fig. 11c) [66; 67; 68]. In summary, the \(T_{5}\) transition defines a thus-far unknown universality class, which we named Hopf Tricritical Directed Percolation (H+TDP). In its mean-field variant, it has a set of exponents that do not match either DP or TDP universality classes (Table 1), violates at least one scaling relation, includes some anomalous exponent values, and produces highly asymmetrical avalanche shapes, as we shall show in a separate section. A more systematic and rigorous derivation of these re Figure 8: Mean number of particles \(N(t)\) in spreading experiments in bonafide continuous phase transitions (i.e., \(T_{1}\), \(T_{2}\), and \(T_{5}\)), at which one expects \(N(t)\sim t^{\eta}\). Simulations with the same parameters as Fig. 3 with \(N=10^{8}\) [panel (a)] and \(N=10^{8}\) and \(N=10^{10}\) [panel (b)]. (a) For \(T_{1}\) and \(T_{2}\), we obtain results compatible with \(\eta=0\), as expected for DP and TDP as well as, in general, for mean-field theories. (b) On the other hand, for \(T_{5}\) we obtain the unusual result \(\eta=2\), with strong finite-size effects. sults, together with a field-theoretic discussion of this universality class will be presented elsewhere. 
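A quick way to see the anomalous \(\theta=1\) decay at \(T_{5}\), without running stochastic simulations, is to integrate the deterministic equations, Eq. (3) and Eq. (4), right at the Hopf-tricritical point of case B (\(\alpha=1\), \(w_{II}=0\), \(w_{IE}=w_{EI}=1\), \(w_{EE}=2\), \(h=0\)). The sketch below (our own illustration; SciPy is assumed to be available) starts from the balanced initial condition \(E(0)=I(0)=0.1\), which lies in region I, and estimates the effective decay exponent, which drifts towards the DP-like value \(\theta=1\) rather than the TDP value \(1/2\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def phi(s):
    """Response function of Eq. (1)."""
    return np.tanh(s) if s > 0.0 else 0.0

def wilson_cowan_rhs(t, y, wEE, wEI, wIE, wII, alpha, h):
    """Mean-field Wilson-Cowan equations, Eq. (3) and Eq. (4)."""
    E, I = y
    dE = -alpha * E + (1.0 - E) * phi(wEE * E - wEI * I + h)
    dI = -alpha * I + (1.0 - I) * phi(wIE * E - wII * I + h)
    return [dE, dI]

# Hopf-tricritical (T5) point of case B: wEE = 2*alpha + wII and wEI = wIE = alpha + wII
params_T5 = (2.0, 1.0, 1.0, 0.0, 1.0, 0.0)
t_eval = np.logspace(-1, 4, 200)
sol = solve_ivp(wilson_cowan_rhs, (0.0, t_eval[-1]), [0.1, 0.1],
                args=params_T5, t_eval=t_eval, rtol=1e-10, atol=1e-12)

# Effective exponent theta_eff = -d log E / d log t approaches 1 at late times,
# instead of the value 1/2 expected at an ordinary tricritical (TDP) point.
theta_eff = -np.gradient(np.log(sol.y[0]), np.log(sol.t))
print(theta_eff[::40])
```

The piecewise response function causes no difficulty here because the balanced trajectory remains inside region I, where \(\Phi\) is smooth.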
### \(T_{6}\): Hopf-transcritical bifurcation. This type of continuous transition (see Fig. 4f) occurs when the line of Hopf bifurcations collides with the transcritical line (see Fig. 3c) and appears only in case C. This transition is peculiar in that at the critical point -- independently of the initial conditions -- the trajectories are attracted to region II (and, possibly, III; see Fig. 1b). This occurs because the Hopf bifurcation overrides the transcritical bifurcation and the elicited frustrated oscillations drive the system toward the switching manifold and, thus, into region II. Once in region II, the excitatory density decays exponentially fast, dragging down the system without signatures of scaling; e.g., the time-decay exponent \(\theta\) is not defined for \(T_{6}\) (see Fig. 6f). At the transition point (as specified by Eq. (12) and Eq. (13)), one can rewrite Eq. (18) and Eq. (19) as: \[E^{*}(\Delta;h=0) \approx \frac{\alpha+w_{II}}{(\alpha+w_{II}-w_{IE})}\Delta+\mathcal{O}( \Delta^{2}) \tag{35}\] \[E^{*}(h;\Delta=0) \approx \sqrt{\frac{[(w_{II}+\alpha)^{2}-\alpha w_{IE}]h}{\alpha w_{IE}[ w_{IE}-w_{II}-\alpha]}}+\mathcal{O}(h) \tag{36}\] On the one hand, Eq. (35) holds because, for \(h=0\), there is a stable equilibrium in region I for \(\Delta\geq 0\), which attracts the trajectories, preventing them from falling into region II. Observe that in case C, \(w_{IE}<w_{II}+\alpha\) --as the condition at the interface between case A and case C is \(w_{IE}=w_{II}+\alpha\)-- and, therefore, while Eq. (35) is valid and yields \(\beta=1\), Eq. (36) is misleading since the denominator is negative inside the square root so that it does not correspond to a real solution. The reason is that the previous equations are based on the naive linearisation of the dynamical equations, assuming the non-vanishing Figure 9: Avalanche analysis. Simulations using Gillespie’s algorithm for the same parameters as in Fig. 3 and network size \(N=10^{8}\). In this case, we report results only for true (or bonafide) continuous phase transitions, for which scale-free avalanches emerge, i.e., \(T_{1}\), \(T_{2}\), and \(T_{5}\). For the \(T_{1}\) transition, one obtains results as expected for DP: (a) \(\tau\approx 3/2\), (d) \(\tau_{t}\approx 2\), and (g) \(\gamma\approx 2\). For \(T_{2}\), the system behaves consistently with TDP: (b) \(\tau\approx 3/2\), (e) \(\tau_{t}\approx 2\), and (h) \(\gamma\approx 2\), i.e., TDP and DP share the same avalanche exponents. Finally, for the \(T_{5}\) transition, (c) \(\tau\approx 5/4\), (f) \(\tau_{t}\approx 2\), and (i) \(\gamma\approx 4\). part of the response function \(\Phi\). However, this assumption is invalid in the present case. Thus, for Eq. (36) and \(\Delta=0\), since trajectories fall into region II and \(E\) decays to zero exponentially fast, the absorbing state remains stable even as \(h\) increases from zero. Therefore, the exponent \(\delta_{h}\) is not well defined for \(T_{6}\). Nevertheless, further increasing the external field \(h\) eventually leads to a saddle-node bifurcation and a discontinuity in the order parameter (Fig. 5f). We have also verified that the system's survival probability seems to decay in time with \(\delta=1\) (Fig. 7f) as in all other transitions, even if (similarly to \(T_{5}\)) with strong finite-size effects. In conclusion, \(T_{6}\) exhibits a mixture of signatures of both continuous and discontinuous transitions. 
### \(T_{7}\) and \(T_{8}\): Continuous transitions from quiescent-excitable to active states The transitions represented by \(T_{7}\) and \(T_{8}\) happen between the quiescent excitable state and the active state (only in case C, as illustrated in Fig. 3c). The first occurs through a transcritical-like bifurcation (black dashed lines in Fig. 3) and the second through a tricritical (or saddle-node-transcritical) point. Thus, these two are the counterparts of \(T_{1}\) and \(T_{2}\), respectively, for excitable -- rather than standard-- quiescent states. Let us recall that, as explained above, a naive linearization of the quiescent excitable state (assuming \(\Phi>0\)) yields eigenvalues with a non-vanishing imaginary part; in any case, the quiescent state remains stable due to frustrated oscillations that draw the system into the regions II and, possibly, III (Fig. 1b). Observe that Eq.(18) and Eq. (25) remain unchanged for \(T_{7}\) and \(T_{8}\), respectively. Therefore, the order parameter changes continuously with the control parameter with \(\beta=1\) and \(\beta=1/2\), respectively (Fig. 4g and 4h). Figure 10: Skewness and mean temporal profile of avalanches. Enlargements of Fig. 3 illustrate the excursion in parameter space (a) along a transcritical line to a tricritical point and (d) along a tricritical line to a Hopf-tricritical point (by decreasing the parameter \(w_{IE}\), the phase diagram goes from case A to case B, which is illustrated in the inset). (b) and (e) show the corresponding rescaled mean temporal profiles (avalanche shape collapses). (c) The skewness of the previous curves [in panel (b)] slightly decreases as the tricritical point is approached (i.e., as the difference \(w_{EI}-w_{IE}\) increases). (e) As the system approaches the H+TDP transition, the curves of the mean temporal profile become progressively less symmetrical, as assessed by the skewness of the curves (f). Simulations with the same parameters as Fig. 3 and \(N=10^{8}\). However, similarly to transition \(T_{6}\), the denominators of Eq. (19) and Eq. (26), governing the response to an external field \(h\) at the transition point, are negative and the system asymptotically reaches region II, i.e., converges quickly to quiescence. Thus, the response to an external field coincides with that of \(T_{6}\), and the exponent \(\delta_{h}\) is not well-defined for either \(T_{7}\) or \(T_{8}\) as there is a discontinuous jump in the order parameter (see Figs. 4g and 4h as well as Figs. 5g and 5h). On the one hand, also as in \(T_{6}\), the asymptotic dynamics of the order parameter in the \(T_{7}\) and \(T_{8}\) transitions exhibit an exponential time decay (Figs. 6g and 6h). On the other hand, the overall behavior of the survival probability, for \(T_{7}\) and \(T_{8}\), shows an intermediate plateau, as opposed to the smooth decay to zero observed for \(T_{6}\) (Figs. 7f-h). These plateaus stem from the fact that the excitable quiescent phase is well-established before the transitions take place (in opposition to what happens in \(T_{6}\)). Thus, in summary, the \(T_{7}\) and \(T_{8}\) transitions also exhibit a mixture of features of continuous and discontinuous phase transitions. 
## V The average shape of avalanches The scaling of the mean avalanche shape (also called "mean temporal profile") of avalanches - i.e., the fact that the averaged shape of avalanches with different sizes and durations can be collapsed into a single curve by using the adequate value of critical exponents -- has been used as a signature of criticality in non-equilibrium systems with absorbing states [69]. As already mentioned, the DP and TDP universality classes are known to typically have symmetric inverted parabolas mean-temporal profiles of avalanches, a consequence of time-reversal symmetry [70]. The asymmetry in the mean temporal profile, when found, reflects a break in such symmetry [71]. In the present Wilson-Cowan model, we observed that, when the transition to quiescence occurs in the neighborhood of \(T_{5}\) (or H+TDP) point in parameter space, avalanche shapes exhibit a non-trivial behavior. In particular, as illustrated in Fig. 10a, when one follows the DP (\(T_{1}\)) line towards the TDP (\(T_{2}\)) point in case A, the mean temporal profile of avalanches acquires only a slight asymmetry (Fig. 10b), as quantified by the increase in the absolute value of its skewness (Fig. 10c). This observation agrees with recent results that show that the introduction of an inhibitory population causes a small tilt on the mean temporal profile at the DP transition [34; 35; 37]. However, remarkably, studying the system at the TDP transition as the overall parameters transition from case A to case B (Fig. 10d), the avalanche mean-temporal-profile becomes progressively more and more asymmetric, with its skewness reaching a maximal absolute value -- i.e., maximal asymmetry as illustrated in Fig. 10f-- at the H+TDP transition (Fig. 10e); see also [37]). ## VI Conclusions By means of detailed scaling analyses as well as extensive numerical simulations, we have thoroughly analyzed all possible types of phase transitions between active and quiescent phases that the Wilson-Cowan model exhibits. On the one hand, under some conditions, the model exhibits the standard phenomenology of systems with quiescent/absorbing states, i.e., two phases (active and quiescent) as well as a phase transition separating them. This transition can be a continuous one (in the mean-field directed-percolation class), a discontinuous one with hysteresis, or a tricritical transition in the tricritical-directed-percolation class, at the point where the previous two types of transitions meet [4; 5; 13]. On the other hand, a key feature of the mean-field Wilson-Cowan model is that -- in addition to the standard active and quiescent phases -- there is another "excitable quiescent" phase. In particular, the phase diagram describing the model at a mean-field level exhibits a line of Hopf bifurcations to the left of which there is a standard quiescent state, while, to the right of it, the convergence towards the quiescent state occurs in an oscillatory way (involving complex eigenvalues). Such pseudo-oscillations are nevertheless"frustrated" as the system enters inhibition-dominated regions of the state space (region II or region III in Fig. 1) and, then, activity decays exponentially fast to zero. Observe that in this regime, owing to the non-normality of the stability matrix (see below), small perturbations to quiescence can be transiently amplified, before decaying back again to quiescence, hence the name "excitable quiescent" phase (or also, possibly "reactive" phase, see [28; 37; 72; 73; 54]). 
Both of the previous features -- i.e., the presence of a line of Hopf bifurcations and of an excitable-quiescent phase -- stem from the existence of an inhibitory field and cannot possibly appear in simpler models for activity propagation, such as directed percolation or the contact process which include only one field, describing the excitatory activity. These two ingredients are at the root of the enriched set of possible phase transitions that the system can exhibit with respect to standard ones. Our analyses reveal that the Wilson-Cowan model can exhibit three possible types of (bi-dimensional) phase diagrams (as illustrated in Fig. 3) that can be viewed as sections of a larger (three-dimensional) phase diagram. These three cases (A, B, and C) differ from one another in the relative position of the (vertical) line of Hopf bifurcations with respect to the tricritical point (and are controlled by a single parameter, \(w_{IE}\) in Fig. 3). Careful inspection of the three of them reveals the existence of 8 different types of phase transitions, labeled \(T_{1},T_{2},\ldots\), and \(T_{8}\), respectively. Three of them are usual ones separating active from standard quiescent phases and are well-known from the theory of phase transitions: (i) a (mean-field) directed-percolation (DP), continuous transition (\(T_{1}\)); (ii) a (mean-field) tricritical directed-percolation (TDP) (\(T_{2}\)); and (iii) a (mean-field) discontinuous transition with bistability and hysteresis (\(T_{3}\)). These three cases correspond to transcritical, saddle-node-transcritical, and saddle-node bifurcations, respectively and exhibit the expected features for their corresponding universality classes. In particular, let us remark that our results for the DP case (\(T_{1}\)) are consistent with those recently reported by de Candia _et al._[34]. These authors chose to study the case where \(w_{E}\equiv w_{EE}=w_{IE}\) and \(w_{I}\equiv w_{EI}=w_{II}\); for this choice of parameters, one is in the \(T_{1}\) case (actually, Eq. (8) becomes \(w_{E}-w_{I}=\alpha\), which is the condition for criticality in [34]). Similarly, for \(T_{2}\) our results reproduce the TDP class as first described in [13] (note that recent research has also considered the possibility of a tricritical point in neuronal models with a population of inhibitory units [74; 37]). Finally, for \(T_{3}\), we observe the standard features from discontinuous phase transitions into absorbing states, such as hysteresis [13]. Each of the previous three transitions has a counterpart in which the quiescent phase is not a standard one but a quiescent-excitable one: the twin of \(T_{1}\) is a transcritical bifurcation from the quiescent excitable state (labeled \(T_{7}\)), the twin of the tricritical \(T_{2}\) is \(T_{8}\), and the twin of \(T_{3}\) is a discontinuous transition, which exhibits bistability between the active and the quiescent-excitable states (labeled \(T_{4}\)). A peculiar feature of the continuous ones, i.e., \(T_{7}\) and \(T_{8}\), is that their response to an external field is anomalous: even if they are continuous transitions, once the field is introduced they become discontinuous. In other words, the addition of a small external field drives slightly active states to become quiescent (a phenomenon that stems from the excitability of the quiescent state). As a consequence, critical exponents such as \(\delta_{h}\) are not well-defined, so \(T_{7}\) and \(T_{8}\) share features of both continuous and discontinuous phase transitions. 
The remaining transitions are unusual and involve entering the active phase right at the point where a Hopf bifurcation also occurs (i.e., they correspond to higher-codimension bifurcations). In particular, \(T_{6}\) describes the situation in which the Hopf bifurcation falls on top of a transcritical bifurcation, whereas \(T_{5}\) occurs at the special point in which the Hopf bifurcation falls exactly on top of the tricritical point (only possible in case B). For \(T_{6}\), the transition is adjacent to the excitable-quiescent phase and, thus, one observes the same phenomenon as for \(T_{7}\) and \(T_{8}\), when introducing an external field. Even if the transition (\(T_{6}\)) is continuous, the response to a small external field is anomalous, giving rise to a discontinuity and preventing the exponent \(\delta_{h}\) to be well defined, so again, \(T_{6}\) exhibits features of both continuous and discontinuous phase transitions. Finally, \(T_{5}\) is by far the most interesting and less trivial transition. We have named it Hopf-tricritical directed percolation (H+TDP) transition as it occurs when the Hopf line collides with the tricritical point, giving rise to a codimension 3 transition. In this case, both eigenvalues of the stability matrix vanish at the transition point, so that the matrix has the normal form of a Bogdanov-Takens bifurcation. From the point of view of power-counting and dimensionality analyses, this fact has important implications, as carefully discussed above. In particular, a key aspect is that the scaling features are controlled by different terms (i) right at criticality and (ii) slightly in the active phase. This dichotomy entails a remarkable and surprising violation of some well-established scaling laws. It is noteworthy, though, that a breakdown of scaling in a somehow similar model -- including also a second inhibitory-like field -- has been Figure 11: Conditional probability of avalanche sizes given durations, \(P(S|T)\), in bonafide continuous phase transitions (i.e., \(T_{1}\), \(T_{2}\), and \(T_{5}\)). Numerical results obtained with Gillespie’s algorithm for the same parameters as Fig. 3; for (a) and (b), the system is poised at \(T_{5}\) with a network size of \(N=10^{8}\); and, for (c), each transition is simulated for sizes \(N=10^{4}\), \(10^{5}\), \(10^{6}\), \(10^{7}\), and \(10^{8}\). (a) \(P(S|T)\) collapses into a single curve when one sets \(\gamma=4\). (b) In contrast, for \(\gamma=2\) there is no curve collapse. Furthermore, in the inset, one observes that the peaks of \(P(S|T)\) (\(S^{*}\)) scale with \(S^{*}\sim T^{\gamma}\), with \(\gamma=4\). (c) Cut-off for size and duration distributions; one scales as a power of the other with the corresponding value of the exponent \(\gamma\)[67; 68]. recently reported by Noh and Park [65]. Another anomalous feature of \(T_{5}\) is that the exponent controlling the growth of the total number of particles in spreading experiments, \(\eta\), does not vanish, i.e., \(\eta=2\), as opposed to what happens in (mean-field) DP and most other mean-field phase transitions (as it is related with perturbative corrections to mean-field behavior [60]). Moreover, the scaling anomaly of the H+TDP transition is also reflected in its avalanche exponents: while the duration exponent \(\tau_{t}=2\) is consistent with DP and TDP, the size-distribution exponent \(\tau=5/4\) and crackling noise exponent \(\gamma=4\) are different from the usual ones (\(\tau=3/2\) and \(\gamma=2\), respectively). 
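The avalanche exponents quoted above can be checked directly on simulation output. The sketch below fits \(\tau\) and \(\tau_{t}\) from log-binned histograms of sizes and durations and \(\gamma\) from the conditional mean \(\langle S|T\rangle\sim T^{\gamma}\) (cf. Fig. 11); the binning choices are illustrative, and the last line only verifies arithmetically that the quoted H+TDP values \(\tau=5/4\), \(\tau_{t}=2\) are compatible with the standard crackling-noise combination \((\tau_{t}-1)/(\tau-1)=4=\gamma\).

```python
import numpy as np

def fit_power_law_tail(samples, s_min, s_max, n_bins=30):
    """Rough log-log fit of a distribution exponent (tau or tau_t) on log-spaced bins."""
    bins = np.logspace(np.log10(s_min), np.log10(s_max), n_bins)
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])
    keep = hist > 0
    slope, _ = np.polyfit(np.log(centers[keep]), np.log(hist[keep]), 1)
    return -slope

def crackling_exponent(sizes, durations, n_bins=20):
    """gamma from <S|T> ~ T^gamma, estimated by binning durations logarithmically."""
    sizes, durations = np.asarray(sizes, float), np.asarray(durations, float)
    bins = np.logspace(np.log10(durations.min()), np.log10(durations.max()), n_bins)
    idx = np.digitize(durations, bins)
    T_mean, S_mean = [], []
    for b in range(1, n_bins):
        sel = idx == b
        if sel.sum() > 5:
            T_mean.append(durations[sel].mean())
            S_mean.append(sizes[sel].mean())
    gamma, _ = np.polyfit(np.log(T_mean), np.log(S_mean), 1)
    return gamma

# Consistency check with the exponents quoted in the text for the H+TDP transition:
tau, tau_t = 5 / 4, 2.0
print("(tau_t - 1)/(tau - 1) =", (tau_t - 1) / (tau - 1))   # = 4, matching gamma = 4
```

`sizes` and `durations` here would be the arrays collected from the avalanche simulations (Gillespie runs), e.g. `tau = fit_power_law_tail(sizes, 10, 1e4)`.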
A more systematic field-theoretical analysis of the H+TDP universality class -- as well as its implementation in finite-dimensional substrates -- is left for future work. A relevant hallmark of standard models in the DP class is the symmetry in the mean temporal profile of avalanches, which reveals time-reversal invariance [70; 71]. On the contrary, the mean temporal profile in \(T_{5}\) shows a strong asymmetry that we have quantified in terms of negative skewness. Previous work has shown that the introduction of inhibition tilts the once symmetric parabolas produced by models in the DP universality class [35; 37]. We further propose that not only the strength of the inhibitory coupling slightly tilts the mean temporal profiles, but that the combination of the proximity to the excitable quiescent phase and to the onset of frustrated oscillations promotes even greater distortions. Considering the inherent difficulties in assessing avalanche exponents from experimental data, the asymmetry in the avalanche shape collapse may turn out to be a useful additional tool to more directly reveal proximity to this anomalous transition. Last but not least, it is also worth stressing that the nature of the phase diagram and phase transitions that we have reported for the mean-field Wilson-Cowan model may change when sparse networks are considered [75]. In particular, the presence of enhanced stochastic effects, stemming from the finite connectivity of each unit, can significantly alter the dynamics and induce novel phenomena [37]. The study of the interplay between the transitions discussed here and such additional stochastic effects remains to be pursued. Similarly, the effect of structural heterogeneity, e.g. the presence of local excitation/inhibition imbalances, that could potentially lead to extended critical (Griffiths) phases [76; 77; 78], remains as an open challenge for future work. ###### Acknowledgements. MAM acknowledges the Spanish Ministry and Agencia Estatal de investigacion (AEI) through Project of I+D+i Ref. PID2020-113681GB-I00, financed by MICIN/AEI/10.13039/501100011033 and FEDER "A way to make Europe", as well as the Consejeria de Conocimiento, Investigacion Universidad, Junta de Andalucia and European Regional Development Fund, Project P20-00173 for financial support. HCP acknowledges CAPES (PrInT grant 88887.581360/2020-00) and thanks the hospitality of the Statistical Physics group at the Instituto Interuniversitario Carlos I de Fisica Teorica y Computacional at the University of Granada during her six-month stay, during which part of this work was developed. MC acknowledges support by CNPq (grants 425329/2018-6 and 301744/2018-1), CAPES (grant PROEX 23038.003069/2022-87), and FACEPE (grant APQ-0642-1.05/18). This article was produced as part of the activities of Programa Institucional de Internacionalizacao (PrInt). We are also very thankful to R. Corral, S. di Santo, V. Buendia, J. Pretel, and I. L. D. Pinto for valuable discussions and comments on previous versions of the manuscript. 
### Appendix: Gillespie's algorithm The stochastic version of the Wilson-Cowan model [28] was simulated using Gillespie's algorithm [79; 80], following these steps: **Step 0:**: initialize the system; for spreading experiments and avalanche analyses, only an excitatory site is active at \(t=0\); **Step 1:**: at each time step, calculate the transition rates for each neuron -- if active, \(\Phi(s_{i})\), and otherwise, \(\alpha\) -- and add these rates, \(r=\sum_{i}r_{i}\); **Step 2:**: the time step is chosen from an exponential distribution with rate \(r\) and added to the total-time counter; **Step 3:**: and, the site to be updated is chosen with probability \(r_{i}/r\), where \(r_{i}\) is the transition rate of the neuron. The size (duration) of an avalanche is counted as the total number of activations (total time) of a single instance of the simulation starting from just one excitatory activated site before returning to quiescence. ### Appendix: Mathematical conditions for the bifurcation lines/points The mathematical condition for the tricritical point is derived from a standard linear-stability analysis of the stationary solution around zero. First of all, observe that Eq. (5) and Eq. (6) have positive solutions. One can express \(w_{EE}\) as a function of the fixed-point solution \((E^{*},I^{*})\) as specified by Eq. (11): \[w_{EE}=\frac{1}{E^{*}}\left[w_{EI}I^{*}+\Phi^{-1}\left(\frac{\alpha E^{*}}{1-E^{ *}}\right)\right]. \tag{37}\] Expanding in power series around the origin one can readily verify the emergence of an active-state solution at the value of \(w_{EE}^{T}\), specified in Eq. (8). For values \(w_{EE}>w_{EE}^{T}\), the origin loses stability to a positive solution in a transcritical bifurcation. Defining the distance to the critical value of the control parameter, \(\Delta=w_{EE}-w_{EE}^{T}\), the value of this non-trivial solution scales linearly with \(\Delta\) as \[E^{*}\sim I^{*}\sim[(\alpha+w_{II})^{3}-w_{EI}w_{IE}^{2}]^{-1}\Delta. \tag{38}\] Since \(\Delta\geq 0\), this solution is positive for \((\alpha+w_{II})^{3}-w_{EI}w_{IE}^{2}>0\). Observe that, for \((\alpha+w_{II})^{3}-w_{EI}w_{IE}^{2}=0\), Eq. (38) diverges. The saddle-node and transcritical bifurcations collide into a saddle-node transcritical (SNT) bifurcation or tricritical point [62; 63]. Observe that in Fig. 3, a black circle marks the tricritical point in all cases (i.e., \(T_{2}\), \(T_{5}\), and \(T_{8}\)). In cases A and C, this bifurcation has codimension 2 and occurs at: \[w_{EI}^{SNT}=\frac{(\alpha+w_{II})^{3}}{w_{IE}^{2}}, \tag{39}\] \[w_{EE}^{SNT}=\alpha+\frac{(\alpha+w_{II})^{2}}{w_{IE}}\;. \tag{40}\] The non-trivial solution emerges from the trivial solution with \(w_{EE}\) and it scales with the distance to the critical value, \(\Delta\), as \(E^{*}\propto I^{*}\propto\Delta^{1/2}\). Finally, in Fig. 3 case B, a codimension 3 bifurcation emerges from an extra fine-tuning of the parameters when \(w_{IE}=\alpha+w_{II}\). For this choice of parameters, at \(w_{EI}=w_{IE}\), the saddle-node transcritical collides with the Hopf right at the tricritical point, \(T_{5}\). Combining Eq. (12) and Eq. (39), the values of the control parameters, for this bifurcation, are: \[w_{EI}=\alpha+w_{II} \tag{41}\] \[w_{EE}=2\alpha+w_{II}. \tag{42}\] ### Appendix: Do trajectories _cross or slide_ onto the switching manifolds? Piece-wise continuous dynamics have two possible behaviors at the switching manifolds: sliding or crossing [49]. 
To determine the behavior of the Wilson-Cowan model system, we consider the Heaviside function, Eq. (1), in Eq. (3) and Eq. (4): \[\dot{x}=\begin{cases}f_{x}^{+}\equiv-\alpha x+(1-x)\tanh(w_{i}x-w_{j}y)&\text{if }s\equiv w_{i}x-w_{j}y>0\\ f_{x}^{-}\equiv-\alpha x&\text{if }s<0\end{cases} \tag{43}\] where \(f_{x}^{+}\) (\(f_{x}^{-}\)) is evaluated to the right (left) of the switching manifold, \(s=0\). Let us consider the switching manifold \(s_{E}=0\), where \(E=(w_{EI}/w_{EE})I\), Fig. 1b. One can then write: \[\vec{\nabla}s_{E}=\begin{pmatrix}\frac{\partial}{\partial E}s_{E}\\ \frac{\partial}{\partial I}s_{E}\end{pmatrix}=\begin{pmatrix}w_{EE}\\ -w_{EI}\end{pmatrix} \tag{44}\] \[\vec{f}^{+}=\begin{pmatrix}-\alpha E+(1-E)\tanh\left(w_{EE}E-w_{EI}I\right)\\ -\alpha I+(1-I)\tanh\left(w_{IE}E-w_{II}I\right)\end{pmatrix} \tag{45}\] \[\vec{f}^{-}=\begin{pmatrix}-\alpha E\\ -\alpha I+(1-I)\tanh\left(w_{IE}E-w_{II}I\right)\end{pmatrix} \tag{46}\] To determine whether trajectories that reach the switching manifold cross it or slide along it, one evaluates the sign of \((\vec{f}^{+}\cdot\vec{\nabla}s_{E})(\vec{f}^{-}\cdot\vec{\nabla}s_{E})\) at the switching manifold: \[\vec{f}^{+}\cdot\vec{\nabla}s_{E}=-\alpha(w_{EE}E-w_{EI}I)+w_{EE}(1-E)\tanh\left(w_{EE}E-w_{EI}I\right)-w_{EI}(1-I)\tanh\left(w_{IE}E-w_{II}I\right) \tag{47}\] \[\vec{f}^{-}\cdot\vec{\nabla}s_{E}=-\alpha(w_{EE}E-w_{EI}I)-w_{EI}(1-I)\tanh\left(w_{IE}E-w_{II}I\right) \tag{48}\] Given that at the switching manifold \(w_{EE}E=w_{EI}I\): \[\vec{f}^{+}\cdot\vec{\nabla}s_{E}=-w_{EI}(1-I)\tanh\biggl{(}\frac{w_{IE}(w_{EI}-w_{II})}{w_{EE}}I\biggr{)} \tag{49}\] \[\vec{f}^{-}\cdot\vec{\nabla}s_{E}=-w_{EI}(1-I)\tanh\biggl{(}\frac{w_{IE}(w_{EI}-w_{II})}{w_{EE}}I\biggr{)}, \tag{50}\] \[(\vec{f}^{+}\cdot\vec{\nabla}s_{E})(\vec{f}^{-}\cdot\vec{\nabla}s_{E})=\biggl{[}w_{EI}(1-I)\tanh\biggl{(}\frac{w_{IE}(w_{EI}-w_{II})}{w_{EE}}I\biggr{)}\biggr{]}^{2}. \tag{51}\] For \((\vec{f}^{+}\cdot\vec{\nabla}s_{E})(\vec{f}^{-}\cdot\vec{\nabla}s_{E})>0\), the trajectories cross the switching manifold, creating a trapping region.
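For completeness, a self-contained sketch of the Gillespie procedure of the "Appendix: Gillespie's algorithm" above (Steps 0-3) is given below for the fully connected stochastic model. The normalization of the inputs by population size, the assignment of rates (active units deactivate at rate \(\alpha\), quiescent units activate at rate \(\Phi(s)\), matching the \(-\alpha E\) and \((1-E)\Phi\) terms of the mean-field equations), the convention of counting all activations towards the avalanche size, and the coupling values are assumptions of this illustration and not the parameters of Fig. 3.

```python
import numpy as np

def phi(s):
    """Activation-rate function: tanh on positive input, zero otherwise (cf. Eqs. (43)-(46))."""
    return np.tanh(s) if s > 0 else 0.0

def gillespie_avalanche(N_E, N_I, wEE, wEI, wIE, wII, alpha=1.0, rng=None):
    """One avalanche of the fully connected stochastic Wilson-Cowan model.
    Starts from a single active excitatory unit (Step 0) and runs until quiescence."""
    rng = rng or np.random.default_rng()
    kE, kI = 1, 0                      # numbers of active excitatory / inhibitory units
    t, size = 0.0, 1
    while kE + kI > 0:
        E, I = kE / N_E, kI / N_I      # population fractions used as mean-field inputs
        sE, sI = wEE * E - wEI * I, wIE * E - wII * I
        rates = np.array([
            alpha * kE,                # an active E unit deactivates
            alpha * kI,                # an active I unit deactivates
            (N_E - kE) * phi(sE),      # a quiescent E unit activates
            (N_I - kI) * phi(sI),      # a quiescent I unit activates
        ])
        R = rates.sum()                # Step 1: total rate
        t += rng.exponential(1.0 / R)  # Step 2: exponential waiting time
        event = rng.choice(4, p=rates / R)   # Step 3: pick the reaction proportionally
        if event == 0:
            kE -= 1
        elif event == 1:
            kI -= 1
        elif event == 2:
            kE += 1
            size += 1                  # size counts activations
        else:
            kI += 1
            size += 1
    return size, t

# Example: avalanche statistics slightly below an illustrative transcritical threshold.
rng = np.random.default_rng(0)
stats = [gillespie_avalanche(2000, 2000, 1.9, 1.0, 1.5, 0.5, rng=rng) for _ in range(200)]
sizes, durations = map(np.array, zip(*stats))
print("mean size:", sizes.mean(), " mean duration:", durations.mean())
```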
2308.03292
Adiabatic quantum imaginary time evolution
We introduce an adiabatic state preparation protocol which implements quantum imaginary time evolution under the Hamiltonian of the system. Unlike the original quantum imaginary time evolution algorithm, adiabatic quantum imaginary time evolution does not require quantum state tomography during its runtime, and unlike standard adiabatic state preparation, the final Hamiltonian is not the system Hamiltonian. Instead, the algorithm obtains the adiabatic Hamiltonian by integrating a classical differential equation that ensures that one follows the imaginary time evolution state trajectory. We introduce some heuristics that allow this protocol to be implemented on quantum architectures with limited resources. We explore the performance of this algorithm via classical simulations in a one-dimensional spin model and highlight essential features that determine its cost, performance, and implementability for longer times, and compare to the original quantum imaginary time evolution for ground-state preparation. More generally, our algorithm expands the range of states accessible to adiabatic state preparation methods beyond those that are expressed as ground-states of simple explicit Hamiltonians.
Kasra Hejazi, Mario Motta, Garnet Kin-Lic Chan
2023-08-07T04:27:30Z
http://arxiv.org/abs/2308.03292v2
# Adiabatic quantum imaginary time evolution ###### Abstract We introduce an adiabatic state preparation protocol which implements quantum imaginary time evolution under the Hamiltonian of the system. Unlike the original quantum imaginary time evolution algorithm, adiabatic quantum imaginary time evolution does not require quantum state tomography during its runtime, and unlike standard adiabatic state preparation, the final Hamiltonian is not the system Hamiltonian. Instead, the algorithm obtains the adiabatic Hamiltonian by integrating a classical differential equation that ensures that one follows the imaginary time evolution state trajectory. We introduce some heuristics that allow this protocol to be implemented on quantum architectures with limited resources. We explore the performance of this algorithm via classical simulations in a one-dimensional spin model and highlight essential features that determine its cost, performance, and implementability for longer times. We find competitive performance when compared to the original quantum imaginary time evolution, and argue that the rapid convergence of this protocol and its low resource requirements make it attractive for near-term state preparation applications. ## I Introduction A central step in the quantum computation of physical ground-states is to first prepare a state with sufficient overlap with the desired ground state \(|\Psi_{0}\rangle\) of a Hamiltonian \(H\). In the context of near-term quantum algorithms [1], which minimize both qubits/ancillae and gate resources, many protocols for ground-state preparation have been proposed. Examples include variational ansatz preparation [2; 3; 4; 5; 6; 7], adiabatic state preparation (ASP) [8; 9; 10], and quantum imaginary time evolution (QITE) [11; 12; 13; 14; 15], the latter being the subject of this work. Because ground-state preparation is in general formally hard, all these methods rely on some assumptions. For example, ASP starts from an initial Hamiltonian \(H(0)\), whose ground state is simple to prepare, and defines an adiabatic path \(H(s),0\leq s\leq 1\) with \(H(1)\equiv H\), the desired Hamiltonian [16]. To prepare the ground-state to sufficient accuracy, \(H(s)\) must change slowly; an estimate of the adiabatic runtime is [8; 17]\(T\sim\max_{s,j}\frac{|\langle\Psi_{j}(s)|dH/ds|\Psi_{0}(s)\rangle|}{\Delta^{2}}\), where \(|\Psi_{0}(s)\rangle\), \(|\Psi_{j}(s)\rangle\), with \(j\geq 1\), denote the instantaneous ground and excited states of \(H(s)\) and \(\Delta(s)\) is the energy gap between the ground state and the first excited state. For ASP to be efficient, \(H(s)\) must be chosen such that \(\min_{s}\Delta(s)\) is not too small, e.g. at worst \(1/\text{poly}(L)\) in system size \(L\), for a polynomial cost algorithm. The QITE algorithm [11] on the other hand, applies \(e^{-H\tau}\) (\(\tau>0\)) to boost the overlap of a candidate state with the ground state of \(H\); for this work, we consider Hamiltonians that are sums of local terms \(H=\sum_{\alpha}h_{\alpha}\), where \(h_{\alpha}\) is geometrically local (i.e. each term \(h_{\alpha}\) acts on a constant number of adjacent qubits regardless of system size). Ref. [11] introduced a near-term quantum algorithm to obtain the states \[|\Psi(\tau)\rangle=e^{-H\tau}|\Psi(0)\rangle\ /\ \left\|e^{-H\tau}\,|\Psi(0) \rangle\right\|, \tag{1}\] without employing any ancillae or postselection. 
The method is particularly efficient if \(|\Psi(\tau)\rangle\) has finite correlation volume \(C\) for all earlier imaginary times, in which case \(|\Psi(\tau)\rangle\) can be prepared by implementing a series of local unitaries acting on \(O(C)\) qubits on the candidate state. By using this technique which reproduces the imaginary time trajectory, one can also use QITE as a subroutine in other ground-state algorithms, as well as to prepare non-ground-states and thermal (Gibbs) states, for example by reintroducing ancillae [14], or by sampling [11]. However, to find the unitaries in QITE one needs to perform tomography [11] of the reduced density matrices of \(|\Psi(\tau)\rangle\) over regions of volume \(C\). Although the measurement and processing cost is polynomial in system size, it can still be prohibitive for large \(C\). Despite various improvements in the QITE idea in terms of the algorithm and implementation, this remains a practical drawback [12]. (We briefly note also some other near-term imaginary time evolution algorithms, such as the variational ansatz-based quantum imaginary time evolution, introduced in Ref. [18], which reproduces the imaginary time evolution trajectory in the limit of an infinitely flexible variational ansatz, as well as the probabilistic imaginary time evolution algorithm (PITE) [19], whose probability of success decreases exponentially with evolution time). Here, we introduce an alternative near-term, ancilla-free, quantum method that generates the imaginary time evolution of a quantum state without any tomography. It thus eliminates one of the resource bottlenecks of the original QITE. The idea is to consider the imaginary time trajectory \(\Psi(\tau)\) as generated by an adiabatic process under a particular Hamiltonian \(\tilde{H}(\tau)\). This adiabatic Hamiltonian satisfies an auxiliary dynamical equation that can be solved for entirely classically, i.e. without any feedback from the quantum simulation. Although propagation under \(\tilde{H}\) reproduces imaginary time evolution when performed adiabatically (i.e., one stays in the ground-state of \(\tilde{H}(\tau)\)), this is different to the usual ASP, because \(\tilde{H}\) does not approach \(H\) at the end of the path, even though it shares the same final ground-state. We thus refer to this algorithm as adiabatic quantum imaginary time evolution, or A-QITE. Like the original QITE algorithm, it can be used not only to prepare ground-states directly, but also as a subroutine in other state-preparation algorithms, or to sample from thermal states. We examine the feasibility and performance of this algorithm for the illustrative case of preparing the ground-state of the Ising-like Heisenberg XXZ model in a transverse field. We also study the behaviour of the instantaneous gap and norm of \(\tilde{H}\) as a function of imaginary time (as these determine the cost of integrating the classical equation to determine \(\tilde{H}\)) as well to implement the adiabatic quantum simulation under \(\tilde{H}\). \(\tilde{H}(\tau)\) becomes increasingly non-local with time, and we introduce a geometric locality heuristic to truncate terms in \(\tilde{H}\), which we compare to the original inexact QITE procedure. We finish with some observations on practical implementations of the algorithm. ## II Formalism ### General theory Consider a lattice system described by a Hamiltonian \(H\). We desire an adiabatic Hamiltonian \(\tilde{H}(\tau)\) whose ground state at every \(\tau\) is given by Eq. (1). 
Consider an infinitesimally imaginary time evolved state from \(\tau\) to \(\tau+d\tau\): \[\left|\Psi(\tau+d\tau)\right\rangle=\left|\Psi(\tau)\right\rangle-d\tau\left. QH\left|\Psi(\tau)\right\rangle+O(d\tau^{2}), \tag{2}\] where \(Q=1-\left|\Psi(\tau)\right\rangle\left\langle\Psi(\tau)\right|\) projects out the ground subspace. Now, suppose \(\left|\Psi(\tau)\right\rangle\) is the ground state of \(\tilde{H}(\tau)\), one should determine \(\tilde{H}(\tau+d\tau)=\tilde{H}(\tau)+\delta\tilde{H}\) such that \(\left|\Psi(\tau+d\tau)\right\rangle\) is its ground state: perturbation theory determines the ground state of \(\tilde{H}(\tau+d\tau)\) as \(\left|\Psi(\tau)\right\rangle+[\tilde{E}_{0}-\tilde{H}(\tau)]^{-1}Q\;\delta \tilde{H}\left|\Psi(\tau)\right\rangle\), where \(\tilde{E}_{0}\) is the smallest eigenvalue of \(\tilde{H}\). We will be working with evolution schemes for \(\tilde{H}\) that start with and maintain \(\tilde{E}_{0}=0\) (more on this below). Thus using (2) we should have: \[\delta\tilde{H}\left|\Psi(\tau)\right\rangle=d\tau\,\tilde{H}H\left|\Psi(\tau) \right\rangle+O(d\tau^{2}), \tag{3}\] where we have used \(\tilde{H}Q=Q\tilde{H}=\tilde{H}\). Eq. (3) is the main equation the adiabatic Hamiltonian \(\tilde{H}\) should satisfy. However, it does not uniquely determine \(\delta\tilde{H}\), and so there are many generating equations for \(\tilde{H}\). A simple choice is \[\frac{d\tilde{H}}{d\tau}=\tilde{H}H+H\tilde{H}, \tag{4}\] where we have used \(\tilde{H}\left|\Psi(\tau)\right\rangle=0\) (see above Eq. (5) for justification) and added the term \(H\tilde{H}\) to make the right-hand side Hermitian. This has the formal solution \(\tilde{H}(\tau)=e^{H\tau}\tilde{H}(0)e^{H\tau}\). The above scheme can, in principle, be implemented as a hybrid quantum-classical algorithm, where \(\tilde{H}\) is first determined by the classical integration of Eq. (4), and then used to implement adiabatic state evolution quantumly. As is clear, this procedure does not involve any feedback from the quantum simulation, and thus does not involve tomography, unlike the original QITE. However, this naive scheme has some potential problems. One set is analogous to that encountered in the original quantum imaginary time evolution scheme, namely, the time evolution of \(\tilde{H}\) renders it nonlocal (both geometrically and in terms of the lengths of the Pauli strings in \(\tilde{H}\)) and increasingly complicated. For example, even if \(H=\sum_{\alpha}h_{\alpha}\) and \(\tilde{H}=\sum_{\beta}\tilde{h}_{\beta}\) are geometrically local, the time derivative introduces geometrically nonlocal terms like \(h_{\alpha}\tilde{h}_{\beta}\), and the number of such terms grows exponentially with time. This renders both the classical determination of \(\tilde{H}(\tau)\), and the quantum implementation of state evolution under \(\tilde{H}(\tau)\) inefficient. There is also a second set of problems arising from the norm of \(\tilde{H}\) and its spectrum. We use the symbol \(|\tilde{\phi}_{i}(\tau)\rangle\) to denote the instantaneous eigenstate of \(\tilde{H}(\tau)\) with eigenvalue \(\tilde{E}_{i}(\tau)\) (if \(\tilde{H}\) implements the imaginary time evolution perfectly, then \(|\tilde{\phi}_{0}(\tau)\rangle=\left|\Psi(\tau)\right\rangle\)). Taking expectation values of Eq. (4) with \(|\tilde{\phi}_{i}\rangle\), the eigenvalues of \(\tilde{H}\) evolve as \(\frac{d\tilde{E}_{i}}{d\tau}=2\tilde{E}_{i}\langle\tilde{\phi}_{i}|H|\tilde{ \phi}_{i}\rangle\). 
If \(\tilde{H}\) is initialized with zero ground state energy, the evolution will keep it vanishing. However, the other eigenvalues evolve as \[\tilde{E}_{i}(\tau)=\exp\left[2\int_{0}^{\tau}d\tau^{\prime}\langle\tilde{\phi }_{i}(\tau^{\prime})|H|\tilde{\phi}_{i}(\tau^{\prime})\rangle\right]\tilde{E }_{i}(\tau=0) \tag{5}\] To see the potential problems with this, consider the example where \(H\) has a finite spectrum with all eigenvalues above (or below) \(0\). In that case, we clearly see that the eigenvalues \(\tilde{E}_{i},i>0\) are always growing (shrinking) with time, potentially exponentially fast. Thus there is the possibility for numerical issues at long times in determining \(\tilde{H}\) and implementing evolution under it, as we now discuss. The integration of the classical differential equation for \(\tilde{H}\) and the corresponding time evolution under \(\tilde{H}\) will carry some finite numerical error which depends on \(||\tilde{H}||\). This means that rather than obtaining the exact \(\tilde{H}(\tau)\), we obtain \(\tilde{H}(\tau)+V\); if one uses e.g. a finite-order Runge-Kutta method, then \(||V||\propto||\tilde{H}||\). Depending on the dynamics of the eigenvalues, this may introduce a large deviation from the instantaneous ground-state, for example, if \(|\langle\tilde{\phi}_{0}|V|\tilde{\phi}_{i}\rangle/\tilde{\Delta}_{i}|\gg 1\) (where \(\tilde{\Delta}_{i}\) is the instantaneous gap to the \(i\)th state of \(\tilde{H}\)). Similarly, the quantum adiabatic evolution time depends on \(||\tilde{H}(\tau)||\); since \(\max_{x}||d\tilde{H}/ds/\tilde{\Delta}(s)^{2}||\propto\max_{\tau}||\tilde{H}( \tau)/\tilde{\Delta}(\tau)^{2}||\), the total adiabatic evolution time \(T\propto\max_{\tau}||\tilde{H}(\tau)/\tilde{\Delta}(\tau)^{2}||\), which can diverge if the numerator is exponentially growing or the denominator is exponentially decreasing. Fi nally, the implementation of Hamiltonian simulation under \(\tilde{H}(\tau)\) also introduces errors that grow with \(||\tilde{H}||\). It is important to note that these problems may not all occur in concert (and the behavior can be modified by some heuristics below), but we can expect some aspect of these challenges to appear at longer times. On the other hand, faithful imaginary time evolution produces a rapid (exponentially decaying) infidelity with the final state. Thus the long-time numerical and implementation behaviour may not be relevant if sufficient fidelity is already reached. These issues can only be studied through numerical simulation, which we describe below after some discussion of heuristics to ameliorate some of the identified challenges. ### Locality heuristic To address the growing non-locality of \(\tilde{H}\) we first write a modified generating equation for \(\tilde{H}\), with separate differential equations for the individual terms \(\tilde{h}_{\beta}\). We choose \(|\Psi(0)\rangle\) such that it is annihilated by each \(\tilde{h}_{\beta}(0)\), and the evolution preserves this annihilation condition, analogous to Eq. (4). We consider \[\frac{d\tilde{h}_{\beta}}{d\tau}=\sum_{\alpha}\left\{\tilde{h}_{\beta},h_{ \alpha}\right\}^{\prime}, \tag{6}\] where \(\{a,b\}^{\prime}\) denotes the anticommutator of \(a\) and \(b\) if they do not commute and zero if they do. Eq. (6) is consistent with the adiabatic trajectory of Eq. (3) because (4) and (6) only differ by terms that annihilate \(|\Psi(\tau)\rangle\). 
The expression \(\{\tilde{h}_{\beta},h_{\alpha}\}^{\prime}\) means that \(\tilde{H}\) no longer contains geometrically non-local terms: each \(\tilde{h}_{\beta}\) grows its support from the contribution of terms \(h_{\alpha}\) that overlap with the boundary of its support at every step. We can then introduce a heuristic to control the width of support. In particular, we can truncate summation over \(\alpha\) in Eq. (6) so that only a subset of terms \(h_{\alpha}\) are retained for each \(\tilde{h}_{\beta}\). More precisely, a neighborhood block is assigned to every \(\tilde{h}_{\beta}\) term which is a region with a given spatial extent \(w\) that surrounds the location of \(\tilde{h}_{\beta}\) at \(\tau=0\); every \(h_{\alpha}\) term that lies in the neighborhood block of \(\tilde{h}_{\beta}\) is retained in Eq. (6). In this approximation, \(\tilde{H}\) remains strictly \(w\)-geometrically local with time. We note that this above heuristic is different from the locality approximation in the original QITE algorithm, as it is a direct restriction on the operator, rather than the correlation length of the imaginary-time-evolved state. The relationship between the two is studied numerically below. ### Gauge degree of freedom We can introduce other modifications to the generating equation of \(\tilde{H}\) which do not modify the imaginary time evolution trajectory but which can, in principle, affect \(||\tilde{H}||\) and its spectrum. Consider, for example, \[\frac{d\tilde{h}_{\beta}}{d\tau}=\sum_{\alpha}\left\{\tilde{h}_{\beta},h_{ \alpha}\right\}^{\prime}+f(\tilde{h}_{\beta}). \tag{7}\] Figure 1: Adiabatic evolution under the Hamiltonian \(\tilde{H}\) for a Heisenberg XXZ system of length \(L=8\). \(\tilde{H}\) is generated by integrating Eq. (4) with second-order Runge-Kutta and time-step \(\epsilon\), which is varied here. \(I^{\infty}\), \(I^{\tau}\): infidelity with the ground-state of \(H\) and with the exact imaginary time evolved state; \(\tilde{\Delta}\): instantaneous gap of \(\tilde{H}\); \(||\tilde{H}||\): norm of \(\tilde{H}\). The same color is used in all of the plots for each value of \(\epsilon\), even though we have shown each \(\epsilon\) only once. Figure 2: Adiabatic evolution under \(\tilde{H}\) generated by Eq. (6), for the Heisenberg XXZ system as a function of chain length \(L\). \(\tilde{H}\) is generated using second-order Runge-Kutta and a timestep of \(\epsilon=0.05\). Quantities are same as in Fig. 1. This ensures that the ground-state of \(\tilde{H}\) is a zero eigenstate for all \(\tau\) (although it does not ensure that it is always the ground-state). We can then optimize \(f(\tilde{h}_{\beta})\) to control the gap and norm. Specifically, in tests below, we consider \(f(h_{\beta})=u_{1}\tilde{h}_{\beta}-u_{2}\tilde{h}_{\beta}^{2}\). The \(u_{1}\) term is equivalent to adding a constant shift to \(H\) in Eq. (4). While we do not expect this gauge choice to fundamentally remove the difficulties of propagating to long times, we may be able to improve the finite time before generating \(\tilde{H}\) accurately, or implementing simulation under it, becomes too expensive. In practice, we do not know how to choose \(f(u)\) ahead of time, but it may be chosen in a heuristic manner, or \(f(u)\) may be chosen as part of a variational ansatz for \(\Psi(\tau)\) at finite \(\tau\). ## III Numerical simulations We now study the imaginary time trajectory generated by the \(\tilde{H}\) dynamics, including the locality and gauge modifications described above. 
We classically propagate \(\tilde{H}\) by a second-order Runge-Kutta method [20] with a time step \(\epsilon\) (error \(O(\epsilon^{3})\) per time-step) and then diagonalize \(\tilde{H}(\tau)\) to study the trajectory of the ground-state \(\Phi_{0}(\tau)\). We then monitor various quantities, such as \(||\tilde{H}(\tau)||\), the gap \(\tilde{\Delta}(\tau)\), the infidelity with the exact imaginary time propagated state \(I^{\tau}(\tau)=1-|\langle\tilde{\phi}_{0}(\tau)|\Psi(\tau)\rangle|^{2}\), and the infidelity with the ground-state of \(H\), \(I^{\infty}(\tau)=1-|\langle\tilde{\phi}_{0}(\tau)|\Psi(\infty)\rangle|^{2}\). We study the antiferromagnetic Heisenberg XXZ model in one dimension with open boundary conditions: \[H_{\lambda_{z}}=\sum_{j}S_{j}^{x}S_{j+1}^{x}+S_{j}^{y}S_{j+1}^{y}+\lambda_{z} S_{j}^{z}S_{j+1}^{z}, \tag{8}\] with \(\lambda_{z}=2\) which results in an Ising-like anisotropy. We initialize the adiabatic Hamiltonian \(\tilde{H}\) as a staggered transverse field: \(\tilde{H}(\tau=0)=\sum_{j}\left[(-1)^{j}S_{j}^{x}+\frac{1}{2}\right]\) (where each term in brackets separately annihilates the ground state). We then determine \(\tilde{H}\) under the dynamics generated by Eqs. (4), (6), and (7), and study various properties of \(\tilde{H}\) and its instantaneous ground-state \(\tilde{\phi}_{0}(\tau)\). \(\tilde{H}\) has the symmetry \(\prod_{i}S_{i}^{x}\), under which \(\tilde{\phi}_{0}(\tau)\) has a definite eigenvalue. Note that \(H_{\lambda_{z}=2}\) has symmetry broken ground states with long range order [21; 22], but we take \(|\Psi(\infty)\rangle\) to belong to the same symmetry sector as \(\tilde{\phi}_{0}(\tau)\). In Fig. 1 we first consider \(\tilde{H}\) generated by Eq. (4), and the infidelities \(I(\tau)\), \(I^{\infty}(\tau)\), \(||\tilde{H}(\tau)||\) and \(\tilde{\Delta}(\tau)\) as a function of time-step \(\epsilon\) used in the classical integration of Eq. (4). As seen, for all step-sizes, at early times the infidelity with the desired ground-state of \(H\), \(I^{\infty}(\tau)\), decreases exponentially quickly, to \(10^{-2}\) or less. (Although this is a small system, we note that achieving finite infidelities of this level is important for many state preparation applications as it enables variants of quantum phase estimation adapted to the near-term setting [23]). The fact that the achieved infidelities decrease as \(\epsilon\) is decreased indicates that with an infinitesimal step size in the classical integrator, we should also have \(I^{\tau}(\tau)=0\) at all times and \(I^{\infty}(\tau)\to 0\) at long times. For a finite step size \(\epsilon\), we see \(I^{\infty}(\tau)\) reaches a minimum value while \(I^{\tau}(\tau)\) first increases with time, reaches a maximum and then plateaus. The time for the minimum \(I^{\infty}\) is close to (slightly after) the time for the maximum \(I^{\tau}\); we refer to this (approximate) time as \(\tau_{c}\). \(||\tilde{H}||\) increases exponentially, while \(\Delta\) decreases exponentially until about \(\tau_{c}\), before slowly increasing again. The above is consistent with our analysis that it is only possible to determine the adiabatic Hamiltonian accurately up to a maximum time \(\tau_{c}\) for a finite time-step. We next study \(\tilde{H}\) generated by Eq. (6) as a function of system size \(L\), and a fixed classical time step \(\epsilon=0.05\) in Fig. 2. The results are similar to those using \(\tilde{H}\) generated by Eq. (4) if no locality approximation is applied. 
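A minimal classical emulation of this procedure for a short chain is sketched below: it builds \(H\) of Eq. (8) and \(\tilde{H}(0)\) as defined above, integrates Eq. (4) with a second-order Runge-Kutta step \(\epsilon\), and monitors \(I^{\tau}\), \(I^{\infty}\), the gap, and the norm of \(\tilde{H}\) by exact diagonalization. The chain length, number of steps, use of dense matrices, and the simple treatment of the (quasi-)degenerate symmetry-broken ground states are choices made purely for illustration.

```python
import numpy as np
from scipy.linalg import expm, eigh

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, j, L):
    """Embed a single-site spin operator at site j of an L-site chain."""
    mats = [I2] * L
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xxz_hamiltonian(L, lam_z=2.0):
    """Open-boundary XXZ chain of Eq. (8)."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L - 1):
        H += site_op(sx, j, L) @ site_op(sx, j + 1, L)
        H += site_op(sy, j, L) @ site_op(sy, j + 1, L)
        H += lam_z * site_op(sz, j, L) @ site_op(sz, j + 1, L)
    return H

def initial_adiabatic_hamiltonian(L):
    """H~(0) = sum_j [(-1)^j S^x_j + 1/2]; each bracketed term annihilates its ground state."""
    Ht = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L):
        Ht += (-1)**j * site_op(sx, j, L) + 0.5 * np.eye(2**L)
    return Ht

def infidelity(psi, phi):
    return 1.0 - abs(np.vdot(psi, phi))**2

def rk2_step(Ht, H, eps):
    """One second-order Runge-Kutta (Heun) step of dH~/dtau = H~ H + H H~, Eq. (4)."""
    rhs = lambda A: A @ H + H @ A
    k1 = rhs(Ht)
    k2 = rhs(Ht + eps * k1)
    return Ht + 0.5 * eps * (k1 + k2)

L, eps, n_steps = 6, 0.05, 60
H = xxz_hamiltonian(L)
Ht = initial_adiabatic_hamiltonian(L)
gs_H = eigh(H)[1][:, 0]                 # lowest eigenvector of H (symmetry sector ignored here)
psi0 = eigh(Ht)[1][:, 0]                # |Psi(0)>: ground state of H~(0)

for n in range(1, n_steps + 1):
    Ht = rk2_step(Ht, H, eps)
    tau = n * eps
    psi_tau = expm(-H * tau) @ psi0     # exact imaginary-time trajectory, Eq. (1)
    psi_tau /= np.linalg.norm(psi_tau)
    w, v = eigh(Ht)
    phi0, gap = v[:, 0], w[1] - w[0]
    if n % 10 == 0:
        print(f"tau={tau:.2f}  I_inf={infidelity(gs_H, phi0):.2e}  "
              f"I_tau={infidelity(psi_tau, phi0):.2e}  gap={gap:.3e}  "
              f"norm={np.linalg.norm(Ht, 2):.2e}")
```

The same loop with the term-wise update of Eq. (6) (and a neighborhood-block truncation of width \(w\)) reproduces the locality-heuristic runs, at the cost of tracking each \(\tilde{h}_{\beta}\) separately.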
Examining \(I^{\infty},I^{\tau}\), we see the same behaviour of reaching a minimum \(I^{\infty}\) and a maximum \(I^{\tau}\) at some approximate time \(\tau_{c}\). Although \(||\tilde{H}||\) grows more quickly and \(\tilde{\Delta}\) decays more rapidly as \(L\) increases, surprisingly, this does not seem to change the maximum reliable propagation time, \(\tau_{c}\approx 1.8\), between \(L=6-12\). We further apply the locality heuristic for the generation of \(\tilde{H}\), implemented using Eq. (6), in Fig. 3. We use width \(w=5\) and a range of system sizes and integration time-steps. Encouragingly, we also see rapid exponential decay of the infidelity to values of \(\sim 10^{-2}\) or less. Overall, the dynamics shows similar behaviour to previous examples, and at longer times there is a \(\tau_{c}\) corresponding to a change in the behaviour of the infidelities and gaps. However, unlike in the dynamics above without the locality approximation, as the time-step goes to \(0\), the minimum \(I_{\infty}\) does not keep decreasing, because there is a finite width error. In Fig. 4, we show \(\tilde{H}\) generated by Eq. (6), using the locality heuristic with widths \(3,5,7\), and timestep \(\epsilon=0.05\), as well as a protocol where the width is increased at increasing times. For comparison, we also show data from the original quantum imaginary time evolution scheme with widths \(2,4,6\) and for time step \(\epsilon=0.08\). As expected, the best achievable fidelity with the exact state \(I^{\infty}(\tau)\) increases with \(w\), however, the best \(I(\infty)\) in fact decreases moving from \(w=5\) to \(w=7\). Compared to the original QITE scheme, the achievable infidelity appears to be better (i.e.lower) using the adiabatic Hamiltonian \(\tilde{H}\) with a similar locality definition. Further work is required to establish a more precise relationship between these approximations. We finally consider the modified \(\tilde{H}\) generated in a different gauge using Eq. (7). As an illustrative example, we consider the case \(u_{1}=1/2\), \(u_{2}=2\), with the locality constraint \(w=5\). In Fig. 5, we see that although the behaviour of the infidelities \(I_{\infty}\) and \(I^{\tau}\) are qualitatively similar to what we have seen previously (corresponding to \(u_{1}=u_{2}=0\)) with a similar \(\tau_{c}\), the detailed form is different. For example, the infidelity has a flatter plateau region, and \(||H||\) and \(\Delta\) show slower growth and flatter decay up to \(\tau_{c}\). This indicates that a suitable choice of \(f(u)\) can meaningfully modify the dynamics and the achievable infidelities at finite times. ## IV Conclusions We have described an adiabatic state preparation protocol that implements the imaginary time evolution trajectory without any need for quantum tomography or ancilla resources. This hybrid algorithm involves a classical time integration to generate the adiabatic Hamiltonian. When implemented faithfully, the algorithm leads to an exponential decrease of the infidelity with the ground-state of a desired Hamiltonian with adiabatic time. However, the cost of evolving exactly to long imaginary times grows rapidly with imaginary time both in the classical and quantum parts of the protocol. The growth in cost as Figure 4: The infidelity with the desired ground state for both the adiabatic QITE and the original QITE algorithm. Even values of \(w\) correspond to the original QITE algorithm and the odd \(w\) values and the varying case correspond to the adiabatic QITE. 
The time steps are taken as \(0.05\) for A-QITE and \(0.08\) for QITE. The varying \(w\) case starts with \(w=3\), then changes to \(w=5\) and ultimately to \(w=7\). The position of the transitions are shown by vertical dotted lines. Figure 5: Adiabatic evolution under \(\tilde{H}\) generated by Eq. (7) with \(u_{1}=2,u_{2}=0.5\), block size \(w=5\), second-order Runge-Kutta, \(\epsilon=0.05\), for the Heisenberg XXZ system for various lengths. Quantities same as in Fig. 1. Figure 3: Adiabatic evolution under \(\tilde{H}\) generated by Eq. (6), using a locality truncation with block size \(w=5\), for Heisenberg XXZ chains of various lengths. (a) and (b) show infidelities for different values of \(\epsilon\) and \(L=8\). Note that the infidelities are defined in the same way as in Fig. 1. (c) and (d) show the behavior of the infidelities for different lengths, second-order Runge-Kutta, \(\epsilon=0.05\). (e) and (f) depict the gap and norm for different lengths, the same colors as the second column are used. a function of imaginary time arises from several sources, including the nonlocality of the derived adiabatic Hamiltonian. This nonlocality can be controlled by suitable heuristics which truncate terms in the adiabatic Hamiltonian. Another source of growing cost at long imaginary time is related to the norm of the adiabatic Hamiltonian and its gap, for which modifications of the generating equation of the adiabatic Hamiltonian can be introduced and which should be further explored. However, because of the rapid decrease of the infidelity it is possible to propagate for short times and to observe a large improvement in the approximate ground-state. More generally, the A-QITE adiabatic path introduced in this work can be considered as extending the space of possible adiabatic paths; this potentially allows for introduction of new quantum adiabatic routines consisting of composite adiabatic paths (with A-QITE as one of them), with applications such as introduction of novel adiabatic catalysts [8; 24], etc. This is a direction to be explored in the future. Overall, a significant advantage of the current approach is that it can be implemented with a minimal amount of quantum resources (i.e. no ancillae and no measurement-based feedback). This makes our procedure a good match for near-term quantum architectures. ## V Acknowledgments We thank Yu Tong, Alex Dalzell and Anthony Chen for helpful discussions. KH and GKC were supported by the US Department of Energy, Office of Science, Basic Energy Sciences, under grant no. DE-SC-0019374. GKC acknowledges support from the Simons Foundation.
2310.19080
Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery
Recent advances in machine learning have shown that Reinforcement Learning from Human Feedback (RLHF) can improve machine learning models and align them with human preferences. Although very successful for Large Language Models (LLMs), these advancements have not had a comparable impact in research for autonomous vehicles -- where alignment with human expectations can be imperative. In this paper, we propose to adapt similar RL-based methods to unsupervised object discovery, i.e. learning to detect objects from LiDAR points without any training labels. Instead of labels, we use simple heuristics to mimic human feedback. More explicitly, we combine multiple heuristics into a simple reward function that positively correlates its score with bounding box accuracy, i.e., boxes containing objects are scored higher than those without. We start from the detector's own predictions to explore the space and reinforce boxes with high rewards through gradient updates. Empirically, we demonstrate that our approach is not only more accurate, but also orders of magnitudes faster to train compared to prior works on object discovery.
Katie Z Luo, Zhenzhen Liu, Xiangyu Chen, Yurong You, Sagie Benaim, Cheng Perng Phoo, Mark Campbell, Wen Sun, Bharath Hariharan, Kilian Q. Weinberger
2023-10-29T17:03:12Z
http://arxiv.org/abs/2310.19080v2
# Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery ###### Abstract Recent advances in machine learning have shown that Reinforcement Learning from Human Feedback (RLHF) can improve machine learning models and align them with human preferences. Although very successful for Large Language Models (LLMs), these advancements have not had a comparable impact in research for autonomous vehicles--where alignment with human expectations can be imperative. In this paper, we propose to adapt similar RL-based methods to unsupervised object discovery, i.e. learning to detect objects from LiDAR points without any training labels. Instead of labels, we use simple heuristics to mimic human feedback. More explicitly, we combine multiple heuristics into a simple reward function that positively correlates its score with bounding box accuracy, i.e., boxes containing objects are scored higher than those without. We start from the detector's own predictions to explore the space and reinforce boxes with high rewards through gradient updates. Empirically, we demonstrate that our approach is not only more accurate, but also orders of magnitudes faster to train compared to prior works on object discovery. Code is available at [https://github.com/katieluo88/DRIFT](https://github.com/katieluo88/DRIFT). ## 1 Introduction Self-driving cars need to accurately detect the moving objects around them in order to move safely. Most modern 3D object detectors rely on supervised training from 3D bounding box labels. However, these 3D bounding box labels are hard to acquire from human annotation. Furthermore, this supervised approach relies on a pre-decided vocabulary of classes, which can cause problems when the car encounters novel objects that were never annotated. Our prior work, MODEST [55], introduced the first method to train 3D detectors without labeled data. In that work, we point out that instead of specifying millions of labels, one can succinctly describe _heuristics_ for what a good detector output should look like. For example, one can specify that detector boxes should mostly enclose transient foreground points rather than background ones; they should roughly be of an appropriate size; their sides should be aligned with the LiDAR points; their bottom should touch the ground, etc. Although such heuristics are great for _scoring_ a set of boxes proposed by a detector, training a detector on them is hard for two reasons: First, these heuristics are often non-linear, non-differentiable functions of the detector parameters (for example, a slight shift of the box can cause all foreground points to fall off.) Second, existing object detection pipelines use carefully designed training objectives that heavily rely on labeled boxes, that are difficult to modify (for example, PointRCNN [40] infers point labels from box labels and uses these for training). For these reasons, MODEST had to utilize an admittedly slow self-training pipeline to incrementally incorporate common-sense heuristics. In this paper, we propose a new reward ranking based framework that utilizes these common-sense heuristics directly as a reward signal. Our method relies on finetuning with reward ranking [31; 27; 13; 33], where given an initialized object detector, we finetune it for a predefined set of desirable detection properties. This bypasses the need to encode heuristics as differentiable loss functions and avoids the need to hand-engineer training paradigms for each kind of object detector. 
Recent success with reinforcement learning from human feedback (RLHF) has proven effective in improving machine learning models --in particular, large language models (LLMs)-- and aligning them with human preferences [31; 27]. However, these advancements have not been applicable to detection-based vision models that are trained with per-instance regression and are difficult to view under a probabilistic framework. To address this challenge, we utilize insights from reward ranked finetuning [13], a non-probabilistic paradigm designed for finetuning of LLMs, which inspired us to develop a similar framework for object discovery. We refer to our method as _Discovery from Reward-Incentivized Finetuning (DRIFT)_. DRIFT does not require labels, and instead uses the Persistency Prior (PP) score [55; 3] as a heuristic to identify dynamic foreground points based on historical traversals. These foreground points give rise to rough (and noisy) label estimates [55], which we use to pre-train our detector. The resulting detector performs poorly but suffices to propose many boxes of roughly the right sizes that we can use for exploration. To facilitate reward ranked finetuning, we first propose a reward function to score boxes. Ideally, only boxes that tightly contain objects (e.g. a car) should yield high rewards. We achieve this by combining several simple heuristics (e.g. high ratio of foreground points) and assuming some rough knowledge about the object dimensions. During each iteration of training, DRIFT performs the following steps: 1. the object detector proposes many boxes in a given point cloud scene; 2. the boxes are "jittered" through random perturbations (as a means of exploration); 3. the boxes are scored according to the reward function; 4. the top-\(k\%\) non-overlapping boxes are kept as pseudo-labels for gradient updates. We evaluate DRIFT on two large, real-world datasets [12; 24] and show that we significantly outperform prior self-training methods both in efficiency and generalizability. Experimental results demonstrate that using reward ranked finetuning for object discovery under our framework can quickly converge to a solution that is on par with out-of-domain supervised object detectors within a few training epochs, suggesting that DRIFT may point towards a more general unsupervised learning formulation for object detectors in an in-the-wild setting. ## 2 Related Works **3D Object Detection.** 3D object detection models usually take in LiDAR point clouds or multi-view images as input and aim to produce tight bounding boxes that describe nearby objects [9; 52; 26; 39; 40; 53; 34]. Existing methods generally assume the supervised setting, in which the detector is trained with human-annotated bounding boxes. However, annotated data are often expensive to obtain Figure 1: **Detection performance on Lyft test data as a function of training epochs. DRIFT demonstrates significantly stronger performance and faster learning. With only 9 hours of training, it outperforms both baselines that have been trained for days.** and limited in quantity. Furthermore, in tasks such as self-driving, environments can have highly varied conditions, and detectors with supervised training often require adaptation with additional labels from the new environment [47]. **Unsupervised Object Discovery**. The unsupervised object discovery task aims to identify and localize salient objects without learning from labels. 
Most existing works perform discovery from 2D images [8; 14; 45; 36; 2; 43; 48] or depth camera frames [21; 23; 17; 25; 20; 1]. Discovery from 3D LiDAR point clouds is underexplored. [49] performs joint unsupervised 2D detection and 3D instance segmentation from sequential point clouds and images based on temporal, motion and correspondence cues. MODEST [55] pioneers in performing label-free 3D object detection. It exploits high-level common sense properties from unlabeled data and bootstraps a dynamic object detector via repeated self-training. Despite promising performance, it requires excessive training time, which makes it difficult for practical use and development. **Reward Fine-Tuning for Model Alignment**. Recently, foundation models [6; 29; 11; 44; 35; 37] have been shown to achieve strong performance in diverse tasks [5; 50], but sometimes produce outputs that do not align with human values [18; 30; 10]. A line of research aims to improve model alignment under the paradigm of Reinforcement Learning with Human Feedback (RLHF). Some pioneering works [41; 31; 33] learn a reward model and train foundation models with Proximal Policy Optimization (PPO) [38], but PPO is often expensive and unstable to train, and more importantly, requires a probabilistic output on the action space. This makes it hard to use for the object detection setting, which primarily uses regression-based losses. Reward ranked finetuning [27; 13] is a simplified alternative paradigm. It samples from a foundation model itself, filters generations using the reward model, and conducts supervised finetuning with the filtered generations. ## 3 Discovery from Reward-Incentivized Finetuning Our framework, DRIFT, is inspired by the recent success of reward ranked finetuning methods for improving model alignment in the NLP community [13; 27]. We show that a similar approach can be adapted for 3D object discovery. **Problem Setup.** We wish to obtain a dynamic object detection model on LiDAR data, i.e., a model to detect mobile objects in the LiDAR point clouds, _without human annotations_. Let \(\mathbf{P}\in\mathcal{R}^{N\times 3}\) denote a \(N\)-point 3D point cloud captured by LiDAR from which we wish to discover objects. We assume inputs of _unlabeled_ point clouds collected by a car equipped with synchronized sensors including LiDAR (for point clouds) and GPS/INS (for accurate position and orientation). Since no annotation is involved, such a dataset is easy to acquire from daily driving routines; we additionally assume it to cover some locations with _multiple_ scans at different times for computation of PP-score. **Dynamic Point Proposals.** DRIFT leverages prior works that use unsupervised point clouds to extract foreground-background segmentation proposals. While many works [22; 3] have promising dynamic foreground segmentation results, in this work we rely on point _Persistency Prior score_ (PP-score) [55] for its accuracy and leave the extension of other proposal methods to future work. For the purpose of this research, dynamic foreground points constitute LiDAR points reflecting off traffic participants (e.g. cars, bicyclists, pedestrians). Using historical LiDAR sweeps collected at nearby locations of our point cloud \(\mathbf{P}\), the PP-score [3; 55]\(\tau(\mathbf{P})\in\left[0,1\right]^{N}\) can provide an informative estimate on the per-point persistence, i.e., whether a point belongs to persistent background or not. 
The PP-score is defined as the normalized entropy over past point densities, based on the assumption that background space such as ground, trees, and buildings tend to exhibit consistent point densities across different LiDAR scans (high entropy), whereas points associated with mobile objects exhibit high density only if an object is present (low entropy). ### Rewarding "Good" Dynamic Boxes We first establish a reward function that evaluates the quality of a set of bounding boxes for dynamic objects in a scene. We denote a set of \(M\) dynamic objects bounding boxes as \(\mathcal{B}=\{\mathbf{b}_{1},\dots,\mathbf{b}_{M}\}\), where each bounding box \(\mathbf{b}_{i}\) is represented as an upright box with parameters \((x_{i},y_{i},z_{i},w_{i},l_{i},h_{i},\theta_{i})\), defining the box's center, width, length, height, and z-rotation, respectively. The scoring function \(R\) scores the validity of the bounding boxes, given the observed point cloud \(\mathbf{P}\). In practice, a reward function that is positively correlated with IoU should suffice. We present our proposed reward function which aims to capture only dynamic points, filter nonsensical boxes, enforce correct size, and encourage proper box alignment to the captured dynamic points. **Shape Prior Reward.** We enforce a box to not deviate significantly from a set of prototype sizes \(\mathcal{S}=\{(\overline{w}_{1},\bar{l}_{1},\bar{h}_{1}),\ldots,(\overline{w}_{C},\bar{l}_{C},\bar{h}_{C})\}\) (Fig. 2 left). We assume the shape prior distribution is a mixture of \(C\) isotropic Gaussians with mixture weights \(\pi_{i}\), diagonal variances \(\mathbf{\Sigma}_{i}\), and corresponding means as \((\overline{w}_{i},\bar{l}_{i},\bar{h}_{i})\). These low-level statistics may be acquired directly from the dataset, or from vehicle specs and sales data available online [47]. In practice, we scale the mixtures such that the probabilities at the Gaussian means are equal for stability reasons. With this, the shape prior reward for box \(\mathbf{b}\) is computed as: \[R_{\text{shape}}(\mathbf{b})=P_{\mathcal{S}}(\mathbf{b}). \tag{1}\] **Alignment Reward.** Due to the nature of LiDAR sensing, the points will mostly fall on the lateral surfaces of an object. Therefore, a well-formed box should have dynamic points approximately close to the _boundary_ of a box (Fig. 2 middle). As [55] shows, PP-score allows for easy separation of dynamic and persistent background points. Let \(\mathbf{P}_{\text{dyn}}\) denote the set of dynamic points, and let \(\mathbf{P}_{\text{bg}}\) be that of background points. In practice, since the PP-score is an approximation of ground-truth persistence, we define \(\mathbf{P}_{\text{dyn}}=\{\mathbf{p}|\tau(\mathbf{p})<0.6\}\) and \(\mathbf{P}_{\text{bg}}=\{\mathbf{p}|\tau(\mathbf{p})\geq 0.9\}\). Given a box \(\mathbf{b}\), we denote all points within and close to the box as \(\mathcal{O}(\mathbf{b})\). Only points within \(\mathcal{O}(\mathbf{b})\) contribute to the reward of \(\mathbf{b}\). In practice, we let \(\mathcal{O}(\mathbf{b})\) consist of all points within a \(\times 2\) scaled up version of \(\mathbf{b}\) (with identical center and rotation). To score a box \(\mathbf{b}\), we design a reward function that identifies how "typical" the dynamic points within \(\mathcal{O}(\mathbf{b})\) are. 
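The two ingredients introduced so far -- the PP-score used to split \(\mathbf{P}_{\text{dyn}}\) from \(\mathbf{P}_{\text{bg}}\) and the shape-prior reward of Eq. (1) -- can be sketched as follows; the voxel-based estimate of the PP-score, the prototype dimensions and variances, the treatment of previously unseen voxels, and the rescaling of each mixture component to a common peak value are illustrative assumptions of this sketch rather than the exact implementation.

```python
import numpy as np

def pp_score(query_xyz, traversals, voxel=1.0):
    """Persistency Prior score: normalized entropy of point counts over past
    traversals, read off from the voxel containing each query point."""
    def keys(xyz):
        return [tuple(k) for k in np.floor(np.asarray(xyz) / voxel).astype(int)]
    counts = {}
    for t, sweep in enumerate(traversals):
        for k in keys(sweep):
            counts.setdefault(k, np.zeros(len(traversals)))[t] += 1
    T = len(traversals)
    scores = np.zeros(len(query_xyz))          # unseen voxels treated as non-persistent here
    for i, k in enumerate(keys(query_xyz)):
        if k in counts:
            p = counts[k] / counts[k].sum()
            ent = -(p[p > 0] * np.log(p[p > 0])).sum()
            scores[i] = ent / np.log(T)        # tau in [0, 1]
    return scores

# Shape-prior reward, Eq. (1): mixture of isotropic Gaussians over (w, l, h),
# with each component rescaled to the same peak value, as mentioned in the text.
PROTOTYPES = np.array([[1.9, 4.6, 1.7],        # car-like (illustrative numbers)
                       [0.8, 0.8, 1.7],        # pedestrian-like
                       [0.8, 1.8, 1.5]])       # cyclist-like
SIGMAS = np.array([0.4, 0.2, 0.3])

def shape_prior_reward(box_whl, prototypes=PROTOTYPES, sigmas=SIGMAS):
    d2 = ((np.asarray(box_whl) - prototypes) ** 2).sum(axis=1)
    return float(np.mean(np.exp(-d2 / (2 * sigmas ** 2))))

# Example: a static wall seen in both traversals vs. a cluster present only once.
wall = np.stack([np.linspace(0, 10, 50), np.zeros(50), np.zeros(50)], axis=1)
car = np.array([[4.0, 3.0, 0.5], [4.5, 3.0, 0.5], [5.0, 3.0, 0.5]])
query = np.vstack([wall[:3], car])
print(pp_score(query, traversals=[wall, np.vstack([wall, car])]))
print(shape_prior_reward([1.8, 4.5, 1.6]))
```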
For each dynamic point \(\mathbf{p}\in\mathcal{O}(\mathbf{b})\cap\mathbf{P}_{\text{dyn}}\), we compute the scaling factor \(s_{\mathbf{p},\mathbf{b}}\) required so that the rescaled box touches \(\mathbf{p}\) with one of its sides; i.e., if a point is inside the box, the box would have to be scaled down (\(s_{\mathbf{p},\mathbf{b}}<1\)) to touch the point, if the point is outside it must be scaled up (\(s_{\mathbf{p},\mathbf{b}}>1\)). We assume that \(s_{\mathbf{p},\mathbf{b}}\) roughly follows a Gaussian distribution centered near the box boundary, and visualize the actual distribution in Fig. 3. We define our reward as the likelihood under this Gaussian distribution over scaling parameters. We approximate it as a Gaussian with hyper-parameters mean \(\mu_{\text{scale}}\) and a variance \(\sigma_{\text{scale}}\). Our reward is the product of the probability of each point in \(\mathcal{O}(\mathbf{b})\): \[R_{\text{align}}(\mathbf{b})=\prod_{\mathbf{p}\in o(\mathbf{b})\cap\mathbf{P}_{\text{dyn}}} \mathcal{N}(s_{\mathbf{p},\mathbf{b}}|\mu_{\text{scale}},\sigma_{\text{scale}}). \tag{2}\] **Common Sense Heuristics and Filtering.** Lastly, a proper bounding box must capture the dynamic points, and avoid capturing the background points (Fig. 2, right). This heuristic can be encoded by a Figure 3: **Distribution of dynamic points near ground truth bounding boxes.** We observe that dynamic points near bounding boxes fall in an approximate Gaussian distribution centered near box edges (\(s_{\mathbf{p},\mathbf{b}}\approx 0.8\)). Figure 2: **Illustration of the reward components.** The reward encourages boxes that have proper shape and alignment, and capture more dynamic points and few background points. simple weighted point count for each bounding box \(\mathbf{b}\): \[R_{\text{count}}(\mathbf{b})=\lambda_{\text{dyn}}\cdot|\mathbf{P}_{\text{dyn}}\cap \mathcal{O}(\mathbf{b})|-\lambda_{\text{bg}}\cdot|\mathbf{P}_{\text{bg}}\cap\mathcal{O} (\mathbf{b})|.\] Intuitively, it assigns a reward in proportion to the number of dynamic points captured by the box, and a penalty in proportion to the number of background points captured. Furthermore, boxes violating common sense should be assigned a low reward. We filter boxes that are too high up or too low from the ground, including those with too few dynamic points, or are too small or too large by directly assigning a reward of 0. In practice, we filter boxes that contain fewer than 4 dynamic points, or that have more than 80% persistent points, similar to [55]. In summary, the reward function is designed to be \[R(\mathbf{b})=\begin{cases}\lambda_{\text{shape}}\cdot R_{\text{shape}}(\mathbf{b})+ \lambda_{\text{align}}\cdot R_{\text{align}}(\mathbf{b})+R_{\text{count}}(\mathbf{b})& \text{if obeys common sense,}\\ 0&\text{otherwise.}\end{cases} \tag{3}\] ### Exploration Strategy for Improved Discovery We assume a simple exploration strategy for identifying good box proposals. Given a set of current object proposals given by the detector, we locally perturb the boxes in the output space: \[\mathcal{B}_{\text{explore}}\sim\mathcal{P}\left(\mathcal{B}_{0},\sigma \right), \tag{4}\] where \(\mathcal{B}_{\text{explore}}\) is the set of explored boxes, perturbed from the model proposals \(\mathcal{B}_{0}\). 
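To make the reward concrete, the sketch below evaluates Eqs. (1)–(3) for a single box; the function names, the simplified peak-equalised mixture, and the way the per-point scaling factors are passed in are our own illustrative choices rather than the released implementation.

```python
import numpy as np

def shape_prior_reward(box_whl, prototype_means, prototype_stds):
    # Eq. (1), simplified: mixture of isotropic Gaussians over (w, l, h),
    # each component rescaled so its value at its mean is 1 (a stand-in for
    # the peak-equalisation described above).
    box_whl = np.asarray(box_whl, dtype=float)
    comps = [np.exp(-0.5 * np.sum(((box_whl - np.asarray(m)) / np.asarray(s)) ** 2))
             for m, s in zip(prototype_means, prototype_stds)]
    return float(np.mean(comps))

def alignment_reward(scale_factors, mu_scale=0.8, sigma_scale=0.2):
    # Eq. (2): product of Gaussian likelihoods of the per-point scaling
    # factors s_{p,b} for dynamic points in O(b).
    z = (np.asarray(scale_factors, dtype=float) - mu_scale) / sigma_scale
    return float(np.prod(np.exp(-0.5 * z ** 2) / (sigma_scale * np.sqrt(2 * np.pi))))

def count_reward(n_dyn, n_bg, lam_dyn=1e-3, lam_bg=1e-3):
    # Weighted point count: reward captured dynamic points, penalise background ones.
    return lam_dyn * n_dyn - lam_bg * n_bg

def box_reward(box_whl, scale_factors, n_dyn, n_bg, obeys_common_sense,
               prototype_means, prototype_stds, lam_shape=1.0, lam_align=1.0):
    # Eq. (3): total reward, zeroed for boxes violating the common-sense filters.
    if not obeys_common_sense:
        return 0.0
    return (lam_shape * shape_prior_reward(box_whl, prototype_means, prototype_stds)
            + lam_align * alignment_reward(scale_factors)
            + count_reward(n_dyn, n_bg))
```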
For each box \(\mathbf{b}\in\mathcal{B}_{0}\), a set of explored boxes are sampled according to a standard Gaussian noise along the position and size dimensions and uniform noise for orientation: \[\mathbf{b}_{\text{explore}}^{\text{center}}\sim\mathcal{N}(\mathbf{b}_{0}^{\text{ center}},\sigma^{2}\mathbf{I}),\ \mathbf{b}_{\text{explore}}^{\text{size}}\sim\mathcal{N}(\mathbf{b}_{0}^{\text{size}}, \sigma^{2}\mathbf{I}),\ \mathbf{b}_{\text{explore}}^{\theta}\sim\mathcal{U}(\mathbf{b}_{0}^{\theta}- \sigma,\mathbf{b}_{0}^{\theta}+\sigma). \tag{5}\] Furthermore, to encourage proposals of boxes in foreground regions, we take inspiration from [56; 40] and re-use PP-score as _point-level_ semantic segmentation (foreground vs background) labels, with which the detector is encouraged to propose boxes at points that have low PP-scores (i.e.are likely to be foreground points). Following [56], for each point \(\mathbf{p_{i}}\in\mathbf{P}\) with prediction \(\hat{\mathbf{y_{i}}}\), we assign its target classification label \(\mathbf{y_{i}}\) as: \[\mathbf{y_{i}}=\begin{cases}\mathbf{1}&\text{if }\tau(\mathbf{p_{i}})<\tau_{L}\text{ or }\hat{\mathbf{y_{i}}}=\mathbf{1},\\ \mathbf{0}&\text{otherwise.}\end{cases} \tag{6}\] In effect, this encourages all non-persistent points (i.e., low \(\tau(\mathbf{p_{i}})\)) to propose boxes near dynamic regions for better exploration. ### Reward-Incentivized Finetuning The reward function \(R\) allows us to quickly evaluate proposed bounding boxes \(\mathcal{B}\) and the task of 3D object discovery could be reduced to an optimization problem on the total reward in box set space: \[\mathcal{B}^{*}=\operatorname*{arg\,max}_{\mathcal{B}}\sum_{\mathbf{b}\in\mathcal{ B}}R(\mathbf{b}), \tag{7}\] where the sum is taken over the boxes in the set \(\mathcal{B}\). Although a direct optimization for \(\mathcal{B}^{*}\) is not plausible due to the non-polynomial search space and discontinuity in \(R\), \(R\) can serve as effective guidance to facilitate model finetuning. The underlying intuition is similar to curriculum learning [4; 28; 46]: the object detection model takes small steps to improve from its current predictions towards \(\mathcal{B}^{*}\) by following the direction provided by the maximum \(R\) direction in a local space. As illustrated in Alg. 1, in each training iteration, we first let the object detector perform inference on a point cloud \(\mathbf{P}\) and propose a set of dynamic objects \(\mathcal{B}_{0}\) in the scene. To explore directions of improvement with the non-differentiable reward function \(R\), we sample \(n\) boxes from \(\mathcal{B}_{0}\) (with replacement) and add an _i.i.d._ Gaussian noise on their location and size, and an uniform noise on orientation following Eq. 4. These sampled boxes are then ranked by the reward function \(R\), in which the top \(k\) non-overlapping boxes are selected by Non-Maximum Suppression (NMS) as training targets to finetune the object detector. Note that since DRIFT treats the model training/inference procedures as black boxes, it can be applied to any 3D object detection model. In practice, it is observed that neural networks can acquire task knowledge from imperfect demonstrations [16; 51; 32]. MODEST [55] pre-trained the 3D object detector on noisy seed labels produced by DBSCAN [15] clustering on spatial and PP-score. We follow [55] and initialize our 3D object detector a model trained with discovered seed labels. 
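A minimal sketch of one exploration-and-ranking step (the core of Alg. 1) is given below; the `reward_fn` and `nms` arguments are placeholders for the reward of Eq. (3) and a standard bird's-eye-view non-maximum suppression, and the interface is assumed for illustration rather than taken from the released code.

```python
import numpy as np

def explore_and_rank(proposals, reward_fn, nms, n_samples=200, sigma=0.3, k=20, seed=0):
    """proposals: (M, 7) array of boxes (x, y, z, w, l, h, theta) from the detector."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(proposals), size=n_samples)           # sample with replacement
    explored = proposals[idx].astype(float)
    explored[:, :6] += rng.normal(0.0, sigma, size=(n_samples, 6))  # i.i.d. noise on centre and size (Eq. 5)
    explored[:, 6] += rng.uniform(-sigma, sigma, size=n_samples)    # uniform noise on orientation
    rewards = np.array([reward_fn(b) for b in explored])
    order = np.argsort(-rewards)                                    # rank by reward
    keep = nms(explored[order], rewards[order], top_k=k)            # indices of top-k non-overlapping boxes
    return explored[order][keep]                                    # used as finetuning targets
```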
## 4 Experiments **Datasets.** We experimented with two different datasets: Lyft Level 5 Perception dataset [24] and Ithaca-365 dataset [12]. To the best of our knowledge, these are the two publicly available datasets that contain multiple traversals of multiple locations with accurate 6-DoF localization and 3D bounding box labels for traffic participants. In the Lyft dataset, we experiment with the same split provided by [55], where the train set and test set are geographically separated. It consists of 11,873 train scenes and 4,901 test scenes. For the Ithaca365 dataset, we experimented with the full dataset which consists of 57,107 scenes for training and 1,644 for testing. For both datasets, we do not use any human-annotated labels in training. To show the generalizability of our method, we conduct the development on the Lyft dataset, i.e., all the hyperparameters of our approach are finalized through experiments on Lyft, and we use the exact same set of hyperparameters for all experiments in Ithaca365. **Evaluation.** Following [55], we combine all traffic participants to a single mobile object class and evaluate the detector's performance on this class. Note that the labels are not used during training but solely for evaluation. For Lyft, we report the mean average precision (mAP) of the detector with the intersection over union (IoU) thresholds at 0.5 and 0.7 in bird-eye-view perspective. Note that mAP at 0.7 IoU threshold is a stricter and harder metric and was not evaluated in [55], and we include it to emphasize the effectiveness of our method. For Ithaca365, we adopt metrics similar to those in [7]: we evaluate mean average precision (mAP) for dynamic objects under \(\{0.5,1,2,4\}\)m thresholds that determine the match between detection and ground truth; we also compute 3 types of true positive metrics (TP metrics), including ATE, ASE and AOE for measuring translation, scale and orientation errors. These TP metrics are computed under a match distance threshold of 2m; additionally, we also compute a distance-based breakdown (0-30m, 30-50m, 50-80m) for these metrics. **Implementation.** We use PointRCNN [40] as our default architecture and we use the implementation provided by OpenPCDet [42]. We train DRIFT with 120 epochs in Lyft and 30 epochs in Ithaca365 as the default setting, and observe that the performance generally improves with more training epochs (Fig. 1). We use \(\lambda_{shape}=1\), \(\lambda_{align}=1\), \(\lambda_{dyn}=0.001\) and \(\lambda_{bg}=0.001\). We use \(\mu_{scale}=0.8\) and \(\sigma_{scale}=0.2\) for the alignment reward. We define the shape priors based on four typical types of traffic participants: Car, Pedestrian, Cyclist, and Truck. Specifically, we use the mean and standard deviation of box sizes of each class in the Lyft dataset, but we show that they generalize well to other domains like Ithaca365 and are not sensitive (Tab. 2) The exact prototype sizes \(\mathcal{S}\) and other implementation details can be found in the supplementary materials. 
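For reference, the hyperparameters stated above can be collected into a single configuration sketch; the key names are ours and do not correspond to the released configuration files.

```python
# Illustrative summary of the stated DRIFT settings (naming is ours).
DRIFT_CONFIG = {
    "detector": "PointRCNN (OpenPCDet)",
    "epochs": {"lyft": 120, "ithaca365": 30},
    "reward_weights": {"lambda_shape": 1.0, "lambda_align": 1.0,
                       "lambda_dyn": 1e-3, "lambda_bg": 1e-3},
    "alignment": {"mu_scale": 0.8, "sigma_scale": 0.2},
    "shape_prior_classes": ["Car", "Pedestrian", "Cyclist", "Truck"],
    "pp_score_thresholds": {"dynamic": 0.6, "background": 0.9},
}
```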
**Baselines.** To the best of our knowledge, MODEST [55] is the only prior work on this problem and we compare our method DRIFT against it with various variants of MODEST: (1) No Finetuning: the model trained with seed labels from PP-score without repeated self-training (MODEST (R0)) in [55]; (2) Self-Training (_i_ ep): the model initialized with (1) and self-trained with \(i\) epochs without PP-score filtering; (3) MODEST (_i_ ep): the model initialized with (1) and self-trained with \(i\) epochs with PP-score filtering (full MODEST model). For self-training in (2) and (3), we adopt 60 epochs for each self-training round in the Lyft dataset (same as that in [55]) and 30 epochs for the Ithaca365 dataset. To ensure a fair comparison, DRIFT is also initialized from (1) and use the same detector configurations as the baselines. Following [55], we also compare with the supervised counterparts trained with human-annotated labels from the same dataset (Lyft or Ithaca365) and from another out-of-domain dataset (KITTI). **Dynamic Object Detection Results.** We report the performance of DRIFT and baseline detectors on Lyft in Tab. 1, and show the performance over the training epochs in Fig. 1. We report the performance on Ithaca365 in Tab. 2. Notably, DRIFT demonstrates significantly faster learning and strong performance. It provides more than 10\(\times\) speedup as compared to the baselines. On Lyft, DRIFT's performance at 60 epochs already surpasses the performance of both baselines at 600 epochs (10 self-training rounds) and approaches the performance of the out-of-domain supervised detector trained on KITTI [19]. On Ithaca365, its performance at 30 epochs significantly surpasses both baselines trained at 300 epochs. It even outperforms the out-of-domain supervised detector trained on KITTI in mAP. Observe that the self-training performance starts collapsing with more rounds of self-training, and does not continue to improve. Fig. 4 visualizes the detection on two scenes. Ground truth boxes are colored in green, predictions from the detector without fine-tuning are in yellow, and predictions from DRIFT are in red. We observe that the detector without fine-tuning occasionally produces false positive predictions, produces boxes with incorrect sizes, or misses moving objects, while DRIFT produces accurate detection. **Rewards ablations.** We report the average reward per box for ground truth boxes, random boxes, and predicted boxes from different detectors in Tab. 3. The ground truth boxes have the highest rewards on average, while the random boxes have the lowest. This indicates that the reward reasonably reflects \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{mAP IoU @ 0.5 (\(\uparrow\))} & \multicolumn{3}{c}{mAP IoU @ 0.7 (\(\uparrow\))} \\ \cline{2-9} Method & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\ \hline No Finetuning & 44.1 & 21.1 & 1.2 & 23.9 & 24.4 & 6.0 & 0.1 & 10.5 \\ \hline Self-Train. (60 ep) & 50.0 & 29.0 & 3.4 & 28.6 & 32.5 & 10.0 & 0.3 & 14.0 \\ Self-Train. (600 ep) & 56.7 & 41.1 & 9.1 & 37.2 & 35.1 & 20.7 & 1.6 & 19.9 \\ MODEST (60 ep) & 49.6 & 29.7 & 3.4 & 28.8 & 31.3 & 10.2 & 0.3 & 14.4 \\ MODEST (600 ep) & 56.4 & **45.4** & 11.3 & 39.6 & 33.6 & 18.6 & 1.4 & 18.8 \\ DRIFT (30 ep) & 60.1 & 40.2 & 9.1 & 38.3 & 39.0 & 24.2 & 3.6 & 23.1 \\ DRIFT (60 ep) & 60.3 & 43.8 & 14.6 & 41.8 & 42.0 & 29.2 & 5.8 & 26.7 \\ DRIFT (120 ep) & **61.4** & 45.1 & **21.7** & **45.3** & **42.7** & **31.7** & **9.9** & **29.6** \\ \hline Sup. 
on KITTI & 71.9 & 49.8 & 22.2 & 49.9 & 47.0 & 26.2 & 6.4 & 27.9 \\ Sup. on Lyft & 76.9 & 60.2 & 37.5 & 60.4 & 62.7 & 50.9 & 28.2 & 48.5 \\ \hline \hline \end{tabular} \end{table} Table 1: **Detection performance on Lyft. DRIFT outperforms both baselines with 10% training time, and approaches the performance of the out-of-domain supervised detector trained on KITTI. Please refer to the setup of Sec. 4 for the metrics.** \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{mAP (\(\uparrow\))} & \multicolumn{3}{c}{Errors 0-80m (\(\downarrow\))} \\ \cline{2-7} Method & 0-30 & 30-50 & 50-80 & 0-80 & ATE & ASE & AOE \\ \hline No Finetuning & 18.7 & 4.8 & 0.0 & 7.7 & 1.17 & 0.60 & 1.64 \\ \hline Self-Train. (30 ep) & 25.9 & 9.2 & 1.2 & 12.4 & 1.08 & 0.62 & 1.57 \\ Self-Train. (300 ep) & 16.3 & 3.6 & 1.8 & 6.8 & 1.19 & 0.74 & 1.57 \\ MODEST (30 ep) & 14.6 & 0.7 & 0.0 & 3.7 & 0.83 & 0.52 & 1.53 \\ MODEST (300 ep) & 27.5 & 26.3 & 21.0 & 27.1 & 1.06 & 0.67 & **1.09** \\ DRIFT (15 ep) & 39.1 & 24.3 & 17.7 & 28.0 & 0.73 & **0.33** & 1.23 \\ DRIFT (30 ep) & **47.1** & **31.2** & **22.9** & **35.1** & **0.49** & 0.35 & 1.20 \\ \hline Sup. on KITTI & 59.8 & 28.3 & 4.0 & 32.0 & 0.26 & 0.22 & 0.46 \\ Sup. on Ithaca365 & 75.7 & 48.3 & 22.6 & 51.5 & 0.18 & 0.13 & 0.33 \\ \hline \hline \end{tabular} \end{table} Table 2: **Detection performance on Ithaca365. We observe DRIFT outperforms both baselines with significantly less training time. Please refer to the setup of Sec. 4 for the metrics.** the quality of the bounding box. And we observe the boxes predicted by DRIFT have higher rewards than those predicted by the baseline detectors. Ablation study on the components of our reward is presented in Tab. 4, and visualization is shown in Fig. 5. Detection performance significantly drops when we remove one or more of the components. For example, when only common sense filtering is used, the detector just predicts boxes around foreground points. Without the shape prior reward, the detector predicts boxes with incorrect sizes. Ablations Tab. 5 and Tab. 6 present the sensitivity analysis of the choices of \(\mu_{\text{scale}}\) and \(\sigma_{\text{scale}}\). DRIFT achieves stable performance across reasonable choices of \(\mu_{scale}\) and \(\sigma_{\text{scale}}\), showing the robustness of our method. **Exploration.** We study the necessity of the exploration component and the effect of incorporating other sources for box sampling. In Tab. 7, we compare no exploration to: (1) sampling 200 boxes from box predictions, (2) sampling 100 from proposals near dynamic points and 100 from predictions, and (3) sampling 100 from seed labels and 100 from predictions. Observe that the exploration component is crucial for our method; by performing local exploration instead of simply updating from its own predictions, DRIFT avoids confirmation bias and ensures that labels improve over what it predicts. Furthermore, results show that sampling from the box predictions is sufficient for obtaining good performance; other sources do not provide obvious benefits. We also explore the effects of modifying the exploration strategy. Tab. 8 compares the detector performance of using sample size of 50, 100 and 200, and noise scale \(\sigma\) (i.e.variation) of 0.3 vs. 0.6. Each detector is trained for 30 epochs. At noise scale 0.3, increasing the sample size from 50 to 200 significantly improves the detection performance. 
Using noise scale 0.6 significantly reduces the detection performance, indicating that smaller noise may be preferable. **Filtering Budget of the Ranked Boxes.** We study the effect of the choice of top \(k\%\) for filtering boxes by reward ranking. Tab. 9 presents the detection performance with top 55%, 65%, 75% and 85%. DRIFT is robust to the choice of \(k\), with slightly decreased performance when \(k\) is too high. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Filter} & \multirow{2}{*}{Shape} & \multirow{2}{*}{Align.} & mAP (0 - 80m) \\ \cline{3-3} & & & IoU 0.5 & IoU 0.7 \\ \hline \(\sigma_{scale}\) & & IoU 0.5 & IoU 0.7 \\ \hline \(0.8\) & 38.3 & 23.1 \\ \(0.9\) & 38.1 & 23.4 \\ \hline \hline \end{tabular} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{\(\mu_{scale}\)} & mAP (0 - 80m) \\ \cline{3-3} & IoU 0.5 & IoU 0.7 \\ \hline \(0.1\) & 34.2 & 20.9 \\ \(0.2\) & 38.3 & 23.1 \\ \(0.3\) & 38.1 & 20.0 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation on the reward components. We report the mAP (0-80m) on Lyft. Removing components significantly degrades performance. Figure 4: **Visualization of detections.** Qualitative results on two scenes from Lyft [24] and Ithaca365 [12] datasets. Ground truth boxes are labeled with green in the LiDAR figures and predictions without fine-tuning and DRIFT are in yellow and red, respectively. We observe DRIFT learns to produce accurate detection with correct shape and localization. **Extension to Detection with Classes.** Our prototype sizes are defined by different classes of traffic participants. Thus, given a predicted box from a class agnostic detector, we can compute the likelihood of its size under the Gaussian prior of each prototype size, and assign it to the class with the highest likelihood. Fig. 6 presents the per-class performance of DRIFT and baselines under such assignment. All detectors are trained for 60 epochs. DRIFT outperforms both baselines, with especially significant improvement for the Car and Cyclist classes. More details can be found in the supplementary materials. ## 5 Discussion and Conclusion In this work, we propose a framework, DRIFT, to tackle object discovery without labels. Instead of requiring expensive 3D bounding box labels, our method utilizes succinctly described heuristics as a reward signal to initialize and subsequently fine-tune an object detector. To optimize this non-differentiable reward signal, we propose a simple but very effective reward finetuning framework, inspired by recent successes of reinforcement learning in the NLP community. Compared to prior self-training based methods [55], such a framework is an order of magnitude faster to train, while achieving higher accuracy. Traditional self-training iteratively generates pseudo-labels and retrains the model, requiring convergence before generating the next set of pseudo-labels. In general, training a detector to mimic pseudo-labels can lead to undesirable artifacts, further amplified by repeated training (confirmation bias). DRIFT addresses this issue by leveraging reinforcement learning principles, where the exploration component is crucial. Our method avoids confirmation bias by performing local exploration and ensures that labels improve over what it predicts. Thus, DRIFT is able to perform updates per-training iteration as opposed to per self-training round, which allows it to converge significantly faster and achieve higher performance. 
**Limitations and Future Works.** One limitation is that the current framework is geared explicitly towards dynamic objects. Static objects would require different heuristics, not based on PP-scores. Similarly, the framework is currently restricted entirely to LiDAR signals. However, the reward-based framework is extremely flexible, and could easily be extended to other data modalities. For example, one could use image features to help identify objects inside box proposals. Although in supervised settings image features have typically added little on top of the higher-resolution LiDAR point clouds, in our unsupervised setting it is certainly possible that pixel information can help disambiguate objects from background. Further, we plan to explore the use of reward fine-tuning for other vision applications beyond object discovery. ## Acknowledgments and Disclosure of Funding This research is supported in part by grants from the National Science Foundation (III-2107161, IIS 2144117, and IIS-1724282), Nvidia Research, and DARPA Geometries of Learning (HR00112290078). We also thank Wei-Lun Chao, Yihong Sun, Travis Zhang, and Junan Chen for their insightful discussions and valuable feedback.
2305.12363
Instance-Level Semantic Maps for Vision Language Navigation
Humans have a natural ability to perform semantic associations with the surrounding objects in the environment. This allows them to create a mental map of the environment, allowing them to navigate on-demand when given linguistic instructions. A natural goal in Vision Language Navigation (VLN) research is to impart autonomous agents with similar capabilities. Recent works take a step towards this goal by creating a semantic spatial map representation of the environment without any labeled data. However, their representations are limited for practical applicability as they do not distinguish between different instances of the same object. In this work, we address this limitation by integrating instance-level information into spatial map representation using a community detection algorithm and utilizing word ontology learned by large language models (LLMs) to perform open-set semantic associations in the mapping representation. The resulting map representation improves the navigation performance by two-fold (233%) on realistic language commands with instance-specific descriptions compared to the baseline. We validate the practicality and effectiveness of our approach through extensive qualitative and quantitative experiments.
Laksh Nanwani, Anmol Agarwal, Kanishk Jain, Raghav Prabhakar, Aaron Monis, Aditya Mathur, Krishna Murthy, Abdul Hafez, Vineet Gandhi, K. Madhava Krishna
2023-05-21T06:26:35Z
http://arxiv.org/abs/2305.12363v3
# Instance-Level Semantic Maps for Vision Language Navigation ###### Abstract Humans have a natural ability to perform semantic associations with the surrounding objects in the environment. This allows them to create a mental map of the environment, allowing them to navigate on-demand when given linguistic instructions. A natural goal in Vision Language Navigation (VLN) research is to impart autonomous agents with similar capabilities. Recent works take a step towards this goal by creating a semantic spatial map representation of the environment without any labeled data. However, their representations are limited for practical applicability as they do not distinguish between different instances of the same object. In this work, we address this limitation by integrating instance-level information into spatial map representation using a community detection algorithm and utilizing word ontology learned by large language models (LLMs) to perform open-set semantic associations in the mapping representation. The resulting map representation improves the navigation performance by two-fold (233%) on realistic language commands with instance-specific descriptions compared to the baseline. We validate the practicality and effectiveness of our approach through extensive qualitative and quantitative experiments. ## I Introduction Advancements in machine learning research have brought about rapid changes in the field of robotics, allowing for the development of sophisticated autonomous agents. However, making this technology practically viable for large-scale adoption requires a natural mechanism to interact with humans. Vision Language Navigation (VLN) research aims to achieve this goal by incorporating natural language understanding into autonomous agents to navigate the environment based on linguistic commands. Prior approaches to VLN have addressed this task by harnessing the capabilities of visual grounding models, which allow the navigating agents to localize objects in the visual scene or directly ground navigable regions based on linguistic descriptions. However, these approaches fail to address linguistic commands which require spatial precision to identify the goal region. Furthermore, these approaches assume that the object referred to by the linguistic command is always visible in the current scene. Such an assumption rarely holds in realistic scenarios, where things can move in or out of the current scene as we navigate the environment. Consider the example in Figure 1 with the language command, "walk to the fourth chair in your field of view". To execute this command, we first need to explore the entire room to find all instances of chairs and then find the fourth instance from where the command was given. For visual grounding-based approaches, it is non-trivial to handle such scenarios as there is no way to rank the localized chairs based on distance. To counteract the above issues, geometric maps, which create a global mapping of the surrounding environment, provide a direct mechanism to ground all the objects present in the scene, including those not visible in the current view, and additionally, are readily amenable for planning and navigation purposes. In this work, we propose a memory-efficient mechanism for creating a semantic spatial representation of the environment, which is directly applica ble to robots navigating in real-world scenes. 
Recent works like VLMaps and NLMap [2] propose a mechanism to build semantic spatial maps without any labeled data by fusing pre-trained vision-language features with the 3D point cloud of the physical world. They compute the similarity between visual and linguistic features in a common semantic space of a large-scale pre-trained vision-language model and utilize large-language models to convert the natural language command to a sequence of navigation goals for planning. However, their map representation doesn't allow them to differentiate between different instances of the same object and hence handle language queries that describe an instance-specific navigation goal, like the ones mentioned in Figure 2, as the visual encodings are instance-agnostic. Moreover, their mechanism is memory intensive as they require high-dimensional feature embeddings to make semantic associations for the objects in the visual scene. Our work focuses on creating spatial maps of the environment with instance-level semantics. We achieve this in a memory-efficient manner, bypassing the use of feature embeddings altogether. We show that **S**emantic **I**nstance **M**aps (SI Maps) are computationally efficient to construct and allow for a wide range of complex and realistic commands that evade prior works. ## II Related Work ### _Semantic Mapping_ With the recent progress in computer vision and natural language processing literature, there has been considerable interest in augmenting the semantic understanding of traditional SLAM algorithms. Earlier works like SLAM++ [3] propose an object-oriented SLAM, which utilizes prior knowledge about the domain-specific objects and structures in the environment. Later works like [4] assign instance-level semantics using Mask-RCNN [5] to 3D volumetric maps. Some methods [1, 2] have also explored transferring predictions from CNNs in 2D pixel space to 3D space for 3D reconstruction. Concurrent to our work, [6] proposes a deep reinforcement learning-based approach for multi-object instance navigation, albeit without linguistic commands. VLMaps [1] and NLMap-Saycan [2] propose a natural language queryable scene representation with Visual Language models (VLMs). These methods utilize large-language models (LLMs) to parse language instructions and identify the involved objects to query the scene representation for object availability and location. ### _Instance Segmentation_ The ability to identify and localize different instances of similar objects is crucial for visual perception tasks in robotics. In the Computer Vision literature, the task of instance segmentation serves to evaluate such capabilities formally. Earlier works [5] utilized region proposal networks to predict candidate bounding boxes followed by a mask head to regress the instance-level segmentation mask for each proposal. While initial approaches designed task-specific architectures, more recent methods [7] have moved towards generalized architectures for different image segmentation tasks like semantic, instance, and panoptic segmentation. Mask2Former [7] employs attention mechanism to extract localized object-centric features in an end-end manner. In this work, we utilize segmentation masks from Mask2Former to create instance-level semantic maps which are directly amenable for planning during autonomous navigation. 
### _Vision Language Navigation_ Most of the work in Vision Language Navigation (VLN) has focused on navigating in the environment using semantic perception based on the front camera view of the autonomous agent. Specifically, these works take the front camera image and the language command as input, and the navigation task is reduced to a sequence modeling task where at each time stamp, the optimal action is predicted to complete the navigation task successfully. Subsequent works have tackled the VLN problem using sequence-to-sequence learning [8], reinforcement learning [9] or behavior cloning methods [10]. However, these methods are non-trivial to interpret, and recent works [8] have found that such methods are unable to utilize the visual modality effectively for the navigation task. Consequently, recent works [1, 2] on VLN have focused on creating a semantic map of the environment for motion planning and utilizing visual grounding capabilities of large-scale vision-language models [11] to ground the semantic concepts in a visual world. In this work, we focus on creating a semantic mapping representation of the environment using large-scale language models. Unlike prior works, we create these maps in an embedding-free manner, thus reducing the computational cost significantly. Fig. 2: Our top-view map representation allows indoor embodied agents to perform complex instance-specific goal navigation in object-rich environments. The language queries can refer to individual instances based on spatial and viewpoint configuration with respect to other objects of the same type while preserving the navigation performance on standard language queries. ## III Method ### _Problem Statement_ In this work, we aim to create a semantic map of the surrounding environment containing instance-level information for the various objects. Maps containing both instance-level and semantic information are necessary to handle linguistic commands which are frequently used in the daily vernacular. For example, consider the command, "Go to the empty chair near the third table". We are required to identify "which instance of the table" is being talked about and then point out the instance of the empty chair. Our approach is equipped to handle such scenarios through an instance-specific mapping representation of the environment. We build SI Maps using only RGB-D sensors, pose information, and an off-the-shelf panoptic segmentation model. SI Maps creation involves two steps: (1) Occupancy map creation with semantic labels and (2) Community detection to separate instances of a given semantic label. The whole pipeline is illustrated in Figure 3. ### _SI Map Creation_ **Building Occupancy Grid:** We define SI Maps as \(\mathcal{M}\in\mathbb{R}^{H\times W\times 2}\), where \(\bar{H}\) and \(\bar{W}\) represent the size of the top-down grid map. Similar to VLMaps, with the scale parameter \(s\) (\(=0.05m\) in our experiments), a SI Map \(\mathcal{M}\) represents an area with size \(s\bar{H}\times s\bar{W}\) square meters. \(\mathcal{M}_{i,j}=<o,t>\) means that grid cell \((i,j)\) is occupied by the \(t^{th}\) instance of object \(o\) in the environment. Since we are using the Mask2Former panoptic segmentation model trained on the COCO dataset [12], \(o\in\mathcal{O}\) (where \(\mathcal{O}\) is the set of objects present in the COCO dataset). 
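As a minimal illustration of this map layout, a cell-wise container could look as follows; the class name, the world-to-grid convention, and the helper methods are assumptions for illustration, not the paper's code.

```python
import numpy as np

class SIMap:
    """Top-down grid where cell (i, j) stores an <object, instance> pair."""
    def __init__(self, H=1000, W=1000, scale=0.05):
        self.scale = scale                              # s = 0.05 m per cell
        self.obj = np.full((H, W), -1, dtype=np.int32)  # COCO object label o (-1 = empty)
        self.inst = np.zeros((H, W), dtype=np.int32)    # instance index t of object o

    def world_to_grid(self, x, y):
        # assumed origin and rounding convention; the paper does not specify these
        return int(round(x / self.scale)), int(round(y / self.scale))

    def label(self, x, y, obj_id, inst_id):
        i, j = self.world_to_grid(x, y)
        self.obj[i, j], self.inst[i, j] = obj_id, inst_id
```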
To build our map, similar to VLMaps, we for each RGB-D frame, back-project all the depth pixels \(\textbf{u}=(u,v)\) to form a local depth point cloud that we transform to the world frame using the pose information. For depth pixel \(\textbf{u}=(u,v)\) belonging to the \(\bar{t}^{th}\) RGB-D frame, let \((p_{x}^{i,u,v},p_{y}^{i,u,v})\) represent the coordinates of the projected point in the grid map \(\mathcal{M}\). **Integrating Instance-level information:** With the occupancy map defined, we now utilize community detection algorithms to separate out the different instances in the environment. Specifically, we use the modularity-based Louvain method, a greedy, hierarchical optimization method that iteratively refines communities to maximize the modularity value. The modularity value is a measure of the density of links within communities compared to links between communities. Let the output of the panoptic segmentation model for \(\textbf{u}=(u,v)\) be \(\langle o_{i,u,v},t_{i,u,v}\rangle\). This means object \(o_{i,u,v}\)'s \(t_{i,u,v}^{th}\) instance within the frame is present at pixel **u**. We use this information to set the object label \(o\) for \(\mathcal{M}_{(p_{x}^{i,u,v},p_{y}^{i,u,v})}\) as \(o_{i,u,v}\). When there exist multiple 3D depth pixels projecting to the same grid location in the map, we retain the label of the pixel with the highest vertical height. To divide the different grid cells labeled having object \(o\) into different instances, we construct an undirected weighted graph \(G=(V,E,W)\), where each grid cell \((i,j)\) for whom the object label of \(\mathcal{M}_{i,j}\) is equal to \(o\) is included as a node in the set of vertices \(V\). Whenever two neighbouring pixels \(\textbf{u}_{\textbf{1}}=(u_{1},v_{1})\) and \(\textbf{u}_{\textbf{2}}=(u_{2},v_{2})\) belong to the same entity in the \(\bar{u}^{th}\) RGB-D frame, their corresponding grid cells \((p_{x}^{i,u_{1},v_{1}},p_{y}^{i,u_{1},v_{1}})\) and \((p_{x}^{i,u_{2},v_{2}},p_{y}^{i,u_{2},v_{2}})\) should also belong to the same instance in real-world. Hence, whenever pixels \(\textbf{u}_{\textbf{1}}\) and \(\textbf{u}_{\textbf{2}}\) have the semantic label \(o\) and \(<\)\(o_{i,u_{1},v_{1}},t_{i,u_{1},v_{1}})\(=\)\(\langle o_{i,u_{2},v_{2}},t_{i,u_{2},v_{2}}\rangle\), i.e., depth pixels \(\textbf{u}_{\textbf{1}}\) and \(\textbf{u}_{\textbf{2}}\) belong to the same entity within the image, we increase the edge weight between grid cells \((p_{x}^{i,u_{1},v_{1}},p_{y}^{i,u_{1},v_{1}})\) and \((p_{x}^{i,u_{2},v_{2}},p_{y}^{i,u_{2},v_{2}})\) by one. This helps us in transferring the instance segmentation information present in the panoptic segmentation outputs of the RGB-D frames to our map and also helps us to track the same instance across frames using the pose data. To prevent the frequency of visiting a particular area in the environment during mapping from unfairly affecting any edge weight, we normalize all the edge weights by the number of times their constituent nodes (grid cells) were observed across all RGB-D images for that scene. Ideally, in our graph, all grid cells belonging to the same connected component should belong to the same real-world entity. But Mask2Former masks are not perfect at a pixel level; hence it is possible for spurious edges to be drawn between nodes belonging to different real-world Fig. 3: In STEP 1, we create a semantic level map of the environment by back projecting the Mask2Former semantic labels of the RGB pixels across different images onto the grid map. 
In STEP 2, we extract the subgraph concerned with object \(o\) and run a community detection algorithm to break the grid cells containing object \(o\) into instances. entities. However, such edges are likely to be few in number. To disregard such spurious edges, we group the nodes in \(V\) using community detection algorithms instead of naively breaking them into connected components. We initialize the graph with a separate community for each node. We use the Louvain community detection method, which involves two phases: (1) Modularity optimization and (2) Community aggregation. During modularity optimization, for each node in the graph, we compute the change in modularity by moving it to neighboring communities. The node is transferred to the community, which results in the highest increase in modularity. This procedure is repeated for all nodes until no further improvement in modularity is possible. In the community aggregation phase, the communities formed in the modularity optimization phase are considered single nodes. The weights of the edges between the new nodes are determined by the sum of the weights of the edges between the nodes in the original communities. The two phases are iteratively repeated until the modularity value converges. After convergence, we get a labeled graph, where the nodes are grouped based on their community membership, i.e., occupancy grid cells belonging to the same instance are grouped together for all the objects in the environment. To correct the over-segmentation of communities, a post-processing step is applied to merge communities \(C_{1}\) and \(C_{2}\) if more than \(K\%\) of the members of \(C_{1}\) are neighbors of some member of \(C_{2}\). In contrast to VLMaps, our approach doesn't utilize the high dimensional LSeg [13] feature embeddings for semantic map creation, which provides a memory-efficient mechanism to construct the instance-level semantic occupancy grid. For comparison, VLMaps representation requires an average storage of about 2 gigabytes for a \(1000\times 1000\) map, whereas SI Maps needs only about 16 megabytes for the same map size. Additionally, the proposed approach is highly flexible and adaptable, as it can easily incorporate other types of sensor data like LiDAR, IMU and plug different segmentation models. The provision of tunable hyper-parameter \(K\) further provides controllability in our approach, which is a desired capability for real-world deployment. In the next section, we show how SI Maps can be directly used for language-conditioned navigation. ### _Language-based Navigation_ The significance of Semantic Instance maps becomes apparent when dealing with commands that necessitate instance-level grounding. For a given language command, we would like to identify the region in SI Maps where the robot must navigate to execute the command successfully. Additionally, since different commands can refer to different navigational maneuvers, we must also determine the maneuvers required for a specific language query. To achieve this, we define function primitives for each possible maneuver, reducing the task to classifying the appropriate function primitive for each sub-command. For this classification, we utilize the powerful large language model (LLM), ChatGPT[14], for motion planning. LLMs, trained on billions of lines of text and code, demonstrate advanced natural language understanding, reasoning, and coding capabilities. 
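For concreteness, the instance-separation step (STEP 2) described above could be implemented roughly as follows, using the Louvain routine shipped with networkx (version 2.8 or later); the data-preparation interface and the normalisation by the smaller visit count are our own simplifying assumptions, not the paper's implementation.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def split_into_instances(cells, co_instance_pairs, visit_counts):
    """cells: grid cells labelled with object o.
    co_instance_pairs: iterable of ((cell_a, cell_b), count) where neighbouring
    pixels shared an in-frame instance mask `count` times across all frames.
    visit_counts: dict mapping a cell to how often it was observed."""
    G = nx.Graph()
    G.add_nodes_from(cells)
    for (a, b), count in co_instance_pairs:
        # normalisation by observation frequency; using the smaller visit count
        # is one possible choice, as the paper does not give the exact formula
        w = count / max(1, min(visit_counts[a], visit_counts[b]))
        if G.has_edge(a, b):
            G[a][b]["weight"] += w
        else:
            G.add_edge(a, b, weight=w)
    # Louvain modularity optimisation; a post-processing pass (threshold K) would
    # then merge over-segmented communities whose members are largely adjacent.
    return [set(c) for c in louvain_communities(G, weight="weight", seed=0)]
```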
Similar to the approach with VLMaps, we repurpose LLMs to generate executable Python code for the robot. Specifically, we supply ChatGPT with the list of function primitives and their respective descriptions. We then prompt ChatGPT with several language queries accompanied by the corresponding ground truth Python code containing a sequence of function primitives based on the language command. During inference, for each language command, we provide ChatGPT with the list of objects present in the SI Maps and generate Python code that refers to the specific instances involved in the language command. In Figure 4, we show a few examples of the Python executable code generated by ChatGPT for the given commands. ChatGPT successfully generates the correct executable code after prompting it with a few examples of language queries and corresponding ground truth Python executable code. To ground instances, our function primitives calls also include an instance parameter to handle instance-specific queries. The instance parameter is directly inferred from the language command by ChatGPT along with the object of interest. Overall, we define 23 function primitives for complex navigational maneuvers like moving between two objects, navigating to \(n^{th}\) closest object, etc., and the essential turning and moving primitives. ## IV Experiments ### _Experimental Setup_ We showcase the effectiveness of our approach on multiple scenes from Matterport3D [15] dataset in the Habitat [16] simulator. Matterport3D is a commonly used dataset for evaluating the navigational capabilities of existing VLN agents in an indoor environment. The robot must maneuver in a continuous environment, performing navigational maneuvers Fig. 4: An example of the executable Python code generated by ChatGPT for the given language commands. The generated code includes an instance parameter in the function primitive call for navigating to the specified instance in the environment. specified by the natural language command. For top-view map creation, we collect 5,267 RGB-D frames from 5 different scenes and store the camera pose for each frame. **Baseline:** We evaluate against a logical baseline where the semantic top-view maps from the VLMaps-based approach are separated into separate instances. If the objects in the environment are well separated, the semantic segmentation output should already contain the information required to separate different instances of similar objects by simply applying connected components. As a result, our baseline involves applying connected components over the VLMaps output. However, in realistic scenarios, different instances of the same object can be close to each other; for example: in a restaurant, chairs belonging to the same table are close to each other. In such a scenario, just computing connected components will not work, as multiple instances will get clubbed into a single instance. **Evaluation Metrics:** Like prior approaches [1, 8, 17] in VLN literature, we use the gold standard _Success Rate_ metric, also known as _Task Completion_ metric to measure the success ratio for the navigation task. We compute the _Success Rate_ metric through human and automatic evaluations. For automatic evaluation, we use the ground truth environment map and compute the _Success Rate_ using a pre-defined heuristic where the navigation sub-goal is considered successful if we stop within a threshold distance of the ground truth object. 
For human evaluation, we verify if the agent ends up in a position desired according to the query. ### _Evaluation Results_ In this section, we perform quantitative and qualitative comparisons of SI Maps against VLMaps and VLMaps with connected components. We compare the performance of each scene representation for the downstream language-based navigation task using the _Success Rate_ in table I. We use the same function primitives for all the methods. Human evaluation was done because of the observation made during a few queries where the agent ended up close to the target object, but it did not complete the task in the desired way. We observe that SI Maps exhibit a remarkable improvement in performance compared to other approaches. SI Maps achieve an impressive two-fold increase in success rate metric compared to 24% obtained by VLMaps on human evaluation, demonstrating a substantial leap in the instance-specific goal navigation. Since VLMaps only contain semantic information, they fail on queries that refer to specific instances of an object, like "navigate to the second counter". Our logical baseline, VLMaps with connected components, can handle some instance-specific queries, resulting in an incremental performance gain of 10% for human evaluation than vanilla VLMaps. However, the success of this method is observed in scenes where neighboring instances of the same object have ample room between them. In contrast, real-life environments such as offices, restaurants, and hospitals often have objects in close proximity to each other. In these cases, instance-level information is essential for distinguishing between neighboring objects. SI Maps demonstrate robustness to object placement in the environment by directly utilizing the instance-level information provided by the instance segmentation model during the occupancy grid creation. ### _Qualitative Results_ In this section, we showcase qualitative examples of our approach for the vision language navigation task. The results are illustrated in Figure 5 with the corresponding A-star trajectory using SI Maps for navigation. SI Maps allow navigating to specific instances in the scene based on their relative distance with respect to other objects (left, center) \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Success Rate} \\ \cline{2-3} & Human Evaluation & Automatic Evaluation \\ \hline VL Maps & 0.24 & 0.46 \\ VL Maps with CC & 0.34 & 0.48 \\ SI Maps (K=5) & **0.80** & **0.88** \\ SI Maps (K=9) & 0.76 & **0.88** \\ \hline \hline \end{tabular} \end{table} TABLE I: SI Maps outperform other baseline methods by significantly large margins on the _Success Rate_ metric. The best results are highlighted in **bold**. Fig. 5: The above figure shows the agent in different scenes in a simulated environment with three different queries. Images on the top show the RGB top-down view map, along with the segmented goal object instance. The corresponding images on the bottom represent the path taken by the agent to reach the desired object from the initial location. and direction-based specification in the global map (right). The downstream navigation, as a consequence of SI Maps, is agnostic to the starting pose and orientation of the agent in the environment. We also show qualitative comparisons of different methods on the quality of instance-level top-view maps in Figure 6 for different seating objects (chair, couch, sofa) in the simulated environment. 
Our approach effectively captures the instance-level semantics of objects in the environment, recovering 32 instances out of 29 present in the map (with 3 extra noisy segments). In contrast, the baseline of VLMaps with connected components detects 26 instances, but most of them are noisy segments, and it merges several separate instances (for the same object in close proximity) into a single instance. Our results are particularly impressive in the middle region of the map, which corresponds to the dining area in the environment. Here, the chairs are in close proximity to each other, and the vanilla VLMaps approach fails when a particular instance of chair is queried. Similarly, applying connected components-based heuristics to separate instances is not enough, as the semantic segmentation masks of the chairs end up being connected with each other, resulting in multiple instances being merged. The VLMaps-based approaches rely on alignment between per-pixel visual embeddings and linguistic feature embeddings, which can be sensitive to noise due to the unconstrained nature of the association. The benefit of our feature-embedding-free approach becomes evident as we directly constrain the occupancy grid creation with the instance segmentation masks. As a result, SI Maps have considerably less noise than derivative VLMaps approaches. Community detection further helps reduce noise by filtering out spurious communities formed due to noise, leading to a much cleaner map, which can also be observed in Figures 1, 6. ## V Conclusion In this study, we introduce a novel instance-focused scene representation for indoor settings, enabling seamless language-based navigation across various environments. Our representation accommodates language commands that refer to specific instances within the environment. Furthermore, our map creation method is more memory-efficient, resulting in an impressive 128-fold decrease in storage, as it does not rely on high-dimensional feature embeddings for visual and linguistic modalities. Additionally, our approach demonstrates robustness in relation to object placement in the environment and is less vulnerable to noise than previous methods. We showcase the practicality of the proposed SI Maps using success rate and panoptic quality metrics. Future research could investigate 3D instance segmentation techniques to incorporate instance-level semantics into the occupancy grid creation process directly. ## Acknowledgement We acknowledge iHub-Data IIIT Hyderabad for their support to this work.
2303.17195
Improving medium-range ensemble weather forecasts with hierarchical ensemble transformers
Statistical post-processing of global ensemble weather forecasts is revisited by leveraging recent developments in machine learning. Verification of past forecasts is exploited to learn systematic deficiencies of numerical weather predictions in order to boost post-processed forecast performance. Here, we introduce PoET, a post-processing approach based on hierarchical transformers. PoET has 2 major characteristics: 1) the post-processing is applied directly to the ensemble members rather than to a predictive distribution or a functional of it, and 2) the method is ensemble-size agnostic in the sense that the number of ensemble members in training and inference mode can differ. The PoET output is a set of calibrated members that has the same size as the original ensemble but with improved reliability. Performance assessments show that PoET can bring up to 20% improvement in skill globally for 2m temperature and 2% for precipitation forecasts and outperforms the simpler statistical member-by-member method, used here as a competitive benchmark. PoET is also applied to the ENS10 benchmark dataset for ensemble post-processing and provides better results when compared to other deep learning solutions that are evaluated for most parameters. Furthermore, because each ensemble member is calibrated separately, downstream applications should directly benefit from the improvement made on the ensemble forecast with post-processing.
Zied Ben-Bouallegue, Jonathan A Weyn, Mariana C A Clare, Jesper Dramsch, Peter Dueben, Matthew Chantry
2023-03-30T07:24:08Z
http://arxiv.org/abs/2303.17195v3
# Improving medium-range ensemble weather forecasts with hierarchical ensemble transformers ###### Abstract Statistical post-processing of global ensemble weather forecasts is revisited by leveraging recent developments in machine learning. Verification of past forecasts is exploited to learn systematic deficiencies of numerical weather predictions in order to boost post-processed forecast performance. Here, we introduce PoET, a post-processing approach based on hierarchical transformers. PoET has 2 major characteristics: 1) the post-processing is applied directly to the ensemble members rather than to a predictive distribution or a functional of it, and 2) the method is ensemble-size agnostic in the sense that the number of ensemble members in training and inference mode can differ. The PoET output is a set of calibrated members that has the same size as the original ensemble but with improved reliability. Performance assessments show that PoET can bring up to 20% improvement in skill globally for 2m temperature and 2% for precipitation forecasts and outperforms the simpler statistical member-by-member method, used here as a competitive benchmark. PoET is also applied to the ENS10 benchmark dataset for ensemble post-processing and provides better results when compared to other deep learning solutions that are evaluated for most parameters. Furthermore, because each ensemble member is calibrated separately, downstream applications should directly benefit from the improvement made on the ensemble forecast with post-processing. Numerical Weather Prediction Deep Learning Transformers Ensemble Forecast Neural Network Medium-range Weather Prediction ## 1 Introduction The chaotic nature of the atmosphere makes forecasting the weather a challenging and scientifically exciting task. With large and high-quality publicly available datasets (Hersbach et al., 2020), weather forecasting is becoming a new playing field for deep-learning practitioners (Pathak et al., 2022; Bi et al., 2022; Lam et al., 2022). More traditionally, at national meteorological centres, weather forecasts are generated by numerical weather prediction (NWP) models that resolve numerically physics-based equations. A Monte-Carlo approach is followed to account for uncertainties: an ensemble of deterministic forecasts is run with variations in the initial conditions, the model parametrizations, and/or the numerical discretization. This ensemble approach, initially developed to explore the limits of deterministic forecasting, has now become the backbone of operational weather forecasting (Lewis, 2005). Practically, ensemble weather forecasts are a set of physically consistent weather scenarios that ideally capture the full range of possible outcomes given the information available at the start of the forecast (Leutbecher and Palmer, 2008) and decision-making can be optimized using probabilistic forecasts derived from such an ensemble (Richardson, 2000). One can assess not only the uncertainty of a weather variable at a given point in space and time, but also any joint probability distributions across the variables. This versatility is essential for ensemble prediction systems to support downstream applications with high societal relevance, such as flood forecasting or human heat stress forecasting (Magnusson et al., 2023). However, as an output of a NWP model, an ensemble forecast is sub-optimal in a statistical sense. 
On top of the limited ensemble size effect (Leutbecher, 2019), systematic deficiencies like model biases (defined as the averaged differences between forecasts and observations) and over- or under-dispersiveness (too much or too little ensemble spread as measured by the standard deviation among the ensemble members) are common features of any NWP ensemble forecasts (Haiden et al., 2021). Statistical post-processing is proposed as a simple remedy where past data is exploited to learn forecast errors and correct the current forecast accordingly. A variety of post-processing approaches have been used over the years, from simple bias correction to machine learning based methods (Vannitsem et al., 2021). Classically, post-processing of ensemble forecasts is achieved either by assuming the form of the predictive probability distribution and optimizing its parameters (Gneiting et al., 2005; Raftery et al., 2005) or by correcting a limited set of quantiles of the predictive distribution (Taillardat et al., 2016; Ben Bouallegue, 2017). Recently, multiple different modern machine learning methods have been applied to ensemble post-processing. For example, Rasp and Lerch (2018) trained a neural network to predict mean and standard deviation of a normal distribution for 2m temperature forecasting at stations in Germany while Bremnes (2020) combined a neural network and Bernstein polynomials for generating quantile forecasts of wind speed at stations in Norway. In the case of downstream applications based on such a post-processed forecast, an additional post-processing step is required to "reconstruct" forecast dependencies between variables or in time and space (Ben Bouallegue et al., 2016; Baran et al., 2020). However, more recent developments in deep learning, particularly transformers, promise to resolve this issue by using mechanisms such as attention (Vaswani et al., 2017) to maintain inter-variable and inter-spatial dependencies. In this work, we target the direct generation of a calibrated ensemble for potential use in downstream applications. The focus is on 2m temperature and precipitation, which are variables of interest for many stakeholders. We propose a new approach for ensemble post-processing: _PoET_ (Post-processing of Ensembles with Transformers). Our machine learning framework, PoET, combines the self-attention ensemble transformer used for post-processing of individual ensemble members in Finn (2021) with the U-Net architecture used for bias correction in Gronquist et al. (2021), leveraging the advantages of both for the first time in a post-processing application. We compare this approach with the statistical member-by-member (MBM) method proposed by Van Schaeybroeck and Vannitsem (2015) that is simpler when compared to PoET but effective. In its simplest form, this method consists of a bias correction and a spread scaling with respect to the ensemble mean. MBM has been successfully tested on time series of 2m temperature ensemble forecasts and is now run operationally at the Royal Meteorological Institute of Belgium (Demaeyer et al., 2021). Machine learning approaches for ensemble post-processing rely on the availability of suitable datasets (Dueben et al., 2022). Here, we use re-forecasts and reanalysis for training. In this context, re-forecasts and reanalysis are praised for their consistency because they are generated from a single NWP model for long periods of validity time. 
In particular, the benefit of re-forecasts for post-processing has been demonstrated in pioneering works on 2m temperature and precipitation forecasts at station locations by Hagedorn et al. (2007) and Hamill et al. (2008), respectively. Re-forecasts are also becoming the cornerstones of benchmark datasets for the post-processing of weather forecasts (Ashkboos et al., 2022; Demaeyer et al., 2023; Gronquist et al., 2021). In this work, we continue this trend, focusing on ensemble post-processing of global gridded forecasts. The remainder of this paper is organized as follows: Section 2 introduces the dataset and methods investigated in this study; Section 3 provides details about the implementation of MBM and PoET for the post-processing of 2m temperature and precipitation ensemble forecasts, as well as a description of the verification process; Section 4 provides illustrative examples of post-processing in action; Section 5 presents and discusses verification results; and Section 6 concludes this paper.
## 2 Data and Methods
### Data
At the European Centre for Medium-Range Weather Forecasts (ECMWF), the re-forecast dataset consists of 11 ensemble members (10 perturbed + 1 control) generated twice a week over the past 20 years (Vitart et al., 2019). In our experiments, the dataset comes from the operational Integrated Forecasting System (IFS) re-forecasts produced in 2020, that is, with IFS cycles 46r1 and 47r1, which switched in June 2020. Fields are on a 1-degree horizontal grid and the focus is on lead times every 6h up to 96h. Re-forecasts from 2000 to 2016 are used for training, while those from 2017 and 2018 are used for validation. The post-processing models are trained towards ERA5, the ECMWF reanalysis dataset (Hersbach et al., 2020). The target is the 2m temperature reanalysis, while the short-range forecast at T+6h (aligned with the forecast validity time) is used as the target for precipitation to account for the spin-up after data assimilation (for a comprehensive assessment of ERA5 daily precipitation, please refer to Lavers et al., 2022). For testing, we use the operational ensemble data from 2021, using two forecasts each week for 104 start dates in total, according to the ECMWF sub-seasonal-to-seasonal (S2S) model iterations. The operational ensemble has 51 members (50 perturbed members + 1 control member), but we apply post-processing methods that are agnostic to ensemble size: they may be run in inference mode with a different ensemble size than used in training. The data from 2021 include model cycles Cy47r1, Cy47r2 and Cy47r3, with switches in May and October of 2021, respectively. Notably, the model upgrade in Cy47r2 included an increase to 137 vertical levels in the ensemble, an improvement that is not included in the training dataset. We are therefore directly testing our methodology for generalization across model cycles, an important property to reduce the maintenance required when operationalizing machine learning systems.
### Statistical benchmark method for comparison
Neural networks are not the only methods that can be used to calibrate ensembles. There exist simpler statistical methods, which require less computational power and are generally more 'explainable'. In this work, we use the member-by-member (MBM) approach detailed in Van Schaeybroeck and Vannitsem (2015) as a benchmark.
With MBM, a correction is applied to each ensemble member individually, with a component common to all members and a component that adjusts the deviation of a member with respect to the ensemble mean. Let us denote by \(\hat{x}_{i}\) the corrected forecast for the \(i^{\text{th}}\) member of the ensemble. Formally, MBM consists of applying:
\[\hat{x}_{i}=\alpha+\beta\overline{x}+\gamma(x_{i}-\overline{x}), \tag{1}\]
where \(x_{i}\) is the ensemble member \(i\) and \(\overline{x}\) the ensemble mean. The parameter \(\alpha\) is the bias parameter that nudges the ensemble mean, \(\beta\) is the linear coefficient that scales the ensemble mean, and \(\gamma\) is the scaling parameter that adjusts the spread of the ensemble. Each parameter can be inspected separately to understand its respective contribution to the modifications of the forecasts. In our application, the parameter optimization follows the so-called _WER+CR_ approach as defined in Van Schaeybroeck and Vannitsem (2015). _WER+CR_ means that the estimated parameters are constrained to preserve two different reliability conditions, the _WER_ and the _CR_ conditions. For bias-free forecasts, climatological reliability (_CR_) is defined as the equality of the forecast variability with the observation variability, while weak ensemble reliability (_WER_) is defined as the agreement between the average ensemble variance and the mean squared forecast error. The analytical formulae used to compute the 3 MBM parameters are provided in Appendix 6.1. Note that other flavors of MBM exist (_e.g._ with score optimization), but they have been disregarded because of their prohibitive computational costs in our application.
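To make Eq. (1) concrete, the following is a minimal NumPy sketch of a member-by-member correction at a single grid point. The parameter estimation here is a deliberately simplified stand-in (a least-squares fit for \(\alpha,\beta\) and a variance-matching condition for \(\gamma\)); the actual WER+CR closed forms of Van Schaeybroeck and Vannitsem (2015) are given in Appendix 6.1 and are not reproduced. All data below are synthetic.

```python
import numpy as np

def fit_mbm_simple(train_fc, train_obs):
    """Estimate (alpha, beta, gamma) of Eq. (1) at one grid point.

    train_fc : (n_dates, n_members) past ensemble forecasts
    train_obs: (n_dates,) matching (re)analysis values

    Simplified stand-in for the WER+CR estimation: alpha/beta from a
    least-squares fit of the observations on the ensemble mean, gamma from
    matching the corrected ensemble variance to the mean squared error of
    the corrected ensemble mean.
    """
    ens_mean = train_fc.mean(axis=1)
    ens_var = train_fc.var(axis=1, ddof=1)
    beta, alpha = np.polyfit(ens_mean, train_obs, deg=1)
    mse = np.mean((train_obs - (alpha + beta * ens_mean)) ** 2)
    gamma = np.sqrt(mse / ens_var.mean())
    return alpha, beta, gamma

def apply_mbm(fc, alpha, beta, gamma):
    """Apply Eq. (1) member by member: x_hat_i = alpha + beta*mean + gamma*(x_i - mean)."""
    ens_mean = fc.mean(axis=-1, keepdims=True)
    return alpha + beta * ens_mean + gamma * (fc - ens_mean)

# Synthetic example: a biased, under-dispersive 11-member training ensemble,
# then calibration of a larger 51-member forecast with the same parameters.
rng = np.random.default_rng(0)
truth = rng.normal(size=1000)
shared_err = 0.8 * rng.normal(size=(1000, 1))     # forecast error common to all members
train_fc = truth[:, None] + 1.5 + shared_err + 0.3 * rng.normal(size=(1000, 11))
alpha, beta, gamma = fit_mbm_simple(train_fc, truth)
print(f"alpha={alpha:.2f}, beta={beta:.2f}, gamma={gamma:.2f}")   # gamma > 1 inflates the spread
new_fc = 1.5 + 0.8 * rng.normal() + 0.3 * rng.normal(size=(1, 51))
print(apply_mbm(new_fc, alpha, beta, gamma).round(2))
```

Note how the same three parameters calibrate an ensemble of any size, which is the property that later allows training on an 11-member re-forecast and inference on the 51-member operational ensemble.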
### Transformers
Transformers are a class of neural networks that were initially invented for natural language processing (Vaswani et al., 2017). The main advantage of transformers is their capability to process sequences of arbitrary length, drawing context from every part of the sequence. This is in opposition to recurrent neural networks, which use techniques such as long short-term memory and can only draw context from part of a sequence due to saturating gradients (Hochreiter and Schmidhuber, 1997). In a transformer, this sequence processing is done with a mechanism called attention. The basic transformer architecture is a dense feed-forward neural network without convolutions or recurrence. The complete architecture consists of a sequence of encoder blocks, which transform input data into a latent space, followed by a sequence of decoder blocks that produce a final prediction. This encoder-decoder structure has seen much success in deep learning. The novelty within transformers, however, comes from the fact that the encoder blocks use a self-attention layer, which enables the efficient calculation of context within a full input sequence. The self-attention layer is utilized to encode both short-term and long-term dependencies in the input data. Self-attention uses a key-query-value construction which shares some similarities with Kalman filters. The learnable parameters are weight matrices \(W^{K}_{l},W^{Q}_{l},W^{V}_{l}\) that encode the response to input tensors \(X_{l}\) at the \(l\)-th layer such that \(K_{l}=W^{K}_{l}X_{l}\), \(Q_{l}=W^{Q}_{l}X_{l}\), and \(V_{l}=W^{V}_{l}X_{l}\) are the key, query, and value, respectively. Note that the term 'self-attention' refers to the operation of each of these components on the same input latent state tensor \(X_{l}\). The resulting scaled dot-product attention for layer \(l\) is
\[\mathbf{Attention}_{l}(Q_{l},K_{l},V_{l})=\mathbf{softmax}\left(\frac{Q_{l}K_{l}^{T}}{\sqrt{d_{k}}}\right)\cdot V_{l} \tag{2}\]
where \(d_{k}\) is the dimensionality of the keys and queries. In the decoder, the attention blocks have modified values within the softmax calculation to avoid illegal 'future' values, since the sequence order is encoded implicitly rather than explicitly through, for example, a time dependence. Here the illegal values are set to \(-\infty\) within the softmax calculation. In practice, the weights are \(1\times 1\) convolution layers with a small number of filters, or heads, which produce multiple attention maps. This so-called multi-head attention enables the parallel computation of each attention layer within each encoder block. However, the dot products between all latent features do carry a significant computational burden. As the use of transformers has evolved beyond natural language processing to image and even video data, the \(O(n^{2})\) scaling of the self-attention became more prohibitive, leading to techniques such as windowed transformers (Liu et al., 2021). In our approach, as detailed in Section 2.5, we instead reduce the dimensionality of the tensors. Nevertheless, approaches to improve the efficiency of transformers remain fruitful since they have substantial advantages over previous approaches, most notably their capability to process arbitrary-length sequences and retain the context of all relevant parts from the entire sequence. Moreover, in contrast to models like recurrent neural networks, transformers can be trained fully in parallel.
### Ensemble Transformer
Finn (2021) introduced the application of self-attentive transformers to ensemble forecasts. This transformer is applied along the ensemble dimension of the forecasts. The application along the ensemble dimension makes the method scalable to different spatial input dimensions, from global to regional models, and, due to the nature of transformers, the model can accept an arbitrary number of ensemble members. This implementation borrows inspiration from ensemble data assimilation, splitting the problem into a static and a dynamic formulation, where the static part is a linear combination of the input data, resulting in the value matrix \(V\) that is used in the attention layer in Equation 2. Then the observations are used as the query matrix \(Q\), and the key matrix \(K\) can be interpreted as the adjoint in data assimilation. The dynamic part then adds information from the other members to each member individually. The transformer modules are modeled as residual connections that transform the original member \(Z_{l}\). This output is then projected with a final weight matrix \(W_{o}\) from the attention space into the data space, rendering the full transformer module as
\[Z_{l+1}=\sigma\left(Z_{l}+W_{o}T(Z_{l})\right), \tag{3}\]
where \(Z\) is the data, \(T\) is the full transformer layer, \(W_{o}\) is the linear projection of the residual output from the attention space to the data domain, and \(l\) is the index of the layer.
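The following PyTorch sketch illustrates one residual self-attention block applied along the ensemble dimension, following Eqs. (2) and (3). It is a minimal illustration, not the Finn (2021) or PoET implementation: the key/query/value maps are plain linear layers, a single head is used, the activation \(\sigma\) is taken to be a ReLU, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class EnsembleSelfAttention(nn.Module):
    """One residual self-attention block over the ensemble dimension (Eqs. 2-3).

    Input: hidden states Z of shape (batch, members, channels, H, W).
    Attention weights are computed between ensemble members at each grid point.
    """
    def __init__(self, channels: int, d_k: int = 16):
        super().__init__()
        self.w_k = nn.Linear(channels, d_k, bias=False)
        self.w_q = nn.Linear(channels, d_k, bias=False)
        self.w_v = nn.Linear(channels, channels, bias=False)
        self.w_o = nn.Linear(channels, channels, bias=False)
        self.d_k = d_k

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        b, m, c, h, w = z.shape
        x = z.permute(0, 3, 4, 1, 2).reshape(b * h * w, m, c)   # members act as the "sequence"
        k, q, v = self.w_k(x), self.w_q(x), self.w_v(x)
        att = torch.softmax(q @ k.transpose(1, 2) / self.d_k ** 0.5, dim=-1)   # Eq. (2)
        t = att @ v
        out = torch.relu(x + self.w_o(t))                        # Eq. (3) with sigma = ReLU here
        return out.reshape(b, h, w, m, c).permute(0, 3, 4, 1, 2)

# Example: 2 forecasts, 11 members, 8 channels on a 16x16 patch.
z = torch.randn(2, 11, 8, 16, 16)
print(EnsembleSelfAttention(channels=8)(z).shape)   # torch.Size([2, 11, 8, 16, 16])
```

Because the attention matrix is computed between members rather than between grid points, the block accepts any number of ensemble members at inference time, which is the property exploited throughout this paper.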
### Hierarchical Self-Attentive Ensemble Transformer
Figure 1: Schematic of the transformer U-net architecture of PoET. The inset on the lower right is adapted from Finn (2021), used with permission. In the PoET adaptation, the \(1\times 1\) convolution shown by the yellow arrow in the inset is replaced by a \(3\times 3\) convolution with a stride of 3.
PoET is an adaptation of the ensemble transformer of Finn (2021). The original model was trained on a much smaller dataset, with single input fields and a very coarse resolution of 5.625 degrees in latitude and longitude. To adapt the architecture to our higher-resolution dataset, we implement the transformer within a U-net architecture, shown in Figure 1. At each depth layer in the U-net, following the embedding 2D convolution layers, we add one or more transformer blocks. Within the transformer blocks, the convolution layers producing the key and query embeddings use a \(3\times 3\) convolution with a stride of 3, to further reduce the dimensionality of the ensemble similarity computation. Skip connections allow transformed hidden states to pass through directly to the decoder. Altogether, the PoET implementation reduces the memory footprint of matrix multiplication operations within the transformer and enables the transformers to operate across different spatial scales. The layer normalization of the original ensemble transformer still operates at the full resolution of the grid, but unfortunately does not allow the model to be run at a different resolution than that of the training data. Experiments omitting the layer normalization or replacing it with another common technique, batch normalization, showed much worse performance. This observation cements the layer norm as an integral part of the transformer's ability to correct forecast errors.
## 3 Experiments
### PoET configuration
In our experiments, data for lead times every 6 hours, from 6 hours up to a maximum of 96 hours, are used for training of the 2m temperature model, while for precipitation we start at 24h to avoid the spin-up. Because the lead time is not explicitly encoded in the model, it is possible to run inference for longer lead times (while not shown here, the PoET model remains skillful at longer lead times). For the post-processing of 2m temperature forecasts, we include as input features 2m temperature (\(T_{2}\)), temperature at 850 hPa (\(T_{850}\)), geopotential at 500 hPa (\(Z_{500}\)), the \(u\)- and \(v\)-components of wind at 700 hPa (\(U_{700}\) and \(V_{700}\)), and total cloud cover (\(TCC\)). We additionally prescribe orography, a land-sea mask, and the top-of-atmosphere incoming solar radiation (insolation) as predictors. Another model, using a reduced feature set consisting of only \(T_{2}\), \(T_{850}\), and \(Z_{500}\) plus the 3 prescribed variables, performed only slightly worse than the one trained on the full set of predictors. For the post-processing of precipitation forecasts, the input predictors are changed to total precipitation, convective precipitation, convective available potential energy, total cloud cover, total column water, sea-surface temperature, temperature at 850hPa, winds at 700hPa and geopotential at 500hPa. Apart from the selected predictors, the configuration of PoET is identical for the prediction of 2m temperature and precipitation, with two exceptions. Firstly, the normalization of total and convective precipitation is done with a shifted logarithmic transformation, \(\log(x+1)\), where \(x\) is the precipitation amount normalized into a dimensionless quantity (similar to Lopez, 2011). This transformation is applied to both the predictor and the predictand total precipitation.
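As a concrete illustration of this first exception, the sketch below applies the shifted logarithmic transform and its inverse. The normalization constant is an assumption for the example only; the exact scaling used here follows Lopez (2011) and is not reproduced.

```python
import numpy as np

# Assumed normalization constant turning precipitation (in metres) into a
# dimensionless quantity before the shifted log; illustrative value only.
PRECIP_SCALE = 1.0e-3   # assumption: 1 mm of 6-hourly precipitation maps to x = 1

def precip_to_model_space(tp_m: np.ndarray) -> np.ndarray:
    """Shifted logarithmic transform log(x + 1) of normalized precipitation."""
    x = tp_m / PRECIP_SCALE
    return np.log1p(x)

def precip_from_model_space(z: np.ndarray) -> np.ndarray:
    """Inverse transform back to physical precipitation, clipped at zero."""
    x = np.expm1(z)
    return np.clip(x, 0.0, None) * PRECIP_SCALE

tp = np.array([0.0, 0.2e-3, 5e-3, 40e-3])        # 0, 0.2, 5 and 40 mm expressed in metres
z = precip_to_model_space(tp)
assert np.allclose(precip_from_model_space(z), tp)
```

The transform compresses the heavy right tail of the precipitation distribution while leaving the point mass at zero in place, which keeps the network targets in a numerically well-behaved range.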
The second difference consists of using the kernel continuous ranked probability score (kCRPS) for precipitation instead of the Gaussian continuous ranked probability score (gCRPS) as a loss function, because the former makes no assumptions on the distribution of the ensemble (the definitions are available in Appendix 6.2.1). We tested using this formulation for 2m temperature as well, but observed little difference, the Gaussian approximation being appropriate there.
### MBM configuration
The MBM parameters are estimated for each grid point and lead time separately. They also vary as a function of the time of the year in order to capture the seasonality of the forecast error. For this purpose, the training dataset differs for each forecast: we define a window centered around the forecast validity date and estimate the parameters using all training data within this time-of-the-year window. The suitable window size is different for the post-processing of 2m temperature and of precipitation: it is set to \(\pm 30\) days for 2m temperature and to \(\pm 60\) days for precipitation, for all lead times. As for PoET, a shifted logarithmic transformation of the precipitation data is applied with MBM. Additionally, in inference mode, spurious precipitation is removed from MBM post-processed precipitation fields, and any correction leading to a change in precipitation value greater than 50mm is rejected.
### Verification process
We compare PoET, MBM, and raw forecasts in terms of their ability to predict 2m temperature and precipitation up to 4 days in advance. Various aspects of the forecast performance are considered, as described below. The results are presented in Section 5, while the formal definitions of the verification metrics can be found in Appendix 6.2.3. Bias and spread/skill relationships are used to assess the statistical consistency between forecast and verification. The bias is defined as the average difference between forecast and verification, and a reliable forecast has a bias close to zero. The ensemble spread is defined as the standard deviation of the ensemble members with respect to the ensemble mean, while the ensemble mean error is defined as the root mean squared error of the ensemble mean. For a reliable ensemble forecast, the averaged ensemble spread should be close to the averaged ensemble mean error (Leutbecher and Palmer, 2008; Fortin et al., 2014). The continuous ranked probability score (CRPS) is computed to assess the ensemble as a probabilistic forecast. Forecast performance in a multi-dimensional space is assessed using the energy score (ES), a generalization of the CRPS to the multivariate case (Gneiting and Raftery, 2007). The ES is applied over the time dimension, computed over 2 consecutive time steps. Additionally, for precipitation, probability forecast performance for pre-defined events is assessed with the Brier score (BS; Brier, 1950). We consider 2 precipitation events: 6-hourly precipitation exceeding 1mm and exceeding 10mm. The relative skill of a forecast with respect to a reference forecast is estimated with the help of skill scores. In the following, we compute the continuous ranked probability skill score (CRPSS), the energy skill score (ESS), and the Brier skill score (BSS) using the raw ensemble forecast as a reference. A comparison of PoET and MBM post-processed forecasts is also performed using the latter as a reference.
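For reference, the sketch below computes an ensemble (kernel) estimate of the CRPS and the corresponding skill score on synthetic data. It uses one common kernel estimator; the exact estimators used in this paper are those defined in Appendix 6.2 and may differ in details such as fair-score adjustments.

```python
import numpy as np

def kernel_crps(ens: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Kernel CRPS estimator per case: E|X - y| - 0.5 E|X - X'|.

    ens: (n_cases, n_members), obs: (n_cases,). Returns per-case scores.
    """
    term1 = np.abs(ens - obs[:, None]).mean(axis=1)
    term2 = np.abs(ens[:, :, None] - ens[:, None, :]).mean(axis=(1, 2))
    return term1 - 0.5 * term2

def skill_score(score_fc: float, score_ref: float) -> float:
    """Generic skill score, e.g. CRPSS = 1 - CRPS_fc / CRPS_ref (1 is perfect, 0 is no gain)."""
    return 1.0 - score_fc / score_ref

# Synthetic example: a biased raw ensemble and a debiased "post-processed" ensemble.
rng = np.random.default_rng(1)
obs = rng.normal(size=500)
raw = obs[:, None] + 0.8 + 0.5 * rng.normal(size=(500, 51))
post = raw - 0.8
crpss = skill_score(kernel_crps(post, obs).mean(), kernel_crps(raw, obs).mean())
print(f"CRPSS of post-processed vs raw: {crpss:.2f}")
```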
Scores and reliability metrics are computed at each grid point for all validity times and aggregated in time and/or space for plotting purposes. When aggregating scores in space (over the globe), we apply a weighting proportional to the cosine of the grid-point latitude.
## 4 Illustrative examples
### PoET in action
Fig. 2 shows the differences between each of the first 3 members of the raw ensemble and the corresponding PoET post-processed forecasts at a lead time of 2 days. The ensemble mean change (top left panel) mostly shows a warming across the globe, except in Asia. The top row and left-most column show the PoET forecast and raw forecast, respectively. The remaining entries show the difference between these respective ensemble members once the ensemble mean change is removed. Along the diagonal, we see the change induced by PoET on each member. The off-diagonal entries show the difference between differing raw and PoET ensemble members. The larger amplitude in these off-diagonal plots, compared to the diagonal, indicates consistency between the input and output ensemble members, _i.e._ the ensemble has not been reordered or dramatically shifted by post-processing. A comparison of PoET-corrected forecasts with ERA5 fields in 2 extreme cases is provided below.
### A 2m temperature case study
At the end of June 2021, a heatwave hit the North-western United States and Canada, leading to new temperature records and devastating wildfires. The top panels in Fig. 3 compare the 3-day averaged maximum (00UTC) temperature predictions of MBM and PoET with the corresponding ERA5 reanalysis field. In this example, we average 2m temperature forecasts over lead times 24, 48, and 72h. One randomly selected ensemble member is shown to illustrate post-processing in action on a single forecast. The bottom panels in Fig. 3 show the difference between ERA5 and the raw forecast, as well as the corrections applied to the forecast with post-processing. We check whether post-processing compensates for errors in the raw forecast, that is, whether Figs 3(e) and 3(f) match Fig. 3(d). Overall, there is a good correspondence between the raw forecast error and the post-processing corrections for both MBM and PoET. For example, we note the correction of the cold bias over the continent. However, there is some spottiness visible in the PoET correction (Fig. 3f). Moreover, in both Fig. 3(e) and (f), as expected, fine details in the error pattern are not accurately captured, due to factors such as the limited predictability of this extreme event.
### A total precipitation case study
In March 2021, Australia was affected by extreme rainfall. Sustained heavy rain led to flooding in the Eastern part of the country, and large precipitation amounts were observed on the Northern coast too. The top panels in Fig. 4 compare 3-day precipitation scenarios from MBM and PoET with the corresponding ERA5 precipitation field. The 3-day accumulated precipitation scenarios are derived from the post-processed ensemble of 6h accumulated precipitation forecasts, which are consistent scenarios in space and time. Here again, we randomly select one member for illustration purposes. The bottom panels in Fig. 4 show the difference between ERA5 and the raw precipitation forecast, along with the corresponding post-processing corrections. As in the 2m temperature example, we check whether post-processing compensates for raw forecast errors, _i.e._ whether Figs 4(e) and 4(f) match Fig. 4(d).
The MBM correction only has some areas of consistency with the actual error, while the PoET correction tends to partially compensate for the raw forecast error along the North and West coast, both over land and over the sea.
## 5 Verification results
### 2m temperature results
In order to compare and assess the performance of MBM and PoET, we apply the verification metrics defined in Section 3.3 (_i.e._ the CRPSS, spread and error, bias, and ESS) to the post-processed forecasts. The results are first aggregated over the globe as a function of the forecast lead time. In Fig. 5, we see that both methods considerably improve the raw ensemble skill, with similar results in terms of CRPSS and ESS. PoET generates more skillful forecasts than MBM, but both methods are able to improve the raw forecast by \(\sim 20\%\) throughout the assessed lead times. Both methods also have a similar ability to reduce the bias. The MBM approach seems better at maintaining spread-error parity, with PoET struggling at early lead times. Spread/skill diagrams, showing the error as a function of spread categories, reveal that aggregated scores must be interpreted carefully (see Appendix 6.3.1). Indeed, the uncertainty of PoET-corrected forecasts appears to reflect the potential forecast error more accurately than that of the MBM-corrected ones. Because compensating effects can be at play when averaging over all cases, we also compute the bias and spread-error ratio at each grid point separately before averaging absolute values over the verification period (see Appendix 6.3.2). This approach reveals that the PoET calibration underperformance compared with MBM is moderate and limited to the first lead times of the forecast. This result suggests a geographical disparity of the post-processing impact, which is now further explored with maps of scores.
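The spread and error diagnostics discussed here, together with the cosine-latitude weighting used for global aggregation (Section 3.3), can be sketched as follows on synthetic data; the grid layout is illustrative.

```python
import numpy as np

def area_weighted_mean(field: np.ndarray, lat_deg: np.ndarray) -> float:
    """Global mean of a (lat, lon) field with cos(latitude) weights."""
    w = np.cos(np.deg2rad(lat_deg))[:, None] * np.ones_like(field)
    return float((field * w).sum() / w.sum())

def spread_and_error(ens: np.ndarray, truth: np.ndarray, lat_deg: np.ndarray):
    """Ensemble spread and ensemble-mean RMSE, aggregated globally with latitude weighting.

    ens: (members, lat, lon), truth: (lat, lon). For a reliable ensemble the two
    aggregated quantities should be close.
    """
    spread2 = ens.var(axis=0, ddof=1)                 # per-grid-point ensemble variance
    err2 = (ens.mean(axis=0) - truth) ** 2            # squared error of the ensemble mean
    return (np.sqrt(area_weighted_mean(spread2, lat_deg)),
            np.sqrt(area_weighted_mean(err2, lat_deg)))

lat = np.linspace(-89.5, 89.5, 180)
rng = np.random.default_rng(2)
truth = rng.normal(size=(180, 360))
ens = truth[None] + 0.7 * rng.normal(size=(1, 180, 360)) + 0.3 * rng.normal(size=(51, 180, 360))
spread, error = spread_and_error(ens, truth, lat)
print(f"spread = {spread:.2f}, ens-mean RMSE = {error:.2f}")   # under-dispersive toy ensemble
```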
Figure 2: Relative changes by PoET to a single date of the 2m temperature forecast at day 2, valid on 6 January 2021. The left column shows the first 3 raw ensemble members. The first row shows the first 3 PoET-corrected ensemble members. Other entries show the differences between raw and PoET-corrected ensemble members when the ensemble mean difference has been removed. The ensemble mean difference is plotted in the top left corner panel.
Figure 3: 3-day averaged maximum temperature between 29 June and 1 July 2021: (a) ERA5, (b) MBM member 21, (c) PoET member 21, (d) difference between ERA5 and forecast (\(y-x^{raw}\)), (e) MBM correction (\(x^{MBM}-x^{raw}\)) and (f) PoET correction (\(x^{PoET}-x^{raw}\)) made to the raw forecast. In case of post-processing methods leading to a perfect deterministic forecast, (e) and (f) would match (d).
Figure 4: Same as Fig. 3 but for 3-day accumulated precipitation between 18 March and 21 March 2021.
Fig. 6 shows maps of bias for the raw data and the PoET-corrected forecasts. We focus on lead time day 4, and the results are aggregated at each grid point over all verification days. We clearly see a general decrease in the bias, with almost no remaining bias over the ocean. The remaining pockets of (generally positive) bias after post-processing are mostly found over land. The broad structure of the bias is similar for MBM and for other lead times (not shown). Fig. 7 shows the gain in skill with PoET for the same lead time as in Fig. 6. CRPSS is computed using the raw ensemble as a baseline in Fig. 7(a) and MBM as a baseline in Fig. 7(b). Fig. 7(a) shows a widespread positive impact of PoET on the raw forecast skill, with a larger gain over land where the raw forecast bias is generally more pronounced. A detrimental effect of post-processing is observed in some regions (_e.g._ in South America, Africa, and Australia). These regions of negative skill score are also the ones where a bias is still present after post-processing, as shown in Fig. 6(b). In Fig. 7(b), there are very few areas where the CRPSS is less than zero, _i.e._ areas where MBM forecasts have more skill than PoET. Improvements through PoET are fairly consistent across the globe, with no regions where there are larger gains due to the neural network approach. Indeed, there is a strong agreement between the locations where MBM and PoET add value to the raw ensemble (not shown). Given that MBM learns a climatological correction for each grid point, this suggests that PoET has mostly reproduced this climatological local correction.
Figure 5: (a) CRPSS (the higher, the better), (b) spread and skill, (c) bias (optimal value zero), and (d) ESS (the higher, the better) of 2m temperature for 3 ensemble forecasts: raw ensemble, MBM, and PoET.
Figure 6: Bias of 2m-temperature forecasts at lead time day 4 for (a) the raw ensemble and (b) PoET. An optimal forecast has a bias close to 0.
Figure 7: CRPSS of 2m-temperature PoET forecast with respect to (a) the raw ensemble and (b) MBM. Positive values indicate a skill improvement with PoET. Note the difference in scale between the 2 plots.
### Total precipitation results
Figure 8: Same as Fig. 5 but for total precipitation.
Post-processing of precipitation forecasts is a more challenging task because of the form of the underlying forecast probability distribution, with a point mass at 0 (the no-precipitation probability) and a skewness capturing the more extreme events. Post-processing with MBM and PoET is tested with small changes to the configuration used for the post-processing of 2m temperature forecasts (see Section 3), and a similar set of plots is examined to assess the corrected forecast performance. Fig. 8 shows verification metrics aggregated globally as a function of the lead time. In contrast to the 2m temperature results, the added benefit of either post-processing approach is limited. With PoET, the skill improvement is approximately 2% in terms of CRPSS and ESS for the first several days of forecasting. With MBM, the skill improvement is \(\sim\)1% for most lead times. The gain in skill originates from improved performance in forecasting lower-intensity rather than higher-intensity events. Indeed, the BSS computed for 2 precipitation exceedance thresholds, 1mm and 10mm, shows a larger skill score for the former (see Fig. 14 in the Appendix). One explanation for the limited gain in skill with post-processing is that the raw forecast is already well-calibrated (see also Fig. 11(c) and (d) in the Appendix). Also, PoET improves the averaged performance in Figs 8(a) and (d) but seems to degrade both the bias and the spread-error ratio in Figs 8(b) and (c). This apparent paradox is explained by the large variations in forecast performance over the globe. A look at the mean absolute bias and mean absolute spread bias confirms that the bias and spread/skill relationship are overall significantly improved with PoET when assessed locally (see Fig. 13 in the Appendix). Similarly, the erratic spread correction with MBM at shorter lead times is not visible in Fig. 13(b), suggesting an averaging artifact. Fig. 13(b) also reveals that a point-by-point application of MBM does not seem appropriate to correct spread deficiencies at longer lead times.
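The Brier scores used for the 1mm and 10mm exceedance events can be computed from ensemble exceedance fractions, as in the hedged sketch below; the exact definitions used in this paper are those of Appendix 6.2, and all data here are synthetic.

```python
import numpy as np

def brier_score(ens_precip: np.ndarray, obs_precip: np.ndarray, thr_mm: float) -> float:
    """Brier score for the event '6-hourly precipitation exceeds thr_mm'.

    ens_precip: (n_cases, n_members), obs_precip: (n_cases,), both in mm.
    The forecast probability is the fraction of members above the threshold.
    """
    p = (ens_precip > thr_mm).mean(axis=1)
    o = (obs_precip > thr_mm).astype(float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(bs_fc: float, bs_ref: float) -> float:
    return 1.0 - bs_fc / bs_ref

# Toy skewed precipitation data: an over-forecasting raw ensemble and a rescaled one.
rng = np.random.default_rng(3)
obs = rng.gamma(shape=0.3, scale=4.0, size=2000)
members = obs[:, None] * rng.lognormal(mean=0.0, sigma=0.5, size=(2000, 51))
raw = 1.5 * members
post = members
for thr in (1.0, 10.0):
    bss = brier_skill_score(brier_score(post, obs, thr), brier_score(raw, obs, thr))
    print(f"BSS vs raw at {thr:>4} mm: {bss:+.2f}")
```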
Fig. 9 provides another perspective on the bias by presenting maps of averaged values over all verification days. Here, the focus is on lead time day 4. The precipitation bias is reduced over land and the maritime continent. However, the bias in the tropical Pacific and Atlantic remains unchanged after post-processing. We note that whilst positive biases are generally well corrected, negative biases are not. Finally, Figs 10(a) and 10(b) show maps of CRPSS at day 4 for PoET using the raw forecast and MBM as a reference, respectively. PoET improves precipitation ensemble forecasts mainly over the tropics. Very localized degradation of the skill with respect to the raw forecast could be due to a too short training sample. The benefit of using PoET rather than MBM appears predominantly in the tropics, but local positive skill scores are rather scattered. Alternating areas of positive and negative skill scores over the sea in the extra-tropics suggest that the 2 approaches are complementary.
Figure 9: Same as Fig. 6 but for total precipitation. A bias close to 0 is optimal. Regions with annual precipitation lower than 0.1mm are masked in grey.
Figure 10: Same as Fig. 7 but for total precipitation. Positive values indicate a gain in skill with PoET. Regions with annual precipitation lower than 0.1mm are masked in grey.
### _ENS-10_ comparison
During the preparation of the manuscript at hand, Ashkboos et al. (2022) produced a benchmark dataset for the post-processing of ensemble weather forecasts (referred to as _ENS-10_ in the following). This framework is exploited to further test the capability of PoET and compare its performance with state-of-the-art ML post-processing techniques. The _ENS-10_ dataset is similar to the one focused on here, originating from the same model re-forecast framework, but with several differences. The re-forecast dataset was constructed in 2018, the data are provided at a spatial resolution of 0.5\({}^{\circ}\), and the evaluation set comprises the last 2 years of the re-forecast, meaning that the model configuration is identical between training and testing. Also, the verification differs, as a non-latitude-weighted CRPS is used as the performance metric. To contribute to the benchmarking efforts, we train our model using the _ENS-10_ dataset and evaluate it following the same methodology. We reduce the data volume by only training each model on a chosen subset of the total ENS-10 variables. For Z500, we utilize all ENS-10 variables on this pressure level and use an equivalent approach for T850. For T2m, our model predictors are the 500hPa and 850hPa fields alongside the single-level variables 2m temperature, skin temperature, sea surface temperature, mean sea level pressure, total cloud cover, and 10m zonal and meridional wind. Also, the gridded data contain 720 points located at each pole that are unconstrained by the latitude-weighted training with PoET but contribute to the non-latitude-weighted evaluation. Therefore, we do not use PoET to correct these points but instead use the uncorrected raw forecast (the evaluation still includes these points to mirror the _ENS-10_ evaluation). In terms of model architecture, we make a small change to the PoET model to incorporate the higher spatial resolution: we increase the depth of the U-net structure by 1, putting a transformer block at its fourth level.
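A minimal sketch of the pole-point handling described above is given below: the post-processed field is spliced with the raw forecast at the polar latitude rows before the non-latitude-weighted evaluation. The assumption of one latitude row per pole (720 longitudes on the 0.5-degree grid) and the array layout are illustrative.

```python
import numpy as np

def splice_raw_at_poles(post: np.ndarray, raw: np.ndarray, n_pole_rows: int = 1) -> np.ndarray:
    """Replace post-processed values with the raw forecast at the polar latitude rows.

    post, raw: arrays of shape (..., n_lat, n_lon). On a regular 0.5-degree grid,
    each polar row holds 720 points, matching the 720 per-pole points mentioned
    in the text (assumption: one row per pole).
    """
    out = post.copy()
    out[..., :n_pole_rows, :] = raw[..., :n_pole_rows, :]      # north pole row(s)
    out[..., -n_pole_rows:, :] = raw[..., -n_pole_rows:, :]    # south pole row(s)
    return out

post = np.zeros((10, 361, 720))    # e.g. 10 members on a 0.5-degree grid including both poles
raw = np.ones_like(post)
spliced = splice_raw_at_poles(post, raw)
print(spliced[0, 0, :3], spliced[0, 180, :3])    # pole row taken from raw, mid-latitudes untouched
```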
Table 1 contrasts the PoET scores with the raw forecast and the best benchmarks from Ashkboos et al. (2022). For almost all configurations, the PoET approach leads to a significant improvement over the ENS-10 benchmarks. In particular, for 2m temperature, the CRPSS with PoET is considerably better than the results with the previous baselines. The improvement of \(\sim 20\%\) is similar to the gain measured with our dataset. For Z500 with a 5-member ensemble, the model improves the raw output but fails to beat the LeNet approach. We found no explanation for the limited success with this variable and ensemble-member configuration. Similar results are obtained with the extreme event weighted continuous ranked probability score (EECRPS) introduced in Eq. (2) of Ashkboos et al. (2022).
## 6 Conclusion
This work shows how to efficiently transform an ensemble forecast into a calibrated ensemble forecast, _i.e._ a set of physically consistent scenarios with improved statistical properties. We compare two methods: one machine-learning method based on self-attention transformers (PoET) and one statistical method used as a benchmark (MBM). For both methods, each member is calibrated separately but with the aim of optimising the ensemble properties as a whole. As a result, the post-processed ensemble has the spatial, temporal, and inter-variable coherence necessary to enable any downstream application. Also, both tested methods can be trained on a smaller re-forecast dataset (here with 11 members) to effectively calibrate a much larger operational ensemble (here with 51 members), preserving inter-member calibration. Ensemble post-processing is successfully applied to global gridded forecasts of 2m temperature and precipitation, using the ERA5 reanalysis as the ground truth. Our results show that both MBM and PoET can significantly improve the skill of the operational IFS ensemble. This improvement is achieved through a better calibration of the ensemble, both in terms of bias and spread-skill relationship. We note that PoET performs better on the headline scores (CRPS, ES), but there are some areas where MBM can locally outperform PoET. This latter point suggests that a combination of the two approaches could lead to further improvement of the forecast skill. Also, our case-study examples illustrate the ability of post-processing to improve existing ensemble members. The post-processing gain is smaller for precipitation than for 2m temperature. Indeed, the skill improvement of precipitation forecasts is relatively small in this application. This result contrasts with results obtained with downscaling approaches, where accounting for representativeness uncertainty can have a major impact on scores (Ben Bouallegue et al., 2020). In further work, we will consider how PoET could be applied to un-gridded observations, which would require architectural changes.
\begin{table}
\begin{tabular}{l l l l l l l l}
\hline \hline
 & & \multicolumn{2}{c}{**Z500** [\(\mathrm{m}^{2}\,\mathrm{s}^{-2}\)]} & \multicolumn{2}{c}{**T850** [K]} & \multicolumn{2}{c}{**T2m** [K]} \\
\cline{3-8}
\multicolumn{1}{c}{**Metric**} & **Model** & \multicolumn{1}{c}{5-ENS} & \multicolumn{1}{c}{10-ENS} & \multicolumn{1}{c}{5-ENS} & \multicolumn{1}{c}{10-ENS} & \multicolumn{1}{c}{5-ENS} & \multicolumn{1}{c}{10-ENS} \\
\hline
\multirow{5}{*}{CRPS} & Raw & \(81.03\) & \(78.24\) & \(0.748\) & \(0.719\) & \(0.758\) & \(0.733\) \\
 & EMOS & \(79.08^{\pm 0.739}\) & \(81.74^{\pm 6.131}\) & \(0.725^{\pm 0.002}\) & \(0.756^{\pm 0.052}\) & \(0.718^{\pm 0.003}\) & \(0.749^{\pm 0.054}\) \\
 & LeNet & \(\mathbf{75.56^{\pm 0.101}}\) & \(74.41^{\pm 0.109}\) & \(0.689^{\pm 2e-4}\) & \(0.674^{\pm 2e-4}\) & \(0.669^{\pm 7e-4}\) & \(0.659^{\pm 4e-4}\) \\
 & Transformer & \(77.30^{\pm 0.061}\) & \(74.79^{\pm 0.118}\) & \(0.686^{\pm 0.002}\) & \(0.665^{\pm 0.002}\) & \(0.649^{\pm 0.004}\) & \(0.626^{\pm 0.004}\) \\
 & PoET & \(76.41\) & \(\mathbf{73.72}\) & \(\mathbf{0.671}\) & \(\mathbf{0.654}\) & \(\mathbf{0.591}\) & \(\mathbf{0.586}\) \\
\hline
\multirow{5}{*}{EECRPS} & Raw & \(29.8\) & \(28.78\) & \(0.256\) & \(0.246\) & \(0.258\) & \(0.25\) \\
 & EMOS & \(29.10^{\pm 0.187}\) & \(30.13^{\pm 2.166}\) & \(0.248^{\pm 3e-4}\) & \(0.259^{\pm 0.018}\) & \(0.245^{\pm 0.001}\) & \(0.255^{\pm 0.018}\) \\
 & LeNet & \(\mathbf{27.72^{\pm 0.039}}\) & \(27.30^{\pm 0.037}\) & \(0.235^{\pm 5e-5}\) & \(0.230^{\pm 8e-5}\) & \(0.228^{\pm 2e-4}\) & \(0.224^{\pm 1e-4}\) \\
 & Transformer & \(28.35^{\pm 0.026}\) & \(27.42^{\pm 0.047}\) & \(0.235^{\pm 0.001}\) & \(0.227^{\pm 0.001}\) & \(0.222^{\pm 0.001}\) & \(0.214^{\pm 0.001}\) \\
 & PoET & \(28.02\) & \(\mathbf{27.05}\) & \(\mathbf{0.229}\) & \(\mathbf{0.223}\) & \(\mathbf{0.202}\) & \(\mathbf{0.200}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Global mean CRPS (top block) and EECRPS (bottom block) on the ENS-10 test set (2016–2017) for the baseline models and PoET with five (5-ENS) and ten (10-ENS) ensemble members.
Direct applications of the methodologies developed here include post-processing for forecast verification and inter-comparison purposes. For example, bias correction can be applied to better understand changes in CRPS results with new IFS model versions (Leutbecher and Haiden, 2021). Also, post-processing would be a necessary step for a fair comparison of NWP forecasts with statistically optimized (data-driven) ones in forecasting competition frameworks (see for example Rasp et al., 2020). Finally, the proposed methods could be trivially adapted to a higher-resolution version of the truth, which could pave the way to ensemble post-processing of global gridded data for operational forecasting.
## Acknowledgements
The authors thank Tobias Finn for many interesting discussions and the original idea of using transformer techniques in the ensemble dimension. Peter Dueben, Matthew Chantry and Jesper Dramsch gratefully acknowledge funding from the MAELSTROM EuroHPC-JU project (JU) under No 955513. The JU receives support from the European Union's Horizon research and innovation program and from the United Kingdom, Germany, Italy, Luxembourg, Switzerland, and Norway. Peter Dueben gratefully acknowledges funding from the ESiWACE project funded under Horizon 2020 No. 823988. Mariana Clare gratefully acknowledges funding by the European Union under the Destination Earth initiative. Finally, all authors acknowledge the use of compute resources from both the European Weather Cloud and Microsoft Azure.
## Data and Code availability
It is currently difficult for the authors to share the data because the hardware used to generate it is being replaced. The authors will, however, make the data available for download when the paper is eventually published. The PoET source code will be made available soon; the MBM parameter estimation relies on the Climdyn/pythie package (Demaeyer, 2022).
2304.03147
Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions
Deep neural networks have been critical in the task of Visual Question Answering (VQA), with research traditionally focused on improving model accuracy. Recently, however, there has been a trend towards evaluating the robustness of these models against adversarial attacks. This involves assessing the accuracy of VQA models under increasing levels of noise in the input, which can target either the image or the proposed query question, dubbed the main question. However, there is currently a lack of proper analysis of this aspect of VQA. This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models. It is hypothesized that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, a pool of basic questions is ranked based on their similarity to the main question, and this ranking problem is cast as a LASSO optimization problem. Additionally, this work proposes a novel robustness measure, R_score, and two basic question datasets to standardize the analysis of VQA model robustness. The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models. Moreover, the experiments show that in-context learning with a chain of basic questions can enhance model accuracy.
Jia-Hong Huang, Modar Alfadly, Bernard Ghanem, Marcel Worring
2023-04-06T15:32:35Z
http://arxiv.org/abs/2304.03147v1
Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions ###### Abstract. Deep neural networks have been critical in the task of Visual Question Answering (VQA), with research traditionally focused on improving model accuracy. Recently, however, there has been a trend towards evaluating the robustness of these models against adversarial attacks. This involves assessing the accuracy of VQA models under increasing levels of noise in the input, which can target either the image or the proposed query question, dubbed the main question. However, there is currently a lack of proper analysis of this aspect of VQA. This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models. It is hypothesized that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, a pool of basic questions is ranked based on their similarity to the main question, and this ranking problem is cast as a _LASSO_ optimization problem. Additionally, this work proposes a novel robustness measure, \(R_{\text{score}}\), and two basic question datasets to standardize the analysis of VQA model robustness. The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models. Moreover, the experiments show that in-context learning with a chain of basic questions can enhance model accuracy. 2020 ## 1. Introduction Visual Question Answering (VQA) is a complex computer vision task that involves providing an algorithm with a natural language question relating to an image and requiring it to produce a natural language answer for that particular question-image pair. In recent times, numerous VQA models [1; 3; 4; 7; 9; 16; 18; 39; 43; 51; 54; 56; 62; 72; 75; 78] have been proposed to address this challenge. The primary performance metric used to evaluate these models is accuracy. The research community has begun to acknowledge that accuracy alone is not a sufficient metric to assess model performance [36; 37]. In addition to accuracy, models should also be robust, meaning their output should not be significantly affected by minor _perturbations_ or _noise_ added to the input. This includes replacing words with similar words, phrases, or sentences in input questions, or slightly altering pixel values in the image. The analysis of model robustness and training of robust models is a rapidly growing research topic for deep learning models applied to images [8; 15; 81]. However, to the best of our knowledge, an acceptable and standardized method for measuring robustness in VQA models does not currently exist. To establish a measure of robustness, we note that the ultimate goal for VQA models is to perform comparably to humans. When presented with a question or a highly similar question, humans typically provide the same or a very similar answer. This phenomenon has been reported in psychology research [70]. In this work, we designate the input question as the main question and define a basic question as a question that is semantically similar to the main question. If we add or replace some words or phrases in the main question with semantically similar entities, the VQA model should output the same or a very similar answer. 
This is illustrated in Figure 1, and we consider these added entities as small perturbations or noise to the input. The model is considered robust if it produces the same answer. As studying robustness necessitates the analysis of VQA model accuracy under varying noise levels, we require a method for quantifying the level of noise for a given question. We posit that a basic question with a higher similarity score to the main question introduces less noise when added to the main question, and vice versa. Inspired by this idea, we present a novel method for measuring the robustness of VQA models, as illustrated in Figure 2. The method comprises two modules: a VQA model and a Noise Generator. The Noise Generator accepts a plain text main question (MQ) and a plain text basic question dataset (BQD) as input. It begins by ranking the basic questions in BQD based on their similarity to MQ using a text similarity ranking method. We measure the robustness of the VQA model by comparing its accuracy with and without generated noise at different noise levels. We propose a robustness measure \(R_{score}\) to evaluate performance. When considering the similarity between a main question and a basic question, there are various measures that can be used to produce a score. These scores then determine the ranking of the basic questions. Text similarity metrics like BLEU (BiLingual Evaluation Understudy) [64] are commonly used to compute the overlap between two texts, but they cannot effectively capture the semantic meaning of the text. As a result, rankings based on these metrics may not accurately reflect the similarity between questions. To enhance the quality of question ranking, we introduce a new method formulated using _LASSO_ optimization and compare it with commonly used textual similarity measures. We evaluate the effectiveness of our method by ranking our proposed Basic Question Datasets (BQDs), including the General Basic Question Dataset (GBQD) and Yes/No Basic Question Dataset (YNBQD). We also examine the robustness of six pre-trained state-of-the-art VQA models [4; 7; 39; 51] and compare the results obtained from our proposed _LASSO_ ranking method with other metrics in BQD ranking through extensive experiments. The experimental results indicate the effectiveness of our proposed method and demonstrate that in-context learning with a chain of basic questions improves the model's accuracy. It is crucial to emphasize that commonly used textual similarity measures are not effective in controlling the noise level in basic question (BQ) rankings. Consequently, conducting a robustness analysis becomes extremely challenging. However, our proposed _LASSO_ basic question ranking method is highly efficient in quantifying and controlling the intensity of the injected noise level. This approach empowers us to evaluate the robustness of VQA models under various noise levels and explore their performance accurately. This paper presents several contributions to the field of Visual Question Answering (VQA): * We introduce two datasets of basic questions, which can be used to evaluate the robustness of VQA models. These datasets are made publicly available. * We propose a novel method for measuring the robustness of VQA models and apply it to six state-of-the-art models. Our method can generate noise levels of varying strength and quantifies the impact of this noise on model performance. 
* We introduce a new text similarity ranking method based on _LASSO_ optimization and demonstrate its superiority over seven popular similarity metrics. This method can effectively rank basic questions according to their similarity to a main question. * We adopt an in-context learning perspective to explore how basic questions, _i.e._, chain-of-question, can enhance the performance of VQA models. The rest of this paper is structured as follows. In Section 2, we provide an overview of the related works. In Section 3, we describe the details of our proposed method and demonstrate how to use it to measure the robustness of VQA models. Furthermore, in Sections 4 and 5, we present various analyses on our proposed General Basic Question Dataset (GBQD) and Yes/No Basic Question Dataset (YNBQD) [24]. Finally, in Section 6, we compare the robustness and accuracy performance of state-of-the-art VQA models. **Relations to our previous work** This paper builds upon our previous work, which was presented as an oral paper at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-2019) [24], and presents several improvements. Firstly, we propose a framework, depicted in Figure 7, and a threshold-based criterion, outlined in Algorithm 1, to leverage basic questions (BQs) for analyzing the robustness of the HieCoAtt VQA model [51]. Secondly, we adopt an in-context learning perspective to demonstrate how BQs can enhance the performance of the HieCoAtt VQA model. Thirdly, we emphasize the necessity of preprocessing question sentences for our proposed LASSO ranking method, which ensures its correct functioning. Finally, we present an extended experiment on YNBQD. The current paper is a fully restructured and rewritten version of our previous work and incorporates these new contributions. Figure 1: This figure (Figure 1) is inspired by Deductive Reasoning in Human Thinking [70] and illustrates how humans behave when subjected to multiple questions about a specific topic. In cases (a) and (b), the person might have the same answer “Mercedes Benz” in mind for both cases. However, in case (c), the person would start to consider the relationships among the provided questions and candidate answers to form the final answer, which may differ from the final answer in cases (a) and (b). When given more basic questions, the person would need to consider all the possible relationships among the questions and answer candidates. These relationships can be very complex, especially when the additional basic questions have low similarity scores to the main question, which can mislead the person. In such cases, the extra basic questions act as large disturbances. It is important to note that the relationships and final answer in cases (a) and (b) could be the same but different from the case (c). To help clarify these relationships, we have used different colors in the figure. ## 2. Related Work In recent years, VQA has emerged as a captivating and challenging task, attracting significant attention from researchers in various fields, such as natural language processing (NLP), computer vision, and machine learning. A wide range of approaches has been proposed to tackle this task, as evidenced by a growing body of literature (Beng et al., 2015; Chen et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2020; Li et al., 2021; Li et al., 2022; Li et al., 2021; Li et al., 2022). 
In this paper, we review related works from different perspectives, such as sentence evaluation metrics, models' accuracy and robustness, and datasets.
**Sentence Evaluation Metrics** Various sentence evaluation metrics have been widely adopted in different tasks, such as video/image captioning (Shou et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2020) and text summarization (Chen et al., 2019). In this paper, we leverage commonly used metrics to measure the similarity between BQ and MQ. BLEU (BiLingual Evaluation Understudy) (Li et al., 2022) is a widely used metric for machine translation, based on precision. However, its effectiveness has been questioned by some studies (Chen et al., 2017; Li et al., 2022). METEOR (Chen et al., 2019), on the other hand, is based on the harmonic mean of unigram precision and recall, and it can handle stemming and synonym matching. It has been proposed as a solution to some of the problems found with BLEU and produces a better correlation with translations by human experts. While METEOR evaluates the correlation at the sentence and segment level, BLEU looks for correlations at the corpus level. ROUGE (Recall Oriented Understudy of Gisting Evaluation) (Li et al., 2022) is another recall-based metric that is popular in the text summarization community. It tends to reward longer sentences with higher recall. CIDEr (Li et al., 2022), a consensus-based metric, rewards a sentence for being similar to the majority of descriptions written by human experts and is often used in the image captioning community. It extends existing metrics with _tf-idf_ weights of \(n\)-grams between a candidate sentence and a reference sentence. However, CIDEr can be inefficient for natural language sentence evaluation, as it may weigh unnecessary parts of the sentence and lead to ineffective scores. In our experiments, we use all of the above metrics along with our proposed _LASSO_ ranking approach to rank BQs and compare their performance.
Figure 2. The proposed method for measuring the robustness of VQA models. Our proposed method is based on the _“Accuracy Generated by Ranked Basic Question Dataset”_ and _“Accuracy Generated by Clean VQA Testing Set”_, which are combined to generate the \(R_{score}\), our proposed measure of robustness. The upper part of the figure depicts the VQA Module and Noise Generator, which are the two main components of our method. The lower part of the figure shows a detailed view of the Noise Generator. Our method allows for the use of two different Basic Question Datasets, GBQD and YNBQD, and eight different question ranking methods. If new datasets or ranking methods become available in the future, they can be incorporated into our method. The output of the Noise Generator is the concatenation of three ranked basic questions. “\(\oplus\)” denotes the direct concatenation of basic questions.
**Evaluating Image Captioning** Several techniques commonly used in image captioning tasks have also been applied to the VQA task (Chen et al., 2017; Li et al., 2021; Li et al., 2022; Li et al., 2021). For instance, in (Chen et al., 2017), the authors utilize a language model to combine a set of possible words detected in multiple regions of the input image and generate a corresponding description. In (Li et al., 2022), a convolutional neural network model is used to extract high-level image features, which are then given to an LSTM unit as the first input.
In [82], an algorithm is proposed to generate a word at each time step by focusing on local image regions related to the predicted word at the current time step. The authors of [38] suggest a deep neural network model to learn how to embed language and visual information into a common multimodal space. Furthermore, while BLEU is a commonly used metric to evaluate image captioning results, it may not be the most appropriate metric to assess the quality of the captions due to its inherent limitations. **Evaluating Visual Question Answering** VQA is a multimodal task, involving two types of inputs with different modalities: the question sentence and the image. Researchers have focused on modeling the interactions between the two different embedding spaces in several ways. For instance, bilinear interaction between two embedding spaces has been shown to be successful in deep learning for fine-grained classification and multimodal language modeling in previous works such as [40; 49]. Other methods proposed to compute the outer product between visual and textual features, such as Multimodal Compact Bilinear (MCB) pooling [16], or parameterize the full bilinear interactions between image and question sentence embedding spaces, as in Multimodal Low-rank Bilinear (MLB) pooling [39]. An alternative method proposed in [7] efficiently parameterizes the bilinear interactions between textual and visual representations, and shows that MCB and MLB are special cases of their proposed method. Some researchers exploit Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) to build a question generation algorithm in [68], and RNN to combine the word and image features for the VQA task in [56; 17; 57]. The authors of [52] have tried to exploit convolutions to group the neighboring features of word and image, while the authors of [63] use Gated Recurrent Unit (GRU) [10] to encode an input question and introduce a dynamic parameter layer in their CNN model, where the weights of the model are adaptively predicted by the embedded question features. However, to the best of our knowledge, no existing VQA method has been evaluated by a robustness-based dataset, since such a dataset does not exist. **Robustness of Neural Network Models** Several recent works (e.g., [8; 12; 15; 19; 21; 22; 23; 24; 31; 33; 36; 37; 50; 79; 81; 83]) have explored the issue of deep learning model robustness from an image or text perspective. In [8; 15], the authors analyze model robustness by adding noise or perturbations to images and observing their impact on predicted results. The authors of [59] provide theoretical evidence for a strong relationship between small curvature and large robustness, proposing an efficient regularizer that encourages small curvatures and leads to significant boosts in neural network robustness. While most existing works focus on adding noise to the image input, our work instead focuses on adding noise to the text input [24]. Specifically, we consider the semantically related BQs of a given MQ as a type of noise for the MQ, using these BQs to evaluate the robustness of VQA models. **Datasets for Visual Question Answering** Recently, several VQA datasets focused on accuracy have been proposed. The first dataset is DAQUAR (DAtase for QUestion Answering on Real-world images) [53], containing around \(12.5k\) manually annotated question-answer pairs for approximately \(1449\) indoor scenes [73]. 
The original DAQUAR dataset provides only one ground truth answer per question, but additional answers are collected by the authors of [57]. Three other VQA datasets based on MS-COCO [48] are subsequently proposed: [4; 17; 69]. In [69], existing image caption generation annotations are transformed into question-answer pairs using a syntactic parser [42] and hand-designed rules. VQA [4], another popular dataset, includes approximately \(614k\) questions about the visual content of \(205k\) real-world images, along with \(150k\) questions based on \(50k\) abstract scenes. The VQA dataset provides \(10\) answers for each question, and the test set answers have not been released due to the VQA challenge workshop. In [17], approximately \(158k\) images are annotated with \(316k\) Chinese question-answer pairs and their English translations. Visual Madlibs [85] is introduced to simplify VQA model performance evaluation by introducing a multiple-choice question-answering task. In this task, the VQA model chooses one of four provided answers based on a given image and prompt, eliminating ambiguity in answer candidates. The performance of different VQA models is measured using a simple accuracy metric. However, the holistic reasoning required by VQA models based on the given images in this task remains challenging for machines, despite the simple evaluation. Automatic and simple performance evaluation metrics have been incorporated into building the VQA dataset [53; 54; 55]. The Visual7W dataset, developed by the authors of [89], contains over \(330k\) natural language question-answer pairs based on the Visual Genome dataset [44]. Unlike other datasets such as VQA and DAQUAR, the Visual Genome dataset focuses on answering the six Ws (_what, where, when, who, why,_ and _how_) with a text-based sentence. Visual7W builds upon this foundation by including extra correspondences between questions and answers, as well as requiring answers that locate objects. Multiple-choice answers, similar to those in Visual Madlibs [85], are also included. Additionally, the authors of [61] have proposed Xplore-M-Ego, a dataset of images with natural language queries, a media retrieval system, and collective memories. Xplore-M-Ego focuses on a dynamic, user-centric scenario where answers are conditioned not only on the question, but also on the geographical position of the questioner. Another related task is video question answering, which requires understanding long-term relations in videos. The authors of [88] have proposed a task that involves filling in blanks in captions associated with videos, requiring inference of the past, present, and future across a diverse range of video description data from movies [71; 74; 88], cooking videos [67], and web videos [35]. However, these datasets are accuracy-based and cannot evaluate the robustness of VQA models. In this study, we propose the robustness-based datasets GBQD and YNBQD to address this issue.
## 3. Methodology
This section presents our proposed method, which aims to analyze the robustness of pre-trained VQA models using a set of BQs generated with different metrics. First, we discuss how we embed questions and use various ranking methods, including BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE, CIDEr, METEOR, and our proposed _LASSO_ ranking method, to create BQs. Next, we explain how we evaluate the robustness of six state-of-the-art VQA models using these BQs.
The method consists of two main components: the VQA module, which includes the model under analysis, and the Noise Generator, which generates noise for a given main question using the ranking methods. Our hypothesis, as introduced in the previous section, is that an accurately ranked set of BQs should lead to decreasing accuracy of the VQA model. To facilitate the discussion, we introduce some basic notations for our method. The overall approach is illustrated in Figure 2. **Question Encoding** The first step in our method is the embedding of the question sentences. Let \(w_{i}^{1}...,w_{i}^{N}\) be the words in question \(q_{i}\), with \(w_{i}^{t}\) denoting the \(t\)-th word for \(q_{i}\) and \(\mathbf{x}_{i}^{t}\) denoting the \(t\)-th word embedding for \(q_{i}\). Various text encoders such as Word2Vec [(58)], GloVe [(65)], and Skip-thoughts [(41)] are commonly used in natural language processing [(25; 26)]. Since we aim to generate BQs that are semantically similar to the given MQ, we need an encoder that can accurately capture the meaning of a sentence. Among these options, Skip-thoughts is particularly suited for this task because it focuses on capturing the semantic relationships between words within a sentence. Therefore, we use Skip-thoughts to embed the questions in this paper. The Skip-thoughts model utilizes an RNN encoder with GRU activations to map an English sentence, denoted by \(q_{i}\), to a feature vector \(\mathbf{v}\in\mathbf{R}^{4800}\). We encode all the training and validation questions from the VQA dataset [4] into a matrix \(\mathbf{A}\), where each column represents a Skip-thoughts embedded basic question candidate. In our approach, we use \(\mathbf{b}\) to represent the Skip-thoughts encoded main question. At each time step, the question encoder generates a hidden state \(\mathbf{h}_{i}^{t}\). This state can be viewed as the representation of the sequence \(\{w_{i}^{1},...,w_{i}^{t}\}\). As such, the final hidden state \(\mathbf{h}_{i}^{N}\) represents the entire sequence \(\{w_{i}^{1},...,w_{i}^{t},...,w_{i}^{N}\}\), which corresponds to a question sentence in our case. To simplify the presentation, we omit the index \(i\) and use the following sequential equations to encode a question: \[\mathbf{r}^{t} = \sigma(\mathbf{U}_{r}\mathbf{h}^{t-1}+\mathbf{W}_{r}\mathbf{x}^{ t}) \tag{1}\] \[\mathbf{z}^{t} = \sigma(\mathbf{U}_{z}\mathbf{h}^{t-1}+\mathbf{W}_{z}\mathbf{x}^{ t})\] (2) \[\mathbf{\hat{h}}^{t} = \tanh(\mathbf{U}(\mathbf{r}^{t}\odot\mathbf{h}^{t-1})+\mathbf{W }\mathbf{x}^{t})\] (3) \[\mathbf{h}^{t} = \mathbf{z}^{t}\odot\mathbf{\hat{h}}^{t}+(1-\mathbf{z}^{t}) \odot\mathbf{h}^{t-1}, \tag{4}\] where the matrices of weight parameters are denoted by \(\mathbf{U}_{r}\), \(\mathbf{U}_{z}\), \(\mathbf{W}_{r}\), \(\mathbf{W}_{z}\), \(\mathbf{U}\) and \(\mathbf{W}\), respectively. At the time step \(t\), \(\mathbf{\hat{h}}^{t}\) represents the state update, \(\mathbf{r}^{t}\) is the reset gate, and \(\mathbf{z}^{t}\) is the update gate. The symbol \(\odot\) denotes an element-wise product, and the activation function is denoted by \(\sigma\). Note that \(\mathbf{h}^{t}=0\) for \(t=0\). **Level-controllable Noise Generator** According to the assumption mentioned in the _Introduction_, generating level-controllable noise, _i.e._, BQ, will involve similarity-based ranking. However, existing textual similarity measures such as BLEU, CIDEr, METEOR, and ROUGE are not effective in capturing semantic similarity. 
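Before turning to the ranking method, the encoder recursion in Eqs. (1)-(4) can be made concrete with a short script. The following is a minimal NumPy sketch, not the actual Skip-thoughts encoder (which is pretrained and maps a sentence to a \(4800\)-dimensional vector); the weight shapes, the toy dimensions, and the function name `gru_encode` are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_encode(X, Ur, Wr, Uz, Wz, U, W):
    """Run the GRU recursion of Eqs. (1)-(4) over the word embeddings X (one row
    per word) and return the final hidden state h^N, i.e., the question vector."""
    h = np.zeros(Ur.shape[0])                       # h^0 = 0
    for x_t in X:                                   # x^t: t-th word embedding
        r = sigmoid(Ur @ h + Wr @ x_t)              # reset gate, Eq. (1)
        z = sigmoid(Uz @ h + Wz @ x_t)              # update gate, Eq. (2)
        h_hat = np.tanh(U @ (r * h) + W @ x_t)      # candidate state, Eq. (3)
        h = z * h_hat + (1.0 - z) * h               # new hidden state, Eq. (4)
    return h

# Toy usage with small, hypothetical dimensions.
rng = np.random.default_rng(0)
d_in, d_h, n_words = 300, 16, 5
Ur, Uz, U = (0.1 * rng.standard_normal((d_h, d_h)) for _ in range(3))
Wr, Wz, W = (0.1 * rng.standard_normal((d_h, d_in)) for _ in range(3))
X = rng.standard_normal((n_words, d_in))            # embeddings of a 5-word question
v = gru_encode(X, Ur, Wr, Uz, Wz, U, W)              # sentence representation h^N
```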
To address this issue, we propose a new optimization-based ranking method in this work. We cast the problem of generating BQs that are similar to an MQ as a _LASSO_ optimization problem. By embedding all the main questions and the basic question candidates using Skip-thoughts, _LASSO_ modeling enables us to determine a sparse number of basic questions that are suitable to represent the given main question. The _LASSO_ model can be expressed as follows: \[\min_{\mathbf{x}}\ \frac{1}{2}\left\|\mathbf{Ax}-\mathbf{b}\right\|_{2}^{2}+ \lambda\left\|\mathbf{x}\right\|_{1}, \tag{5}\] where \(\lambda\) denotes a tradeoff parameter that controls the quality of BQs. To create our basic question dataset (BQD), we combine the unique questions from the training and validation datasets of the popular VQA dataset [4], and use the testing dataset as our main question candidates. However, to ensure effective _LASSO_ modeling, we must preprocess the question sentences by ensuring that none of the main questions are already present in our basic question dataset. If any main questions are already in the BQD, it will result in an unhelpful ranking. Since we are encouraging sparsity, all other questions will be neglected with a similarity score of zero. **BQ Generation by LASSO-based Ranking Method** In this subsection, we outline the procedure for using the _LASSO_-based ranking method to generate basic questions corresponding to a given main question, as illustrated in Figure 2. To obtain the sparse solution \(\mathbf{x}\), we solve the _LASSO_ optimization problem, with the elements of \(\mathbf{x}\) representing the similarity scores between the main question \(\mathbf{b}\) and each corresponding BQ in \(\mathbf{A}\). The BQ candidates are embedded using Skip-thoughts, and the top-\(k\) BQs for a given MQ are selected based on the ranking of scores in \(\mathbf{x}\). Higher similarity scores indicate greater similarity between the BQ and the MQ, and vice versa. Moreover, we note that VQA models tend to perform best on yes/no questions, which are comparatively simple. Consequently, we also generate a Yes/No Basic Question dataset using the aforementioned basic question generation approach for further experiments. **Details of the Proposed Basic Question Dataset for Robustness Analysis and In-context Learning** We recognize that the size of the basic question dataset plays a crucial role in the effectiveness of the noise generation method. Generally, having a larger dataset increases the likelihood of finding similar questions to any given main question. With this in mind, we propose two large-scale basic question datasets, the General Basic Question Dataset and the Yes/No Basic Question Dataset, using the _LASSO_-based ranking method. We set \(k=21\) to limit the number of top-ranked BQs to avoid having similarity scores that are too low. As a result, we obtain the ranked BQs of \(244,302\) testing question candidates. To analyze the robustness and enable in-context learning [11; 20; 34; 87] for VQA models, we utilize the proposed General and Yes/No BQ datasets, which are structured as \(\{\text{Image, }\ MQ,\ 21\ (BQ\) + corresponding similarity score\(\}\)). These datasets comprise \(81,434\) images from the testing images of MS COCO dataset [48] and \(244,302\) main questions from the testing questions of VQA dataset (open-ended task) [4], respectively. We generate the corresponding similarity scores of General and Yes/No BQ by our _LASSO_ ranking approach. 
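The ranking step in Eq. (5) amounts to one sparse regression per main question. A minimal sketch is shown below, assuming scikit-learn and NumPy; the tiny random matrix stands in for the \(4800\times 186027\) Skip-thoughts matrix \(\mathbf{A}\), and the helper name `rank_basic_questions` is ours.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rank_basic_questions(A, b, lam=1e-6, k=21):
    """Solve min_x 0.5*||A x - b||_2^2 + lam*||x||_1 (Eq. 5) and return the indices
    and similarity scores of the top-k basic-question candidates (columns of A)."""
    # scikit-learn's Lasso minimizes (1/(2*n_samples))*||A x - b||^2 + alpha*||x||_1,
    # so alpha = lam / n_samples reproduces the objective of Eq. (5).
    n_samples = A.shape[0]
    model = Lasso(alpha=lam / n_samples, fit_intercept=False, max_iter=10000)
    model.fit(A, b)
    scores = model.coef_
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Toy stand-in: 64-dimensional embeddings and 500 BQ candidates.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 500))
b = A[:, 42] + 0.01 * rng.standard_normal(64)   # a "main question" close to candidate 42
top_idx, top_scores = rank_basic_questions(A, b)
```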
Our General and Yes/No basic questions are extracted from the validation and training questions of the VQA dataset (open-ended task). In total, our GBQD and YNBQD contain \(5,130,342\) (General BQ + corresponding similarity score) tuples and \(5,130,342\) (Yes/No BQ + corresponding similarity score) tuples. **Analyzing Robustness Using General and Yes/No Basic Questions with \(R_{score}\)** To evaluate the robustness of a VQA model, it is important to measure how its accuracy is affected when its input is corrupted with noise. This noise can take various forms, such as random, structured, or semantically related to the final task. In VQA, the input consists of an MQ-image pair, and the noise can be introduced into both components. When injecting noise into the question, it is important to maintain some contextual semantics to ensure the measure is informative, rather than introducing misspellings or randomly changing or dropping words. In this study, we propose a novel measure of robustness for VQA by introducing semantically relevant noise to the questions, with the ability to control the level of noise. The VQA dataset [4] provides both open-ended and multiple-choice tasks for evaluation, with the latter requiring the selection of an answer from \(18\) candidates. For the former, the answer can be any phrase or word. In both cases, accuracy is used as the evaluation metric, as it is considered to reflect human consensus. We adopt the accuracy measure as defined in [4]: \[\text{Accuracy}_{VQA}=\frac{1}{N}\sum_{i=1}^{N}\min\left\{\frac{\sum_{t\in T_{ i}}\mathbb{I}[a_{i}=t]}{3},1\right\}, \tag{6}\] where \(\mathbb{I}[\cdot]\) is an indicator function, while \(N\) is the total number of examples. \(a_{i}\) is the predicted answer, and \(T_{i}\) is the answer set of the \(i^{th}\) image-question pair. For a predicted answer to be considered correct, it must have the agreement of at least three annotators. When the predicted answer is incorrect, the score depends on the total number of agreements. \[\mathit{Acc}_{di}=\left\lvert\mathit{Acc}_{\mathit{vqa}}-\mathit{Acc}_{\mathit{ bqd}}\right\rvert, \tag{7}\] where \(\mathit{Acc}_{\mathit{vqa}}\) and \(\mathit{Acc}_{\mathit{bqd}}\) are calculated based on Equation (6). To assess the robustness of a VQA model, we begin by computing its accuracy on the clean VQA dataset [4], denoted by \(\mathit{Acc}_{\mathit{vqa}}\). We then introduce noise into each question-answer pair by appending the top-ranked \(k\) BQs to the original question MQ, and re-evaluate the model's accuracy on this noisy input, denoted by \(\mathit{Acc}_{\mathit{bqd}}\). Next, we compute the absolute difference between \(\mathit{Acc}_{\mathit{vqa}}\) and \(\mathit{Acc}_{\mathit{bqd}}\) using Equation (7) to obtain \(\mathit{Acc}_{di}\), which we use to compute the robustness score \(R_{score}\). The parameters \(t\) and \(m\) in Equation (8) represent the tolerance and maximum robustness limit, respectively. We aim to make the score sensitive to small differences in \(Acc_{di}\), but only above \(t\), and less sensitive for larger differences, but only below \(m\). Therefore, \(R_{score}\) is designed to smoothly decrease from 1 to 0 as \(Acc_{di}\) varies from \(t\) to \(m\), with the rate of change transitioning from exponential to sublinear within the range \([t,m]\). \[R_{score}=clamp_{0}^{1}\left(\frac{\sqrt{m}-\sqrt{Acc_{di}}}{\sqrt{m}-\sqrt{ t}}\right) \tag{8}\] \[clamp_{a}^{b}(x)=\max\left(a,\min\left(b,x\right)\right), \tag{9}\] where \(0\leq t<m\leq 100\). 
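A compact implementation of Eqs. (6)-(9) may help clarify how the two measures interact; the following sketch assumes NumPy, and the function names are ours.

```python
import numpy as np

def vqa_accuracy(pred_answers, answer_sets):
    """VQA accuracy of Eq. (6): a predicted answer counts as fully correct
    when at least three of the ten annotators agree with it."""
    scores = [min(sum(a == t for t in T) / 3.0, 1.0)
              for a, T in zip(pred_answers, answer_sets)]
    return 100.0 * float(np.mean(scores))

def r_score(acc_vqa, acc_bqd, t=0.05, m=20.0):
    """Robustness score of Eqs. (7)-(9) with tolerance t and maximum limit m."""
    acc_di = abs(acc_vqa - acc_bqd)                                   # Eq. (7)
    val = (np.sqrt(m) - np.sqrt(acc_di)) / (np.sqrt(m) - np.sqrt(t))  # Eq. (8)
    return float(np.clip(val, 0.0, 1.0))                              # clamp, Eq. (9)

# Example: a 5-point accuracy drop under the (t, m) = (0.05, 20) setting of Table 4.
print(r_score(60.32, 55.32))   # approximately 0.53
```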
To provide a better understanding, we present a visualization of the \(R_{score}\) function in Figure 3. ## 4. Experiments This section presents the implementation details and experiments performed to validate and analyze the proposed method. **Dataset.** We performed experiments on the GBQD, YNBQD, and VQA [4] datasets. The VQA dataset is based on the MS COCO dataset [48] and comprises \(248,349\) training, \(121,512\) validation, and \(244,302\) testing questions. Each question in the VQA dataset has ten associated answers annotated by different individuals on AMT (Amazon Mechanical Turk). Nearly 90% of the answers have a single word, and 98% of the answers are no more than three words long. Please refer to the _Details of the Proposed Basic Question Dataset for Robustness Analysis and In-context Learning_ section for further information on GBQD and YNBQD. To gain a better understanding of the datasets, we provide some examples in Table 1. Figure 4. Image “(a)” corresponds to Table 1-(a). Image “(b)” corresponds to Table 1-(b). Figure 3. A visualization of the \(R_{score}\) function is denoted in red. The right-hand part of the \(f(x)\) and \(g(x)\) is plotted for convenience. The tolerance and maximum robustness limit are represented by \(t\) and \(m\), respectively. For further explanation, please see the section _Analyzing Robustness Using General and Yes/No Basic Questions with \(R_{score}\)_. \begin{table} \begin{tabular}{|c|c|c|} \hline **BQ ID** & **Similarity Score** & **BQ** \\ \hline **01** & **0.295** & **How old is the truck?** \\ **02** & **0.240** & **How old is this car?** \\ **03** & **0.142** & **How old is the vehicle?** \\ \hline **04** & **0.120** & **What number is the car?** \\ **05** & **0.093** & **What color is the car?** \\ **06** & **0.063** & **How old is the bedroom?** \\ \hline **07** & **0.063** & **What year is the car?** \\ **08** & **0.037** & **Where is the old car?** \\ **09** & **0.033** & **How old is the seat?** \\ \hline **10** & **0.032** & **How old is the cart?** \\ **11** & **0.028** & **What make is the blue car?** \\ **12** & **0.028** & **How old is the golden retriever?** \\ \hline **13** & **0.024** & **What is beneath the car?** \\ **14** & **0.022** & **Is the car behind him a police car?** \\ **15** & **0.020** & **How old is the pilot?** \\ \hline **16** & **0.017** & **How old are you?** \\ **17** & **0.016** & **How old is the laptop?** \\ **18** & **0.016** & **How old is the television?** \\ \hline **19** & **0.015** & **What make is the main car?** \\ **20** & **0.015** & **What type and model is the car?** \\ **21** & **0.015** & **What is lifting the car?** \\ \hline \end{tabular} \end{table} Table 1. “MQ: How old is the car?” and image “(a)” correspond to Figure 4-(a). “MQ: What is the cat sitting on?” and image “(b)” correspond to Figure 4-(b). **Setup.** We utilize the Skip-thought Vector to encode all the training and validation questions of the VQA dataset into the columns of \(\mathbf{A}\in\mathbf{R}^{4800\times 186027}\), and represent the given main question as \(\mathbf{b}\in\mathbf{R}^{4800}\). For generating our General and Yes/No BQ Datasets, we set \(\lambda=10^{-6}\) to ensure a better quality of BQs. We collect only the top 21 ranked General and Yes/No BQs, as similarity scores beyond this limit are insignificant, and use them to create our GBQD and YNBQD. 
Since many state-of-the-art VQA models are trained under the assumption of a maximum of 26 input words, we divide the 21 top-ranked BQs into seven consecutive partitions, _i.e._, \(21=3*7\), for robustness analysis, as shown in Table 2 for GBQD and Table 3 for YNBQD. Note that each MQ with three BQs contains a total number of words equal to or less than 26, under this setting. **BQ Generation by Popular Text Evaluation Metrics.** In this subsection, we compare the performance of the proposed _LASSO_-based ranking method with the non-_LASSO_-based ranking methods for generating BQs of a given MQ. Specifically, we consider seven popular sentence evaluation metrics [5, 47, 64, 76], including BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE, CIDEr, and METEOR, which are commonly used to measure the similarity score between MQ and BQs. We build a general basic question dataset for each metric following the setup for building the General Basic Question Dataset (GBQD). **Results and Analysis.** Next, we will present our experimental results and robustness analysis. **(i) Are the rankings of BQs effective?** We divide the top 21 ranked BQs into seven partitions, each containing three top-ranked BQs, and observe that the accuracy decreases from the first partition to the seventh partition (Figure 5-(a)-1). Additionally, the accuracy decrement increased from the first partition to the seventh (Figure 5-(a)-2), indicating that the similarity of BQs to the given MQ decreased from the first partition to the seventh (_i.e._, the noise level increased). These trends are also observed when we use the YNBQD dataset (Figure 5-(b)-1 and Figure 5-(b)-2). These results suggest that the rankings by the proposed _LASSO_-based ranking method are effective. However, the accuracy of the seven similarity metrics ((\(\{(BLEU_{1}...4,\ ROUGE,\ CIDEr,\ METEOR)\}\)) was much more random and less monotonous from the first partition to the seventh partition (Figure 6). This indicates that the added BQs based on these metrics represent much more noise than the ones ranked by the _LASSO_-based ranking method, significantly harming the accuracy of state-of-the-art VQA models. Hence, we conclude that the rankings by these seven sentence similarity metrics are not effective in this context. **(ii) Which VQA model is the most robust?** We classify the utilized state-of-the-art VQA models into two distinct groups:: attention-based and non-attention-based, as shown in Table 4. HAV, HAR, MUA, and MLB belong to the attention-based models, while LQI and MU are non-attention-based. Generally, based on Table 4, attention-based VQA \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & LQI & HAV & HAR & MU & MUA & MLB \\ \hline \(R_{score1}\) & 0.19 & **0.48** & 0.45 & 0.30 & 0.34 & 0.36 \\ \hline \(R_{score2}\) & 0.08 & 0.48 & **0.53** & 0.30 & 0.23 & 0.37 \\ \hline \end{tabular} \end{table} Table 4: This table shows the robustness scores, \(R_{score}\), of six state-of-the-art VQA models based on GBQD (\(R_{score}\)), YNBQD (\(R_{score2}\)) and VQA [4] dataset. LQI denotes LSTM Q+I, HAV denotes HieCoAtt (Alt,VGG19), HAR denotes HieCoAtt (Alt,Resnet200), MU denotes MUTAN without Attention, MUA denotes MUTAN with Attention and MLB denotes MLB with Attention. The \(R_{score}\) parameters are \((t,\ m)=(0.05,\ 20)\). models are more robust than non-attention-based ones. However, when we examine MU and MUa in Table 4 (\(R_{score2}\)), the non-attention-based model (MU) is more robust than the attention-based model (MUA). 
It is worth noting that the only difference between MU and MUa is the attention mechanism. Meanwhile, in Table 4 (\(R_{score1}\)), MUa is more robust than MU, indicating that the diversity of BQ candidates affects the robustness of attention-based VQA models in some cases. Ultimately, based on the results in Table 4, we conclude that HieCoAtt [51] is the most robust VQA model. The HieCoAtt model employs a co-attention mechanism that repeatedly exploits the text and image information to guide the attention mechanism. Fig. 5: The figure shows the “accuracy” and “accuracy decrement” of the six state-of-the-art pretrained VQA models evaluated on GBQD, YNBQD and VQA [4] datasets. These results are based on our proposed _LASSO_ BQ ranking method. Note that we divide the top 21 ranked GBQs into 7 partitions where each partition contains 3 ranked GBQs; this is in reference to (a)-1 and (a)-2. We also divide the top 21 ranked YNBQs into 7 partitions and each partition contains 3 ranked YNBQs; this is in reference to (b)-1 and (b)-2. BQs are acting as noise, so the partitions represent the noises ranked from the least noisy to the noisiest. That is, in this figure the first partition is the least noisy partition and so on. Because the plots are monotonously decreasing in accuracy, or, equivalently, monotonously increasing in accuracy decrement, the ranking is effective. In this figure, “First top 3” represents the first partition, “Second top 3” represents the second partition and so on. Fig. 6: This figure shows the accuracy of six state-of-the-art pretrained VQA models evaluated on the GBQD and VQA dataset by different BQ ranking methods, BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE, CIDEr and METEOR. In (a), the grey shade denotes BLEU-1, blue shade denotes BLEU-2, orange shade denotes BLEU-3, purple shade denotes BLEU-4 and green shade denotes ROUGE. In this figure, the definition of partitions are same as Figure 5. The original accuracy of the six VQA models can be referred to Table 2-(a), Table 2-(b), etc. To make the figure clear, we plot the results of CIDEr and METEOR in (b) and (c), respectively. Based on this figure and Figure 5 in our paper, our _LASSO_ ranking method performance is better than those seven ranking methods. each other, which enhances the robustness of VQA models [24, 51]. Our experimental results show that HieCoAtt is indeed the most robust VQA model, which motivates us to conduct further experiments on this model. **(iii) Could in-context learning through a chain of BQs improve the accuracy of the HieCoAtt model?** Table 4 shows that HieCoAtt is the most robust VQA model and was previously the state-of-the-art model in terms of accuracy [51]. These factors motivate us to conduct an extended experiment and analysis of this model. We propose a framework called Visual Question Answering by Basic Questions (VQABQ) to analyze the HieCoAtt VQA model using selected high-quality BQs, as shown in Figure 7. We use Algorithm 1 to select BQs with good quality based on a threshold-based criterion. In our proposed BQD, each MQ has 21 corresponding BQs with scores and these scores are all between \([0-1]\) with the following order: \[score1\geq score2\geq...\geq score21, \tag{10}\] where we define three thresholds (\(s1\), \(s2\), and \(s3\)) for the selection process and only consider the top three ranked BQs. 
We compute the averages (\(avg\)) and standard deviations (\(std\)) for \(score1\), \(score2/score1\), and \(score3/score2\) (refer to Table 5), and use \(avg\pm std\) as the initial estimation of the thresholds. We find that when \(s1=0.60\), \(s2=0.58\), and \(s3=0.41\), we obtain the BQs that most improve the accuracy of the HieCoAtt VQA model with the MQ-BQs direct concatenation method. However, Table 6 shows that only around 3.16% of MQs benefit from BQs, with 96.84% of testing questions unable to find proper BQs that improve the accuracy of the HieCoAtt model. Despite this, based on Table 7, our method still improves the performance of the HieCoAtt model, increasing accuracy from 60.32% to 60.34%, and answering approximately 49 more questions correctly than the original HieCoAtt VQA model [51]. Based on these results, we believe that BQs of sufficiently good quality can help increase the accuracy of the HieCoAtt VQA model using the direct concatenation method. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(0\) BQ & \(1\) BQ & \(2\) BQ & \(3\) BQ \\ & (96.84\%) & (3.07\%) & (0.09\%) & (0.00\%) \\ \hline \# Q & 236570 & 7512 & 211 & 9 \\ \hline \end{tabular} \end{table} Table 6: The table shows how many BQs are appended. “\(X\) BQ” means \(X\) BQs are appended to the MQ, where \(X=0,1,2,3\), and “# Q” denotes the number of questions. Figure 7: Visual Question Answering by Basic Questions (VQABQ) pipeline. Note that in Module 1 all of the training and validation questions are only encoded by the Skip-Thought Question Encoder once for generating the Basic Question Matrix. That is, the next input of the Skip-Thought Question Encoder is only a new main question. Module 2 is a VQA model which we want to test, and it is the HieCoAtt VQA model in our case. Regarding the input question of the HieCoAtt model, it is the direct concatenation of a given main question with the corresponding selected basic questions based on the Threshold-based Criterion. “\(\oplus\)” denotes the direct concatenation of basic questions. **The reason why in-context learning with a chain of BQs helps the HieCoAtt model.** By incorporating the chain of BQs into the process of VQA, the performance of the HieCoAtt model can be enhanced. The reason is that BQs are designed to capture the basic semantic concepts of images and questions. By using the BQs in a chain, the model can learn in context and leverage the knowledge gained from the BQs to better understand the input. Furthermore, the use of BQs can overcome the limitations of the HieCoAtt model, which relies heavily on co-attention between the image and question modalities. Co-attention models can struggle with complex questions that require multiple steps to answer, but by incorporating BQs, the model can break down complex questions into simpler sub-questions that it can answer more easily. The chain of BQs also serves as a form of scaffolding, guiding the model towards the correct answer by providing intermediate steps that can help the model reason about the question more effectively. Overall, in-context learning with a chain of BQs can help the HieCoAtt model make more accurate predictions by providing additional information and guidance, especially for complex questions that may be difficult for the model to answer using co-attention alone. 
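For concreteness, the sketch below shows one plausible reading of the threshold-based criterion behind Algorithm 1, in which each successive BQ is appended only if its (relative) score clears the corresponding threshold; the exact rule in Algorithm 1 may differ, and the function names are ours.

```python
def select_basic_questions(bqs, scores, s1=0.60, s2=0.58, s3=0.41):
    """One plausible threshold-based selection rule: append BQ1 only if score1 > s1,
    BQ2 only if additionally score2/score1 > s2, and BQ3 only if additionally
    score3/score2 > s3 (scores are sorted as in Eq. (10))."""
    selected = []
    if scores[0] > s1:
        selected.append(bqs[0])
        if scores[1] / scores[0] > s2:
            selected.append(bqs[1])
            if scores[2] / scores[1] > s3:
                selected.append(bqs[2])
    return selected

def augment_main_question(mq, bqs, scores):
    """Direct concatenation of the MQ with the selected BQs, as in the VQABQ pipeline."""
    return " ".join([mq] + select_basic_questions(bqs, scores))
```

Under this reading, an MQ receives zero, one, two, or three BQs, which is consistent with the four cases reported in Table 6.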
**(iv) Is question sentence preprocessing necessary?** We propose that preprocessing of question sentences is essential for our proposed _LASSO_-based ranking method. For convenience, we exploit the same HieCoAtt model to demonstrate the claim. In the _Methodology_ section, we preprocess the question sentences before the sentence embedding. Without question sentence preprocessing, the _LASSO_-based ranking method generates random ranking results. As illustrated in Figure 8, the ranking result jumps randomly due to the lack of question sentence preprocessing. If the proposed method were functioning correctly, the trend of the ranking result would be monotonic, as seen in Figure 5. Therefore, question sentence preprocessing is a necessary step for our proposed _LASSO_-based ranking method to work effectively. **(v) What are the pros and cons of each metric?** In order to compare our proposed _LASSO_-based BQ ranking method with other methods, we conduct BQ ranking experiments using seven text similarity metrics on the same BQ candidate dataset. While the performance of these metrics is not satisfactory, they are still used in various works [14; 38; 60; 77; 82] due to their simple implementation. On the other hand, despite its simplicity, our _LASSO_-based ranking method shows quite effective performance. It should be noted that in practice, we will use our proposed datasets directly to test the robustness of VQA models without re-running the _LASSO_-based ranking method, so the computational complexity of the _LASSO_-based ranking method is not an issue in this case. \begin{table} \begin{tabular}{c|c} \hline \hline \multicolumn{2}{c}{HieCoAtt (Alt,VGG19)} \\ \hline \hline (s1, s2, s3) & (test-dev-acc, Other, Num, Y/N) \\ \hline (0.60, 0.58, 0.41) & (60.49, 49.12, 38.43, 79.65) \\ \hline \hline (s1, s2, s3) & (test-std-acc, Other, Num, Y/N) \\ \hline (0.60, 0.58, 0.41) & (60.34, 49.16, 36.50, 79.49) \\ \hline \hline \end{tabular} \end{table} Table 7: Evaluation results of the HieCoAtt (Alt,VGG19) model improved by Algorithm 1. Note that the original accuracy of the HieCoAtt (Alt,VGG19) VQA model for “test-dev-acc” is 60.48 and for “test-std-acc” is 60.32. Figure 8: This figure demonstrates how the ranking result jumps randomly. For convenience, we only take the most robust VQA model, HieCoAtt, to demonstrate the random jump. Without question sentence preprocessing, the proposed _LASSO_ ranking method is ineffective. That is, with question sentence preprocessing, the trend in this figure would be similar to Figure 5-(a)-1 and Figure 5-(b)-1. In this figure, "MQ only" represents the original query question and "First top 3" represents the first partition, "Second top 3" represents the second partition and so on. For the detailed numbers, please refer to Table 8. 
\begin{table} \begin{tabular}{c|c c c c|c} \hline \hline Task Type & \multicolumn{5}{c}{Open-Ended} \\ \hline Method & \multicolumn{5}{c}{HieCoAtt (Alt,VGG19)} \\ \hline Test Set & \multicolumn{4}{c|}{dev} & diff \\ \hline Partition & Other & Num & Y/N & All & All \\ \hline First-dev & 33.83 & 37.19 & 51.34 & **41.38** & **20.43** \\ Second-dev & 15.46 & 31.42 & 55.38 & **33.58** & **28.23** \\ Third-dev & 35.33 & 36.53 & 70.76 & **50.00** & **11.81** \\ Fourth-dev & 36.05 & 36.46 & 70.05 & **50.05** & **11.76** \\ Fifth-dev & 29.89 & 30.02 & 65.14 & **44.37** & **17.44** \\ Sixth-dev & 35.81 & 34.48 & 63.02 & **46.83** & **14.98** \\ Seventh-dev & 39.12 & 34.45 & 59.84 & **47.12** & **14.69** \\ \hline Original-dev & 51.77 & 38.65 & 79.70 & **61.81** & - \\ Original-std & 51.95 & 38.22 & 79.95 & **62.06** & - \\ \hline \hline \end{tabular} \end{table} Table 8: The HieCoAtt (Alt,VGG19) model evaluation results on BQD and the VQA dataset [4] without question sentence preprocessing. “-” indicates the results are not available, “-std” means that the VQA model is evaluated on the complete testing set of BQD and the VQA dataset, and “-dev” means that the VQA model is evaluated on the partial testing set of BQD and the VQA dataset. In addition, \(diff=Original_{dev_{All}}-X_{dev_{All}}\), where \(X\) is equal to “First”, “Second”, ..., “Seventh”. Figure 9: This figure shows the accuracy of six state-of-the-art pretrained VQA models evaluated on the YNBQD and VQA [4] dataset by different BQ ranking methods, BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE, CIDEr and METEOR. The result in this figure is consistent with the result in Figure 6. Note that in Figure 9-(a), the grey shade denotes BLEU-1, blue shade denotes BLEU-2, orange shade denotes BLEU-3, purple shade denotes BLEU-4 and green shade denotes ROUGE. For more detailed explanation, please refer to the _Extended experiments on the YNBQD dataset_ subsection. **(vi) What affects the quality of BQs?** In our model, the parameter \(\lambda\) plays a crucial role in determining the quality of BQs. After conducting experiments, we have discovered that values of \(\lambda\in[10^{-6},10^{-5}]\) produce satisfactory ranking performance, as demonstrated in Figure 5. We have provided some ranking examples using the _LASSO_-based ranking method in Figure 4 and Table 1 to showcase the quality of BQs when \(\lambda\) is set to \(10^{-6}\). **(vii) Extended experiments on the YNBQD dataset.** Although we have conducted BQ ranking experiments using seven different text similarity metrics, _BLEU\({}_{1\cdots 4}\)_, _ROUGE_, _CIDEr_, and _METEOR_, on the GBQD dataset, we have not yet done the same experiments on the YNBQD dataset. Therefore, we provide the experimental details in this subsection. We extend our experiments to include the YNBQD dataset and use the aforementioned seven metrics to rank the BQs. The definition of partitions in Figure 9 is the same as that in Figure 5. The original accuracy of the six VQA models is given in Table 3-(a) to 3-(f). For convenience, we plot the results of CIDEr and METEOR in Figure 9-(b) and Figure 9-(c), respectively. Based on Figures 9, 6, and 5, we conclude that the proposed _LASSO_-based ranking method outperforms the seven ranking methods on both the YNBQD and GBQD datasets. For detailed experiment results, please refer to Table 2, 3,..., 22. ## 5. 
Discussion In this section, we discuss our findings on the state-of-the-art VQA models among the six models that we tested, namely LSTM Q+I, HieCoAtt (Alt,VGG19), HieCoAtt (Alt,Resnet200), MUTAN without Attention, MUTAN with Attention, and MLB with Attention (see Table 4), in various aspects. ### In the sense of robustness Based on Table 4, we can see that the "HieCoAtt (Alt,VGG19)" model achieves the highest \(R_{score1}\) of 0.48, while the "HieCoAtt (Alt,Resnet200)" model has the highest \(R_{score2}\) of 0.53. Therefore, among our six tested VQA models, the "HieCoAtt (Alt,VGG19)" model is the state-of-the-art for GBQD in terms of robustness, while the "HieCoAtt (Alt,Resnet200)" model is the state-of-the-art for YNBQD. On the other hand, the "LSTM Q+I" model performs the worst with the lowest \(R_{score1}\) and \(R_{score2}\). Generally, we can conclude that attention-based VQA models are more robust than non-attention-based ones. ### In the sense of accuracy Based on the results in Table 2, we can see that the "MUTAN with Attention" model has the highest accuracy of 65.77%, while the "LSTM Q+I" model has the lowest accuracy of 58.18%. Thus, we can conclude that the "MUTAN with Attention" model is the state-of-the-art VQA model among the six models tested, in terms of accuracy. These findings also suggest that attention-based VQA models perform better in terms of accuracy compared to non-attention-based models. ## 6. Conclusion In this work, we introduce a novel approach consisting of several components, including the General Basic Question Dataset, Yes/No Basic Question Dataset, and a robustness measure (\(R_{score}\)) for assessing the robustness of VQA models. Our method is composed of two main modules, the Noise Generator and the VQA module. The former ranks the given BQs, while the latter takes the query, basic questions, and an image as input and generates a natural language answer to the query question about the image. The aim of our proposed method is to serve as a benchmark for aiding the community in developing more accurate _and_ robust VQA models. Furthermore, using our proposed General and Yes/No Basic Question Datasets and \(R_{score}\), we demonstrate that our _LASSO_-based BQ ranking method performs better than most popular text evaluation metrics. Finally, we have presented some new methods for evaluating the robustness of VQA models, which could inspire interesting future work on building robust _and_ accurate VQA models. ## Acknowledgments This work is supported by competitive research funding from the University of Amsterdam and King Abdullah University of Science and Technology (KAUST).
2307.15275
Computing Invariant Zeros of a Linear System Using State-Space Realization
It is well known that zeros and poles of a single-input, single-output system in the transfer function form are the roots of the transfer function's numerator and the denominator polynomial, respectively. However, in the state-space form, where the poles are a subset of the eigenvalue of the dynamics matrix and thus can be computed by solving an eigenvalue problem, the computation of zeros is a non-trivial problem. This paper presents a realization of a linear system that allows the computation of invariant zeros by solving a simple eigenvalue problem. The result is valid for square multi-input, multi-output (MIMO) systems, is unaffected by lack of observability or controllability, and is easily extended to wide MIMO systems. Finally, the paper illuminates the connection between the zero-subspace form and the normal form to conclude that zeros are the poles of the system's zero dynamics
Jhon Manuel Portella Delgado, Ankit Goel
2023-07-28T02:55:36Z
http://arxiv.org/abs/2307.15275v2
# Computing Invariant Zeros of a Linear System ###### Abstract It is well known that zeros and poles of a single-input, single-output system in the transfer function form are the roots of the transfer function's numerator and the denominator polynomial, respectively. However, in the state-space form, where the poles are a subset of the eigenvalue of the dynamics matrix and thus can be computed by solving an eigenvalue problem, the computation of zeros is a non-trivial problem. This paper presents a realization of a linear system that allows the computation of invariant zeros by solving a simple eigenvalue problem. The result is valid for square multi-input, multi-output (MIMO) systems, is unaffected by lack of observability or controllability, and is easily extended to wide MIMO systems. Finally, the paper illuminates the connection between the zero-subspace form and the normal form to conclude that _zeros are the poles of the system's zero dynamics_. ## I Introduction Zeros are fundamental in the study of systems and control theory. While the poles affect the system stability, transients, and convergence rate, zeros affect the undershoot, overshoot, and zero crossings [1, 2, 3]. Furthermore, nonminimum phase zeros, which are zeros in the open-right-half-plane, limit performance and bandwidth due to limited gain margin, exacerbate the tradeoff between the robustness and achievable performance of a feedback control system, and prevent input-output decoupling [4, 5, 6]. Precise knowledge of zeros is thus crucial in the design of reliable control and estimation systems. In the transfer function representation of single-input, single-output (SISO) linear systems, zeros and poles are the roots of the numerator and the denominator polynomials, respectively. In the state-space form of a SISO as well as a MIMO linear system, the system's poles are a subset of the eigenvalues of the dynamics matrix. However, the zeros are not readily apparent in the state-space form. Invariant zeros of a MIMO system with a state-space realization \((A,B,C,D)\), which are the complex numbers for which the rank of the Rosenbrock system matrix \[\mathcal{Z}(\lambda)=\begin{bmatrix}\lambda I-A&B\\ C&-D\end{bmatrix}, \tag{1}\] drops, are a subset of the generalized eigenvalues of the Rosenbrock system matrix [7, 8, 9]. The generalized eigenvalues of the Rosenbrock system matrix can be computed by decomposing it into the generalized Schur form as shown in [10, 11, 12, 13]. However, this approaches yields extraneous zeros, which are removed heuristically. Alternatively, since zeros are invariant under output feedback and the closed-loop poles approach the zeros of the system under high-gain output feedback, the zeros of \((A,B,C,D)\) are a subset of the eigenvalues of \(\lim_{K\to\infty}A+BK(I-DK)^{-1}C\)[14, 15]. Similar to the approach based on the Schur decomposition, this approach also yields extraneous zeros. Furthermore, both approaches are computationally expensive. In this paper, we present a technique to compute the invariant zeros of a MIMO system that reduces the problem to an eigenvalue problem instead of a generalized eigenvalue problem. Since the paper's main result does not depend on the minimality of the realization, the computed zeros are indeed the invariant zeros of the state-space realization. This technique is motivated by and is closely related to the normal form of a dynamic system [16, 17]. 
Specifically, we construct a diffeomorphism that isolates the zeros in a partition of the transformed dynamics matrix. Next, we show that the zeros of the system are precisely the eigenvalues of this partition. Finally, we show that the partitioned system is in fact the zero dynamics of the system. This observation is stated in [17, p. 514], however, the numerator polynomial of the transfer function is used to construct the state-space realization of the zero dynamics. In contrast, in this paper, we construct the zero dynamics as well as compute the zeros using the system's state-space realization, without requiring its transfer function. A similar geometric approach, based on differential geometry, is described in [18, 19] to compute the invariant zeros of the system by solving an eigenvalue problem. In contrast, in this paper, we construct the eigenvalue problem, whose solution provides the invariant zeros, by constructing a simple state transformation matrix. Furthermore, we present a simplified proof that does not require differential geometry concepts and is instead based on simple algebra. Although all results presented in the paper are applicable to square MIMO systems as well, due to page limits, we restrict the proofs to the case of SISO systems. The technique to compute the zeros can be easily extended to wide MIMO systems as shown in an example in Section VI. However, the technique does not work in the case of tall MIMO systems since the construction of the zero-subspace form yields a singular transformation matrix. Ad-hoc techniques such as modifying the rows of the transformation matrix to make it nonsingular yield mixed and inconclusive results. The paper thus does not consider the case of tall MIMO systems. The paper is organized as follows. Section II presents the notation used in this paper, Section III introduces the zero-subspace form of a strictly proper linear system, Section IV presents and proves the main result of the paper, Section V shows the connection of the zero-subspace form with the normal form, and Section VI presents numerical examples to confirm the main result of this paper. Finally, the paper concludes with a discussion in Section VII. ## II Notation Let \(A\in\mathbb{R}^{n\times m}.\) Then, \(A_{[i,j]}\) is the matrix obtained by removing the \(i\)th row and \(j\)th column of \(A.\) Note that, if \(i=0,\) then only the \(j\)th column is removed. Similarly, if \(j=0,\) then only the \(i\)th row is removed. That is, \(A_{[0,0]}=A.\) The set of integers between \(n\) and \(m,\) where \(n\leq m,\) that is, \(\{n,n+1,\ldots,m\},\) is denoted by \(\{\iota\}_{n}^{m}.\)\(0_{n\times m}\) denotes the \(n\times m\) zero matrix and \(I_{n}\) denotes the \(n\times n\) identity matrix. ## III zero-subspace form This section introduces the _zero-subspace form_ of a linear system. The zero-subspace form is a realization in which the zeros of the system are the eigenvalues of a partition of the dynamics matrix. Consider a linear system \[\lambda x =Ax+Bu, \tag{2}\] \[y =Cx, \tag{3}\] where \(x\in\mathbb{R}^{l_{x}}\) is the state, \(u\in\mathbb{R}^{l_{u}}\) is the input, \(y\in\mathbb{R}^{l_{y}}\) is the output, and the operator \(\lambda\) is the time-derivative operator or the forward-shift operator. 
The relative degree of \(y_{i}\) is denoted by \(\rho_{i},\) and the relative degree of \(y,\) defined as \(\sum_{i}^{l_{y}}\rho_{i},\) is denoted by \(\rho.\) Note that (2)-(3) is a strictly proper system since \(D=0.\) Although the main result presented in this paper requires \(\rho>0,\) that is, direct feedthrough term \(D=0,\) it is not a restrictive condition since, without affecting the zeros of the system, the dynamic extension of the system with an additional pole renders the direct feedthrough term \(D\) of the augmented system zero. Example VI.2 considers the case of an exactly proper system. Due to substantially lengthier and more complex proofs in the case of multi-input, multi-output (MIMO) linear systems, we restrict our attention to single-input, single-output (SISO) linear systems in the following. However, the main result of this paper and the procedure to transform a realization of a MIMO system into its zero-subspace form is the same as that of a SISO system. First, we construct a diffeomorphism to transform a realization into the zero-subspace form, as shown below. Let \(\overline{B}\in\mathbb{R}^{l_{x}-1\times l_{x}}\) be a full rank matrix such that \(\overline{B}B=0.\) Note that \(B\in\mathcal{N}(\overline{B}),\) and thus, in MATLAB, \(\overline{B}\) can be computed using null(B'). Define \[\overline{C}\stackrel{{\triangle}}{{=}}\begin{bmatrix}C\\ CA\\ \vdots\\ CA^{\rho-1}\end{bmatrix}\in\mathbb{R}^{\rho\times l_{x}}, \tag{4}\] and \[\overline{T}\stackrel{{\triangle}}{{=}}\begin{bmatrix}\overline{B }\\ \overline{C}\end{bmatrix}\in\mathbb{R}^{(l_{x}-1+\rho)\times l_{x}}. \tag{5}\] **Proposition III.1.** Let \(\overline{T}\) be given by (5). Then, \(\mathrm{rank}\ \overline{T}=l_{x}.\) **Proof.** Note that \(\overline{B}\) has \(l_{x}-1\) linearly independent rows. Since the relative degree of \(y\) is \(\rho,\) it follows that, for \(i\in\{\iota\}_{i=1}^{\rho-1},\)\(CA^{i-1}B=0,\) which implies that each element of \(\{C,CA,\ldots,CA^{\rho-2}\}\) is in the row range space of \(\overline{B}.\) Furthermore, since \(CA^{\rho-1}B\neq 0,\) it follows that \(CA^{\rho-1}\) is not linearly dependent on the rows of \(\overline{B},\) thus implying that \(\mathrm{rank}\ \overline{T}=l_{x}.\) Next, define \(l_{z}\stackrel{{\triangle}}{{=}}l_{x}-\rho\) and \[T\stackrel{{\triangle}}{{=}}\begin{bmatrix}B_{\mathsf{z}}\\ \overline{C}\end{bmatrix}\in\mathbb{R}^{l_{x}\times l_{x}}, \tag{6}\] where \(B_{\mathsf{z}}\in\mathbb{R}^{l_{z}\times l_{x}}\) is chosen such that rows of \(B_{\mathsf{z}}\) and rows of \(\overline{C}\) are linearly independent. Consequently, \(T\) is full-rank and thus invertible. Note that \(\mathcal{R}(B_{\mathsf{z}}^{\mathrm{T}})\subseteq\mathcal{R}(\overline{B}^{ \mathrm{T}}).\) Using the diffeomorphism \(T,\) transform the realization \((A,B,C)\) into the zero-subspace form \((\mathcal{A},\mathcal{B},\mathcal{C}),\) which implies that \[\mathcal{A}\stackrel{{\triangle}}{{=}}TAT^{-1},\quad\mathcal{B} \stackrel{{\triangle}}{{=}}TB,\quad\mathcal{C}\stackrel{{ \triangle}}{{=}}CT^{-1}. \tag{7}\] Next, define \(\eta\in\mathbb{R}^{l_{z}}\) as the first \(l_{z}\) components of \(Tx\) and \(\xi\in\mathbb{R}^{\rho}\) as the rest of \(Tx,\) that is, \[\begin{bmatrix}\eta\\ \xi\end{bmatrix}=Tx. \tag{8}\] Note that since \(\eta\in\mathcal{R}(B_{\mathsf{z}}),\) we call \(\mathcal{R}(B_{\mathsf{z}})\) the _zero subspace_ of the system. 
Next, partition \(\mathcal{A}\) as \[\mathcal{A}=\begin{bmatrix}\mathcal{A}_{\eta}&\mathcal{A}_{\eta\xi}\\ \mathcal{A}_{\xi\eta}&\mathcal{A}_{\xi}\end{bmatrix}, \tag{9}\] where \(\mathcal{A}_{\eta}\) is the \(l_{z}\times l_{z}\) upper-left block, \(\mathcal{A}_{\eta\xi}\) is the \(l_{z}\times\rho\) upper-right block, \(\mathcal{A}_{\xi\eta}\) is the \(\rho\times l_{z}\) lower-left block, and \(\mathcal{A}_{\xi}\) is the \(\rho\times\rho\) lower-right block of \(\mathcal{A}.\) Finally, define \(S\stackrel{{\triangle}}{{=}}T^{-1}\) and partition \(S\) as \(S=\begin{bmatrix}S_{\eta}&S_{\xi}\end{bmatrix},\) where \(S_{\eta}\in\mathbb{R}^{l_{x}\times l_{z}}\) contains the first \(l_{z}\) columns of \(S\) and \(S_{\xi}\in\mathbb{R}^{l_{x}\times\rho}\) contains the last \(\rho\) columns of \(S.\) Substituting \(T\) and \(S\) in (7) yields \[\mathcal{A}_{\eta} =B_{\mathsf{z}}AS_{\eta}, \tag{10}\] \[\mathcal{A}_{\eta\xi} =B_{\mathsf{z}}AS_{\xi},\] (11) \[\mathcal{A}_{\xi\eta} =\overline{C}AS_{\eta},\] (12) \[\mathcal{A}_{\xi} =\overline{C}AS_{\xi}. \tag{13}\] As will be shown in the next section, the zeros of (2)-(3) are the eigenvalues of \(\mathcal{A}_{\eta}\) given by (10). ### _Sparse Structure of the zero-subspace form_ The following two facts about the zero-subspace form show that the matrices \(\mathcal{A}_{\xi\eta},\,\mathcal{A}_{\xi},\,\mathcal{B},\) and \(\mathcal{C}\) have _sparse_ structure. **Proposition III.2.** Let \(\mathcal{A}_{\xi\eta}\) and \(\mathcal{A}_{\xi}\) be defined by (12) and (13). Then, \[A_{\xi\eta} =\begin{bmatrix}0_{(\rho-1)\times l_{x}}\\ CA^{\rho}S_{\eta}\end{bmatrix}, \tag{14}\] \[A_{\xi} =\begin{bmatrix}[0_{(\rho-1)\times 1}&I_{\rho-1}]\\ CA^{\rho}S_{\xi}\end{bmatrix}. \tag{15}\] **Proof.** Since \(\begin{bmatrix}B_{\underline{\pi}}\\ \overline{C}\end{bmatrix}\begin{bmatrix}S_{\eta}&S_{\xi}\end{bmatrix}=I_{l_{x}}\), it follows that \[\overline{C}S_{\eta} =\begin{bmatrix}C\\ \vdots\\ CA^{\rho-1}\end{bmatrix}S_{\eta}=0_{\rho\times l_{x}},\] \[\overline{C}S_{\xi} =\begin{bmatrix}C\\ \vdots\\ CA^{\rho-1}\end{bmatrix}S_{\xi}=I_{\rho},\] which implies that, for each \(j\in\{\iota\}_{i=0}^{\rho-1},\,CA^{j}S_{\eta}=0\) and \(CA^{j}S_{\xi}=e_{j+1},\) where \(e_{j}\) is the \(j\)th row of \(I_{\rho}\). Therefore, \[\mathcal{A}_{\xi\eta} =\overline{C}AS_{\eta}\] \[\mathcal{A}_{\xi} =\overline{C}AS_{\xi} =\begin{bmatrix}CAS_{\xi}\\ \vdots\\ CA^{\rho-1}S_{\xi}\end{bmatrix}=\begin{bmatrix}[0_{(\rho-1)\times 1}&I_{\rho-1}]\\ CA^{\rho}S_{\xi}\end{bmatrix}.\] **Proposition III.3.** Let \(\mathcal{B}\) and \(\mathcal{C}\) be defined by (7). Then, \[\mathcal{B}=\begin{bmatrix}0_{(l_{x}-1)\times 1}\\ CA^{\rho-1}B\end{bmatrix},\quad\mathcal{C}=\begin{bmatrix}0_{1\times l_{x}}&1&0_ {1\times(\rho-1)}\end{bmatrix}. 
\tag{16}\] **Proof.** Note that, for each \(j\in\{\iota\}_{\epsilon=0}^{\rho-1},\,CA^{j}B=0,\) which implies \[\mathcal{B}=TB=\begin{bmatrix}B_{\underline{\pi}}B\\ \overline{C}B\end{bmatrix}=\begin{bmatrix}0_{l_{x}\times l_{x}}\\ \begin{bmatrix}C\\ CA\\ CA^{\rho-1}\end{bmatrix}B\end{bmatrix}=\begin{bmatrix}0_{(l_{x}-1)\times 1}\\ CA^{\rho-1}B\end{bmatrix}.\] Next, since \(C\) is the \((l_{z}+1)\)th row of \(T,\) and \[C=\mathcal{C}T=\mathcal{C}\begin{bmatrix}B_{\underline{\pi}}\\ \overline{C}\end{bmatrix}=\mathcal{C}\begin{bmatrix}B_{\underline{\pi}}\\ C\\ CA\\ \vdots\\ CA^{\rho-1}\end{bmatrix},\] it follows that \(\mathcal{C}=\begin{bmatrix}0_{1\times l_{x}}&1&\cdots&0_{1\times\rho-1} \end{bmatrix}.\) In summary, the zero-subspace form of (2)-(3) is \[\dot{\eta} =A_{\eta}\eta+A_{\eta\xi}\xi, \tag{17}\] \[\dot{\xi} =A_{\xi\eta}\eta+A_{\xi}\xi+B_{\xi}u,\] (18) \[y =C_{\xi}\xi=\xi_{1}, \tag{19}\] where \[B_{\xi}\stackrel{{\triangle}}{{=}}\begin{bmatrix}0_{(\rho-1) \times 1}\\ CA^{\rho-1}B\end{bmatrix}\in\mathbb{R}^{\rho},\quad C_{\xi}\stackrel{{ \triangle}}{{=}}\begin{bmatrix}1&0_{1\times(\rho-1)}\end{bmatrix}\in \mathbb{R}^{1\times\rho}. \tag{20}\] ## IV Main Result This section presents the paper's main result, that is, the zeros of a SISO linear system are the eigenvalues of a partition of the dynamics matrix represented in its zero-subspace form. Specifically, Theorem IV.1 shows that the zeros of the system (2)-(3) are exactly the eigenvalues of \(A_{\eta}\). The following lemma appears in [20] and is used in the proof of Theorem IV.1. **Lemma IV.1.** Let \(A=\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix},\) where \(A_{11}\) is square and \(A_{22}\) is nonsingular. Then, \[\det A=\det\left(A_{11}-A_{12}A_{22}^{-1}A_{21}\right)\det A_{22}. \tag{21}\] The following two facts about the zero-subspace form are used in the proof of Theorem IV.1. **Proposition IV.1.** Let \(\mathcal{A}_{\xi\eta}\) be defined by (12). Then, \[\mathcal{A}_{\xi\eta_{[\rho,0]}}=0_{(\rho-1)\times l_{x}}. \tag{22}\] **Proof.** Note that \(\mathcal{A}_{\xi\eta}\) is given by (14). Removing the \(\rho\)th row of \(\mathcal{A}_{\xi\eta}\) yields (22). **Proposition IV.2.** Let \(\mathcal{A}_{\xi}\) be defined by (13). Then, for all \(s\in\mathbb{C},\) \[\det\left((sI_{\rho}-\mathcal{A}_{\xi})_{[\rho,1]}\right)=\pm 1. \tag{23}\] **Proof.** Note that \(\mathcal{A}_{\xi}\) is given by (15). Removing the \(\rho\)th row and the first column of \(\mathcal{A}_{\xi}\) yields \[(sI_{\rho}-\mathcal{A}_{\xi})_{[\rho,1]}=\begin{bmatrix}-1&0&\cdots&0&0\\ s&-1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&-1&0\\ 0&0&\cdots&s&-1\end{bmatrix},\] which implies (23). **Theorem IV.1.** Consider the system (2)-(3) and its zero-subspace form (17)-(19). Then, the zeros of the system (2)-(3) are the eigenvalues of \(\mathcal{A}_{\eta}\). **Proof.** Note that the zeros of the system (2)-(3) satisfy \[C\mathrm{adj}(sI-A)B=0.\] Since the zeros are invariant under state transformation, it follows that the zeros also satisfies \[\mathcal{C}\mathrm{adj}(sI-\mathcal{A})\mathcal{B}=0. 
\tag{24}\] Next, note that \[\mathcal{C} \mathrm{adj}(sI-\mathcal{A})\mathcal{B}\] \[= \left[0_{1\times l_{z}}\quad 1\quad 0_{1\times(\rho-1)}\right].\] \[\mathrm{adj}\left(\left[\begin{bmatrix}sI_{l_{z}}-\mathcal{A}_{ \eta}&-\mathcal{A}_{\eta\xi}\\ -\mathcal{A}_{\xi\eta}&sI_{\rho}-\mathcal{A}_{\xi}\end{bmatrix}\right)\begin{bmatrix} 0_{(l_{x}-1)\times 1}\\ CA^{\rho-1}B\end{bmatrix}\] \[= (-1)^{n}.\] \[\det\left(\begin{bmatrix}sI_{l_{z}}-\mathcal{A}_{\eta}&-\mathcal{A }_{\eta\xi_{[0,1]}}\\ -\mathcal{A}_{\xi\eta_{[\rho,0]}}&(sI_{\rho}-\mathcal{A}_{\xi})_{[\rho,1]} \end{bmatrix}\right)CA^{\rho-1}B\] \[= (-1)^{n}\det\left(sI_{l_{z}}-\mathcal{A}_{\eta}-\mathcal{A}_{\eta \xi_{[0,1]}}(sI_{\rho}-\mathcal{A}_{\xi})_{[\rho,1]}^{-1}\mathcal{A}_{\xi\eta_{ [\rho,0]}}\right).\] \[\qquad\det((sI_{\rho}-\mathcal{A}_{\xi})_{[\rho,1]})CA^{\rho-1}B,\] \[= (-1)^{n}\det\left(sI_{l_{z}}-\mathcal{A}_{\eta}\right)\det((sI_{ \rho}-\mathcal{A}_{\xi})_{[\rho,1]})CA^{\rho-1}B,\] where \(n\stackrel{{\triangle}}{{=}}2l_{x}-\rho+1\). Note that the last equality uses the fact that \(\mathcal{A}_{\xi\eta_{[\rho,0]}}=0\), which follows from Proposition IV.1. Since the relative degree of the system is \(\rho\), \(CA^{\rho-1}B\neq 0\). Next, it follows from Proposition IV.2 that, for all \(s\in\mathbb{C}\), \(\det((sI_{\rho}-\mathcal{A}_{\xi})_{[\rho,1]})=\pm 1\neq 0\). Therefore, (24) is satisfied if and only if \(\det(sI_{l_{z}}-\mathcal{A}_{\eta})=0,\) thus implying that the zeros of the system (2)-(3) are the eigenvalues of \(\mathcal{A}_{\eta}\). **Remark IV.1**.: Note that \(\mathcal{A}_{\eta}\) has \(l_{z}\) eigenvalues and thus (2)-(3) has \(l_{z}=l_{x}-\rho\) zeros, which is a well-known fact. **Remark IV.2**.: A simplistic proof of the zeros being the eigenvalues of a partition of the zero-subspace form can be motivated by the blocking property of zeros. Let \(z\) denote a zero of an asymptotically stable system. It follows that the input \(u(t)=e^{zt}u_{0}\) yields zero asymptotic output, that is, \(\lim_{t\to\infty}y(t)=0,\) which implies that \(\lim_{t\to\infty}\xi(t)=0.\) It follows from (18) that \[\lim_{t\to\infty}\mathcal{A}_{\xi\eta}\eta(t)+\mathcal{B}_{\xi}u(t)=0. \tag{25}\] Now, consider the case where \(\mathbb{R}(z)>0\). Since \(u(t)=e^{zt}u_{0}\) diverges, it follows that \(\eta(t)\) must diverge to satisfy (25). In fact, \(\eta(t)\) must grow exponentially as \(e^{zt}\) to satisfy (25). Since \(\xi(t)\to 0,\) it follows from (17) that \(\eta(t)\) grows exponentially if and only if \(z\in\text{spec}(A_{\eta}).\) **Theorem IV.2**.: Consider the square MIMO system (2), (3) and its zero-subspace form, given by (17)-(19). Then, the invariant zeros of the system (2), (3) are the eigenvalues of \(\mathcal{A}_{\eta}\), that is, \[\mathrm{izeros}\left(\left[\begin{array}{c|c}A&B\\ \hline C&0\end{array}\right]\right)=\mathrm{mspec}(\mathcal{A}_{\eta}),\] where \(\mathrm{mspec}(\mathcal{A}_{\eta})\) denotes the multispectrum of \(\mathcal{A}_{\eta}\), that is, the multiset consisting of the eigenvalues of \(\mathcal{A}_{\eta}\) including their algebraic multiplicity [21, p. 506]. **Proof.** The proof is omitted due to restrictions on allowable number of pages. ## V Normal Form of a Linear System This section shows the relation between the zero-subspace form (17)-(19) of a system and its normal form. In the nonlinear systems theory, the normal form is used to decompose a nonlinear system into its zero dynamics and the input-output feedback linearizable dynamics [16, 17]. 
This section shows that a system's normal form can also be used to deduce its zeros. To maintain consistent notation, the procedure to transform a multi-input, multi-output (MIMO), nonlinear affine system to its normal form is summarized in Appendix A. In the case of a SISO linear system, \(f(x)=Ax,\;g(x)=B,\) and \(h(x)=Cx,\) and thus it follows from (48), (49) that \[\alpha(x) =L_{f}^{\rho}h(x)=CA^{\rho}x, \tag{26}\] \[\beta(x) =L_{g}L_{f}^{\rho-1}h(x)=CA^{\rho-1}B. \tag{27}\] Furthermore, \[\psi(x)=\begin{bmatrix}C\\ CA\\ \vdots\\ CA^{\rho-1}\end{bmatrix}x. \tag{28}\] Note that \(\psi(x)=\overline{C}x,\) where \(\overline{C}\) is defined by (4). Finally, let \(\phi(x)=B_{\mathrm{n}}x,\) where \(B_{\mathrm{n}}\) is chosen to satisfy \(L_{g}\phi(x)=B_{\mathrm{n}}B=0\). Note that \[\begin{bmatrix}\eta\\ \xi\end{bmatrix}=\begin{bmatrix}\phi(x)\\ \psi(x)\end{bmatrix}=\begin{bmatrix}B_{\mathrm{n}}\\ \overline{C}\end{bmatrix}x, \tag{29}\] and thus \(x=R_{\eta}\eta+R_{\xi}\xi,\) where \[\begin{bmatrix}R_{\eta}&R_{\xi}\end{bmatrix}=\begin{bmatrix}B_{\mathrm{n}}\\ \overline{C}\end{bmatrix}^{-1}. \tag{30}\] Note that the state-transformation matrix in (29) is invertible. The proof is similar to the proof of Proposition III.1. The normal form of (2)-(3) is then \[\dot{\eta} =B_{\mathrm{n}}Ax=B_{\mathrm{n}}AR_{\eta}\eta+B_{\mathrm{n}}AR_{ \xi}\xi, \tag{31}\] \[\dot{\xi} =B_{\mathrm{c}}CA^{\rho}AR_{\eta}\eta+(A_{\mathrm{c}}+B_{\mathrm{c }}CA^{\rho}AR_{\xi})\xi+B_{\mathrm{c}}CA^{\rho-1}Bu,\] (32) \[y =\xi_{1}, \tag{33}\] where \[A_{\mathrm{c}}\stackrel{{\triangle}}{{=}}\begin{bmatrix}0&1&0& \cdots&0\\ 0&0&1&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&\vdots&\ldots&0&1\\ 0&\vdots&\ldots&0&0\end{bmatrix}\in\mathbb{R}^{\rho\times\rho},\quad B_{ \mathrm{c}}\stackrel{{\triangle}}{{=}}\begin{bmatrix}0\\ \vdots\\ 1\end{bmatrix}\in\mathbb{R}^{\rho}. \tag{34}\] Note that since \(B_{\mathrm{z}}\) and \(B_{\mathrm{n}}\) satisfy exactly the same conditions, it follows that \(\mathcal{R}(B_{\mathrm{z}})=\mathcal{R}(B_{\mathrm{n}})\) and therefore (17), (18) are same as (31), (32). In fact, if \(B_{\mathrm{n}}\) is chosen to be equal to \(B_{\mathrm{z}}\), then the zero-subspace form (17), (18) is exactly the normal form of (2)-(3). In the normal form, (31) is called the _zero dynamics_ of the system. Theorem IV.1 thus shows that _zeros are the eigenvalues of the dynamics matrix of the zero dynamics._ ## VI Numerical Examples This section presents several examples verifying the main result of this paper. These examples, listed in Table I, show that the technique presented in the paper to compute the zeros can be extended to exactly proper systems and nonsquare MIMO systems and is not affected by pole-zero cancellations. 
**Example VI.1**.: **[Strictly proper SISO system.]** Consider the system \[G(s)=\frac{s^{2}-9s+8}{s^{3}+11s^{2}+36s+36}.\] Note that \(\rho=1\) and \(\operatorname{zeros}(G)=\{1,8\}.\) A controllable canonical realization of \(G\) is \[A =\begin{bmatrix}0&1&0\\ 0&0&1\\ -36&-36&-11\end{bmatrix},\quad B=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix},\] \[C =\begin{bmatrix}8&-9&1\end{bmatrix}.\] Letting \({B_{\mathrm{z}}}^{1}=\begin{bmatrix}0&1&0\\ 1&0&0\end{bmatrix},\) it follows from (6) that \[T=\begin{bmatrix}B_{\mathrm{z}}\\ C\end{bmatrix}=\begin{bmatrix}0&1&0\\ 1&0&0\\ 8&-9&1\end{bmatrix},\] and thus \[\mathcal{A} =\begin{bmatrix}9&-8&1\\ 1&0&0\\ \hline-208&124&-20\end{bmatrix}\mathcal{B}=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix},\] \[\mathcal{C} =\begin{bmatrix}0&0&1\end{bmatrix}.\] Note that \(\mathcal{A}_{\eta}=\begin{bmatrix}9&-8\\ 1&0\end{bmatrix}\) and thus \(\operatorname{mspec}(\mathcal{A}_{\eta})=\{1,8\},\) which confirms Theorem IV.1. \(\diamondsuit\) **Example VI.2**.: **[SISO System with zero relative degree.]** Consider the system \[G(s)=\frac{s^{3}+21s^{2}+116s+96}{s^{3}+11s^{2}+38s+40}.\] Note that \(\rho=0\) and \(\operatorname{zeros}(G)=\{-1,-8,-12\}.\) Furthermore, \(l_{z}=3\) and thus \(\mathcal{A}_{\eta}\) is \(3\times 3,\) which implies that \(\mathcal{A}_{\eta}\) and the dynamics matrix of \(G\) are similar, that is, they have same eigenvalues. Therefore, in the case where \(\rho=0,\) eigenvalues of \(\mathcal{A}_{\eta}\) are the poles of \(G.\) Note that Theorem IV.1 is not applicable in this case since \(D\neq 0.\) However, consider the system \[\overline{G}(s)\stackrel{{\triangle}}{{=}}\frac{G(s)}{s}=\frac{s ^{3}+21s^{2}+116s+96}{s^{4}+11s^{3}+38s^{2}+40s}.\] Note that \(\overline{G}(s)\) has the same set of zeros as \(G(s),\) but its relative degree \(\rho=1.\) Therefore, Theorem IV.1 can be used to compute the zeros of \(\overline{G},\) and thus the zeros of \(G.\) A controllable canonical realization of \(\overline{G}\) is \[A =\begin{bmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&-40&-38&-11\end{bmatrix},\quad B=\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix},\] \[C =\begin{bmatrix}96&116&21&1\end{bmatrix}.\] Letting \(B_{\mathrm{z}}=\begin{bmatrix}0&1&0&0\\ 0&0&1&0\\ -1&0&0&0\end{bmatrix},\) it follows from (6) that \[T=\begin{bmatrix}B_{\mathrm{z}}\\ C\end{bmatrix}=\begin{bmatrix}0&1&0&0\\ 0&0&1&0\\ -1&0&0&0\\ 96&116&21&1\end{bmatrix},\] and thus \[\mathcal{A} =\begin{bmatrix}0&1&0&0\\ -116&-21&96&1\\ -1&0&0&0\\ \hline-1104&-132&960&10\end{bmatrix}\mathcal{B}=\begin{bmatrix}0\\ 0\\ 0\\ \hline 1\end{bmatrix},\] \[\mathcal{C} =\begin{bmatrix}0&0&0&1\end{bmatrix}.\] Note that \(\mathcal{A}_{\eta}=\begin{bmatrix}0&1&0\\ -116&-21&96\\ -1&0&0\end{bmatrix}\) and thus \(\operatorname{mspec}(\mathcal{A}_{\eta})=\{-12,-8,-1\},\) which confirms Theorem IV.1. 
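The construction in Examples VI.1-VI.2 is easy to reproduce numerically. The following is a minimal sketch for strictly proper SISO systems, assuming NumPy and SciPy; `scipy.linalg.null_space` plays the role of MATLAB's null(B'), and the particular choice of \(B_{\mathrm{z}}\) below (the first \(l_{x}-\rho\) null-space directions) is an assumption that may need to be adjusted in degenerate cases.

```python
import numpy as np
from scipy.linalg import null_space

def invariant_zeros_siso(A, B, C, tol=1e-12):
    """Invariant zeros of a strictly proper SISO system (A, B, C), computed as the
    eigenvalues of the upper-left (l_x - rho) x (l_x - rho) block of the
    zero-subspace form (Theorem IV.1)."""
    lx = A.shape[0]
    rho, M = 1, C                        # relative degree: smallest rho with C A^(rho-1) B != 0
    while abs((M @ B).item()) < tol:
        rho, M = rho + 1, M @ A
    Cbar = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(rho)])   # Eq. (4)
    Bz = null_space(B.T).T[: lx - rho]   # rows orthogonal to B, cf. null(B') in MATLAB
    T = np.vstack([Bz, Cbar])            # Eq. (6)
    Acal = T @ A @ np.linalg.inv(T)      # Eq. (7)
    A_eta = Acal[: lx - rho, : lx - rho]
    return np.linalg.eigvals(A_eta)

# Example VI.1: the zeros of (s^2 - 9s + 8)/(s^3 + 11s^2 + 36s + 36) are {1, 8}.
A = np.array([[0., 1., 0.], [0., 0., 1.], [-36., -36., -11.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[8., -9., 1.]])
print(np.sort(invariant_zeros_siso(A, B, C)))   # -> approximately [1. 8.]
```

The same sketch applied to Example VI.3 below returns the single zero at \(-5\) even though it cancels against a pole, since the construction never forms the transfer function.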
**Example VI.3**.: **[SISO System with pole-zero cancellation.]** Consider the system \[G(s)=\frac{s+5}{s^{3}+10s^{2}+31s+30}.\] Note that \(\rho=2,\)\(\operatorname{zeros}(G)=\{-5\},\) and \(\operatorname{poles}(G)=\{-2,-3,-5\}.\) A controllable canonical realization of \(G\) is \[A =\begin{bmatrix}0&1&0\\ 0&0&1\\ -30&-31&-10\end{bmatrix},\quad B=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix},\] \[C =\begin{bmatrix}5&1&0\end{bmatrix}.\] Letting \(B_{\mathrm{z}}=\begin{bmatrix}0&1&0\end{bmatrix},\) it follows from (6) that \[T=\begin{bmatrix}B_{\mathrm{z}}\\ C\\ CA\end{bmatrix}=\begin{bmatrix}0&1&0\\ 5&1&0\\ 0&5&1\end{bmatrix},\] and thus \[\mathcal{A} =\begin{bmatrix}-5&0&1\\ \hline 0&0&1\\ 0&-6&-5\end{bmatrix},\quad\mathcal{B}=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix},\] \[\mathcal{C} =\begin{bmatrix}0&1&0\end{bmatrix}.\] Note that \(\mathcal{A}_{\eta}=-5\) and thus \(\operatorname{mspec}(\mathcal{A}_{\eta})=\{-5\},\) which confirms Theorem IV.1. This example shows that Theorem IV.1 is unaffected by the state-space realization's lack of observability or controllability. \(\diamondsuit\) **Example VI.4**.: **[Square MIMO System.]** Consider the square MIMO system \[G(s)=\frac{64}{s^{3}+24s^{2}+176s+384}\begin{bmatrix}1&s+4\\ s-2&s-8\end{bmatrix},\] with two inputs and two outputs.
Note that \(\rho_{1}=2\) and \(\rho_{2}=2,\) and thus \(\rho=4.\) A realization of \(G\), computed with MATLAB's ss routine, is \[A =\begin{bmatrix}-24&-11&-6&0&0&0\\ 16&0&0&0&0&0\\ 0&4&0&0&0&0\\ 0&0&0&-24&-11&-6\\ 0&0&0&16&0&0\\ 0&0&0&0&4&0\end{bmatrix},\quad B=\begin{bmatrix}2&0\\ 0&0\\ 0&0\\ 0&4\\ 0&0\\ 0&0\end{bmatrix},\] \[C =\begin{bmatrix}0&0&0.5&0&1&1\\ 0&2&-1&0&1&-2\end{bmatrix}.\] Furthermore, \(\operatorname{zeros}\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right]=\{-1,0\},\) which is computed using MATLAB's tzero routine. Letting \(B_{\mathsf{z}}=\begin{bmatrix}0&0&1&0&0&0\\ 0&-1&0&0&0&0\end{bmatrix},\) it follows from (6) that \[T=\begin{bmatrix}B_{\mathsf{z}}\\ C_{1}\\ C_{1}A\\ C_{2}\\ C_{2}A\end{bmatrix}=\begin{bmatrix}0&0&1&0&0&0\\ 0&-1&0&0&0&0\\ 0&0&0.5&0&1&1\\ 0&2&0&16&4&0\\ 0&2&-1&0&1&-2\\ 32&-4&0&16&-8&0\end{bmatrix},\] and thus \[\mathcal{A} =\begin{bmatrix}0&-4&0&0&0&0\\ 0&-1&-4&0.5&-2&-0.5\\ \hline 0&0&0&1&0&0\\ 48&-38&-88&-21&4&1\\ 0&0&0&0&0&1\\ -144&268&-272&-6&-88&-26\end{bmatrix},\] \[\mathcal{B} =\begin{bmatrix}0&0\\ 0&0\\ 0&0\\ 0&64\\ 0&0\\ 64&64\end{bmatrix},\quad\mathcal{C} =\begin{bmatrix}0&0&1&0&0&0\\ 0&0&0&0&1&0\end{bmatrix}.\] Note that \(\mathcal{A}_{\eta}=\begin{bmatrix}0&-4\\ 0&-1\end{bmatrix}\) and thus \(\operatorname{mspec}(\mathcal{A}_{\eta})=\{-1,0\},\) which confirms that the invariant zeros of the system are the eigenvalues of \(\mathcal{A}_{\eta}.\) **Example VI.5**.: **[Wide MIMO System.]** Consider the wide MIMO system \[G(s)=\frac{1}{d(s)}\begin{bmatrix}s^{2}+s-2&0&s^{2}-2s+1\\ s^{2}-3s+2&s^{2}-1&s^{2}-1\end{bmatrix},\] where \(d(s)=s^{3}+2s^{2}-s-2,\) with three inputs and two outputs. Note that \(\rho_{1}=1\) and \(\rho_{2}=1,\) and thus \(\rho=2.\) A realization of \(G\), computed with MATLAB's ss routine, is \[A =\begin{bmatrix}0&0&2&0&0&0\\ 1&0&1&0&0&0\\ 0&1&-2&0&0&0\\ 0&0&0&0&0&2\\ 0&0&0&1&0&1\\ 0&0&0&0&1&-2\end{bmatrix},\] \[B =\begin{bmatrix}-2&0&1\\ 1&0&-2\\ 1&0&1\\ 2&-1&-1\\ -3&0&0\\ 1&1&1\end{bmatrix},\quad C =\begin{bmatrix}0&0&1&0&0&0\\ 0&0&0&0&0&1\end{bmatrix}.\] Note that \(\operatorname{zeros}\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right]=\{1,1\}.\) It follows from (6) that \[T=\begin{bmatrix}B_{\mathsf{z}}\\ C_{1}\\ C_{2}\end{bmatrix}=\begin{bmatrix}0.642&0.423&0.205&0.406&0.187&0.406\\ -0.532&-0.033&0.465&0.167&0.665&0.167\\ -0.110&-0.390&-0.670&0.426&0.146&0.426\\ -0.5&0.5&0&0.5&-0.5&0\\ 0&0&1&0&0&0\\ 0&0&0&0&0&1\end{bmatrix},\] where \(B_{\mathsf{z}}\) is computed with the MATLAB routine null(B'). Finally, the zero-subspace form of \(G\) is obtained from \(T\) as in the previous examples. Note that \(\operatorname{mspec}(\mathcal{A}_{\eta})=\{0,-0.5,1,1\},\) which suggests that the invariant zeros of the system are contained in \(\operatorname{mspec}(\mathcal{A}_{\eta}).\) To identify the invariant zeros in \(\operatorname{mspec}(\mathcal{A}_{\eta}),\) we use the fact that the rank of the associated Rosenbrock system matrix \(\mathcal{Z}(s)\) drops at the invariant zeros. Since \(\operatorname{rank}\mathcal{Z}(0)=\operatorname{rank}\mathcal{Z}(-0.5)=8,\) but \(\operatorname{rank}\mathcal{Z}(1)=7,\) it follows that the invariant zeros of the system are at \(z=\{1,1\},\) since the rank of the Rosenbrock system matrix drops only at \(z=1.\)
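The rank test used above to discard spurious candidates is straightforward to script. The sketch below is illustrative: `A`, `B`, `C`, `D`, and `candidates` stand for the realization and the candidate set \(\operatorname{mspec}(\mathcal{A}_{\eta})\) of a system such as the one in Example VI.5, and the normal rank is estimated at a random complex test frequency.

```python
import numpy as np

def rosenbrock_rank(A, B, C, D, s, tol=1e-8):
    """Rank of the Rosenbrock system matrix [[sI - A, B], [C, D]] at the point s."""
    n = A.shape[0]
    Z = np.block([[s * np.eye(n) - A, B], [C, D]])
    return np.linalg.matrix_rank(Z, tol=tol)

def confirmed_zeros(A, B, C, D, candidates, rng=np.random.default_rng(0)):
    """Keep only the candidate zeros (e.g., mspec(A_eta)) at which the rank drops."""
    s0 = rng.standard_normal() + 1j * rng.standard_normal()   # generic test frequency
    normal_rank = rosenbrock_rank(A, B, C, D, s0)
    return [z for z in candidates if rosenbrock_rank(A, B, C, D, z) < normal_rank]
```

For the data of Example VI.5 this procedure keeps only \(z=1\), consistent with the ranks reported above.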
## VII Conclusion This paper showed that similar to the poles computation, zeros of a state-space realization of a linear system can be computed by solving an eigenvalue problem instead of a generalized eigenvalue problem. By transforming the system into its zero-subspace form, it is shown that the zeros are the eigenvalues of a partition of the dynamics matrix represented in the zero-subspace form, which is also the dynamics matrix of the system's zero dynamics. Numerical examples validate the main result of the paper. Future research is focused on extending the main result of this paper to nonsquare MIMO systems. Although the technique is easily extended to the case of wide systems, where it provides spurious invariant zeros, which can be removed heuristically, it provides inconclusive results in tall systems. An alternative approach may be to consider random bordering to square the nonsquare system and thus compute invariant zeros of the MIMO system.
2308.14936
AutoProSAM: Automated Prompting SAM for 3D Multi-Organ Segmentation
Segment Anything Model (SAM) is one of the pioneering prompt-based foundation models for image segmentation and has been rapidly adopted for various medical imaging applications. However, in clinical settings, creating effective prompts is notably challenging and time-consuming, requiring the expertise of domain specialists such as physicians. This requirement significantly diminishes SAM's primary advantage - its interactive capability with end users - in medical applications. Moreover, recent studies have indicated that SAM, originally designed for 2D natural images, performs suboptimally on 3D medical image segmentation tasks. This subpar performance is attributed to the domain gaps between natural and medical images and the disparities in spatial arrangements between 2D and 3D images, particularly in multi-organ segmentation applications. To overcome these challenges, we present a novel technique termed AutoProSAM. This method automates 3D multi-organ CT-based segmentation by leveraging SAM's foundational model capabilities without relying on domain experts for prompts. The approach utilizes parameter-efficient adaptation techniques to adapt SAM for 3D medical imagery and incorporates an effective automatic prompt learning paradigm specific to this domain. By eliminating the need for manual prompts, it enhances SAM's capabilities for 3D medical image segmentation and achieves state-of-the-art (SOTA) performance in CT-based multi-organ segmentation tasks.
Chengyin Li, Prashant Khanduri, Yao Qiang, Rafi Ibn Sultan, Indrin Chetty, Dongxiao Zhu
2023-08-28T23:23:53Z
http://arxiv.org/abs/2308.14936v3
# Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation ###### Abstract The Segment Anything Model (SAM) has rapidly been adopted for segmenting a wide range of natural images. However, recent studies have indicated that SAM exhibits subpar performance on 3D medical image segmentation tasks. In addition to the domain gaps between natural and medical images, disparities in the spatial arrangement between 2D and 3D images, the substantial computational burden imposed by powerful GPU servers, and the time-consuming manual prompt generation impede the extension of SAM to a broader spectrum of medical image segmentation applications. To address these challenges, in this work, we introduce a novel method, AutoSAM Adapter, designed specifically for 3D multi-organ CT-based segmentation. We employ parameter-efficient adaptation techniques in developing an automatic prompt learning paradigm to facilitate the transformation of the SAM model's capabilities to 3D medical image segmentation, eliminating the need for manually generated prompts. Furthermore, we effectively transfer the acquired knowledge of the AutoSAM Adapter to other lightweight models specifically tailored for 3D medical image analysis, achieving state-of-the-art (SOTA) performance on medical image segmentation tasks. Through extensive experimental evaluation, we demonstrate the AutoSAM Adapter as a critical foundation for effectively leveraging the emerging ability of foundation models in 2D natural image segmentation for 3D medical image segmentation. ## Introduction Recently, computer vision foundation models like the Segment Anything Model (SAM) have further pushed the frontiers of image segmentation [17]. SAM has demonstrated impressive performance and generalizability on a variety of semantic segmentation tasks [14], bringing new promise to medical image segmentation where the current approaches are limited by the quantity and quality of the segmentation masks. In contrast to existing custom-designed transformer models, such as UNETR [13], SwinUNETR [12], and FocalUNETR [11], that are trained only with a few patient samples and masks, foundation models including SAM are trained with billions of images and millions of masks. Such large-sized foundation models have also been observed to generalize to medical image segmentation tasks but with relatively worse performance compared to the state-of-the-art (SOTA) models in medical image segmentation [15]. Recent efforts have attempted to extend the success of SAM to medical image segmentation tasks including [14, 16, 15, 17, 18]. However, the demonstrated performance has exhibited reduced precision and stability, particularly in more intricate segmentation tasks characterized by smaller sizes, irregular shapes, and lower contrast properties of medical images in comparison to natural images [15]. Adapting the original SAM architecture, which is rooted in 2D natural images, to effectively harness the 3D spatial information inherent in volumetric medical data poses a significant challenge. Novel approaches must be devised to bridge the gap between natural and medical image segmentation tasks, opening doors for the development of cutting-edge segmentation techniques. A few substantial issues (please see Fig. 
1) that need to be addressed in developing a SAM-based medical image segmentation framework are: (A) the substantial domain disparities between natural and medical images (Fig. 1(A)), (B) extracting 3D spatial information from volumetric medical images effectively (Fig. 1(B)), and (C) the high computational demands even during inference (Fig. 1(C)).

Figure 1: Challenges of using SAM for medical image segmentation, (A) T-SNE plot of embeddings encoded by SAM's image encoder for medical image datasets AMOS [11], and BTCV [12], and for natural image datasets ADE20K [18] and COCO [11], (B) 2D image vs 3D volumetric input, and (C) heavy vs lightweight computing requirements.

Furthermore, SAM's reliance on labor-intensive manually generated prompts [14, 15] hampers its successful application, particularly in multi-organ medical image segmentation tasks. As healthcare becomes increasingly patient-centered and portable imaging devices like Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) become more accessible, point-of-care tests (POCT) hold significant potential to enhance treatment effectiveness and efficiency by providing diagnoses at the patient's location. Especially in time-sensitive scenarios, POCT can substantially improve diagnosis and treatment processes, resulting in smoother and more efficient experiences for both patients and caregivers. Notably, portable 3D medical image segmentation techniques drive the functionality of POCTs, demanding the development of highly compressed models without compromising the segmentation performance. To address the above issues, we introduce a novel AutoSAM Adapter method for a transition of SAM from 2D to 3D for medical image segmentation. Initially, we design intricate modifications for the image encoder at the input level, enabling the original 2D transformer to adeptly accommodate volumetric inputs while optimizing the reusability of pre-trained weights with a parameter-efficient adaptation method. Subsequently, at the prompt encoder level, we design an automatic prompt encoder module that takes the extracted feature maps from the previous image encoder as input and automatically learns the required prompts for the following mask decoder. This design effectively removes the time-consuming manual prompt generation process, especially for multi-organ medical image segmentation tasks. Additionally, we prioritize a lightweight design for the mask decoder at the output level, emphasizing multi-layer aggregation. Through extensive experimentation on CT-based multi-organ segmentation datasets, inclusive of comprehensive comparisons with state-of-the-art approaches including nn-UNet [21], as well as recent adapters in the field, our results exhibit a significant performance improvement over existing techniques. Finally, we apply knowledge distillation (KD) to transfer the learned knowledge from the AutoSAM Adapter to other lightweight models like SwinUNETR [16] for the efficiency-aware POCT use scenario. The main contributions of this work are summarized below.

* To tackle the domain gap between 2D and 3D inputs, we introduce a 3D adapter to extract spatial information for volumetric segmentation in medical images.
* To the best of our knowledge, this is the first work to adapt the SAM model for 3D-based multi-organ segmentation that can automatically learn prompts without the laborious manual prompting process.
* To facilitate mobile-friendly use scenarios for POCT, we employ KD to train lightweight models tailored for point-of-care medical image segmentation applications. * Extensive experiments and analysis on the AMOS and BTCV CT-based multi-organ segmentation datasets demonstrate that the proposed AutoSAM Adapter and its lightweight version achieve superior performance on medical image segmentation tasks compared to SOTA. ## Related Work Foundation Computer Vision Models.With advancements in deep learning models, most contemporary vision frameworks adhere to the pre-training and fine-tuning paradigm [17]. Recently, computer vision researchers have shown substantial interest in large and adaptable foundational models, capitalizing on pre-training techniques such as self-supervised learning [18], contrastive learning [19], and language-vision pre-training [10], among others. Notably, the SAM model [13], recently pre-trained on a dataset of over 11 million images, has emerged as a versatile foundational model for natural image segmentation. SAM demonstrates impressive zero-shot capabilities in segmenting diverse subjects in real-world environments, using an interactive and prompt-driven approach. Additionally, SEEM [15], another contemporaneous effort to SAM, introduces a more comprehensive prompting scheme to facilitate semantic-aware open-set segmentation. Furthermore, DINOv2 [1] focuses on scaling up the pre-training of a ViT model in terms of data and model size. This approach aims to generate versatile visual features that simplify the fine-tuning of the downstream tasks. Parameter-efficient Model Fine-tuning.Given the extensive utilization of foundational models, the concept of parameter-efficient fine-tuning has garnered significant attention. Existing methods for efficient fine-tuning can be categorized into three groups [18]. Additional-based methods that involve incorporating lightweight adapters [17, 19] or prompts [16, 15] into the original model, with the sole focus on adjusting these parameters; Specification-based methods [15, 14] that concentrate on selecting a small subset of the original parameters for tuning; and reparameterization-based methods [13] that leverage low-rank matrices to approximate parameter updates. In recent times, a few researchers have extended pre-trained image models to encompass video comprehension [17] or volumetric segmentation [19]. Nevertheless, these methods treat the additional dimension as a "word group" and employ specialized modules to aggregate information along the word dimension. In contrast, in our work, we consider all three dimensions as isotropic and directly adapt the trained transformer block to capture 3D patterns. SAM-based Medical Image Segmentation.This line of work primarily focuses on enhancing SAM through fine-tuning for specific segmentation datasets, aiming to mitigate the noticeable performance drop of SAM on medical images. MedSAM [15] specifically concentrates on refining the SAM decoder by employing prompts generated from label masks across more than 30 medical image datasets. The outcome demonstrates improved performance compared to zero-shot predictions using prompts. Zhang et al. [14] opt for a low-rank fine-tuning approach, focusing on the SAM encoder. By combining this strategy with SAM decoder training, they tailor SAM for abdominal segmentation tasks. 
Junde Wu et al.[21] follow a distinct path, wherein they freeze SAM's weights and incorporate a trainable adaptation module within SAM to mitigate the need for complete re-training. Despite some progress, these methods either ignore the 3D pattern of medical images or require a laborious manual prompt generation process, which restricts the full potential of SAM from being realized in the medical image segmentation domain. Lightweight Models for POCT.Due to the limited computation power in POCT usage scenarios, directly using the SAM model for medical segmentation is not feasible. Instead of using the tailored network architectures like Mobile ViT [12], MobileNet [10], and EfficientNet [13], compressing large models during training stages into lightweight ones is a promising strategy. One popular approach for in-training model compression is knowledge distillation (KD) [15] where a full teacher model is trained on the cloud or an on-premise GPU cluster, and a student model is trained at the mobile device with the "knowledge" distilled via the soft labels from the teacher model. Thus the student model is trained to mimic the outputs of the teacher model as well as to minimize the cross-entropy loss between the true labels and predictive probabilities (soft labels). KD yields compact student models that demonstrate outstanding performance in a wide range of real-world applications, e.g., COVID-MobileXpert [11], on-device text classification [14], etc. Recently, the technique has also been applied to develop mobile SAM [14]. ## Methodology In this section, we explain how to modify the original SAM architecture developed for 2D natural images to work with 3D volumetric medical images for segmentation tasks. We first provide a brief overview of the SAM framework (as shown in Fig. 2), followed by a detailed explanation of the adjustments made to the image encoder, auto prompt generator, and mask decoder. ### The SAM Architecture SAM [16] is a prompt-driven image segmentation framework, known for its impressive performance and generalization ability in segmenting natural images. The SAM architecture comprises an image encoder, prompt encoder, and mask decoder. The image encoder utilizes the Vision Transformer (ViT) to transform original images into discrete embeddings. The prompt encoder converts diverse prompts into compact embeddings, achieved by combining fixed positional encoding and adaptable prompt-specific embedding. The mask decoder integrates a prompt self-attention module and bidirectional cross-attention modules for prompt-to-image and image-to-prompt attention. After attention processes, the feature map undergoes upsampling and passes through a multi-layer perceptron (MLP) to generate segmentation masks. However, the model's design suits 2D natural image segmentation, struggling with 3D volumetric medical imagery due to slice-wise predictions that disregard inter-slice spatial context. Performance on medical images also falters due to domain disparities between medical and natural images. Therefore, achieving an effective performance with SAM on medical-imaging tasks requires tailored adaptation and fine-tuning. ### Handling 3D Volumetric Inputs The original SAM model is built upon the 2D Vision Transformer (ViT), excelling in capturing 2D image patterns but facing limitations with 3D medical imaging like CT and MRI. These modalities produce volumetric 3D data, challenging the 2D ViT's processing ability. 
Common medical imaging workflows analyze images slice by slice, integrating information using spatial adaptors or temporal modules, yet the core architecture remains 2D-centric. However, for medical image analysis, the 2D-centric approach falls short due to the inherent 3D nature of volumetric medical images, which have uniform spatial resolutions across dimensions. To address this, we propose an adaptation strategy (Fig. 2A and Fig. 2B) with two aims: enabling the model to directly learn 3D spatial patterns and maintaining continuity by inheriting most parameters from the pre-trained model, introducing easily adjustable incremental parameters: * **Positional Encoding:** The pre-trained ViT model has a lookup table of size \(C\times H\times W\) with positional encoding. Furthermore, we initialize a tunable lookup table of size \(C\times D\) with zeros. To obtain the positional encoding of a 3D point \((d,h,w)\), we add the embedding from the frozen lookup table with \((h,w)\) to the embedding from the tunable lookup table with \((d)\). * **Patch Embedding:** We utilize a combination of \(1\times k\times k\) and \(k\times 1\times 1\) 3D convolutions to approximate the effect of a \(k\times k\times k\) convolution (e.g., \(k=14\)). The \(1\times k\times k\) convolution is initialized with the weights from a pre-trained 2D convolution and remains unaltered during the fine-tuning phase. As for the newly introduced \(k\times 1\times 1\) 3D convolution, we apply depth-wise convolution to decrease the number of parameters that need adjustment. This approach helps in managing the complexity of the model. * **Attention Block:** The attention blocks can be directly adjusted to accommodate 3D features. In the case of 2D inputs, the size of the queries is \([B,HW,C]\), which can be effortlessly modified to \([B,DHW,C]\) for 3D inputs, while retaining all the pre-trained weights. We implement sliding-window mechanisms akin to those in the SwinUNETR [14] to mitigate the memory impact resulting from the increase in dimensions. This approach aids in optimizing the model's performance while managing memory requirements. * **Bottleneck:** Given that convolution layers are generally easier to optimize than transformers, we replace 2D convolutions in the bottleneck with 3D counterparts and train them from scratch to improve performance. By making the above adjustments, we can smoothly transition the 2D ViT into a 3D ViT, reusing most parameters. However, fully fine-tuning the 3D ViT can be resource-intensive. To address this, we propose using a lightweight adapter approach for efficient fine-tuning. The adapter comprises a down-projection linear layer and an up-projection linear layer, represented as \(Adapter(\mathbf{X})=\mathbf{X}+Act(\mathbf{X}W_{down})W_{up}\). Here, \(\mathbf{X}\in\mathbb{R}^{N\times C}\) is the original feature representation, \(W_{down}\in\mathbb{R}^{C\times N^{\prime}}\) and \(W_{up}\in\mathbb{R}^{N^{\prime}\times C}\) are down-projection and up-projection layers, and \(Act(\cdot)\) is the activation function (e.g., \(ReLu\)). To enhance 3D spatial awareness, we include a depth-wise 3D convolution after the down-projection layer, as shown in Fig. 2B. This enhancement improves the adapter's utilization of 3D spatial cues. Throughout the training phase, we exclusively adjust the parameters of convolutions, spatial adapters, and normalization layers, while maintaining all other parameters in a frozen state. This frozen approach enhances memory efficiency during training. 
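To make the adapter description concrete, the following PyTorch sketch implements \(Adapter(\mathbf{X})=\mathbf{X}+Act(\mathbf{X}W_{down})W_{up}\) with a depth-wise 3D convolution inserted after the down-projection. The class name, reduction ratio, kernel size, and the `(B, D*H*W, C)` token layout are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SpatialAdapter3D(nn.Module):
    """Minimal sketch of the spatial adapter: down-projection, depth-wise 3D conv,
    activation, up-projection, and a residual connection."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        hidden = dim // reduction
        self.down = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv3d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.act = nn.ReLU()
        self.up = nn.Linear(hidden, dim)

    def forward(self, x, grid_size):
        # x: (B, D*H*W, C) token sequence from a 3D ViT block
        B, N, C = x.shape
        D, H, W = grid_size
        h = self.down(x)                                  # (B, N, hidden)
        h = h.transpose(1, 2).reshape(B, -1, D, H, W)     # to (B, hidden, D, H, W)
        h = self.dwconv(h)                                # depth-wise 3D convolution
        h = h.flatten(2).transpose(1, 2)                  # back to (B, N, hidden)
        return x + self.up(self.act(h))                   # residual adapter update
```

During fine-tuning, only such adapter parameters, the convolutions, and the normalization layers would be left trainable, with the pre-trained attention weights kept frozen.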
Fine-tuning the adapter and normalization layers aids in bridging the gap between natural images and medical images, enabling the model to adapt more effectively to the medical image domain. ### Auto Prompt Generator The original SAM model utilizes positional embedding to represent the prompt, applying it to both the prompt and the image. This guarantees that prompt and image embeddings for the same position share identical positional encoding. Subsequently, the prompt embedding engages in cross-attention with the image embedding, evolving from positional to semantic attributes. However, this cross-attention, though effective in 2D settings, can trigger over-smoothing issues when extended to 3D feature maps. Adapting to 3D can significantly inflate token numbers, leading to a uniform probability distribution. Prompt-based segmentation might not be suitable for real-world applications due to two main reasons. Firstly, it becomes time-consuming for multi-class prompts. In situations involving multiple classes, generating prompts becomes a time-intensive task. Many public medical image segmentation challenges necessitate simultaneous segmentation of multiple classes. Precisely specifying prompts for each class can be challenging, especially for small or closely located organs or tissues. Additionally, note that segmentation performance heavily relies on the quality of provided prompts, however, prompt quality is difficult to control since crafting accurate prompts demands domain-specific expertise, which might not be universally available. This limitation hampers the effectiveness of prompt-based approaches, particularly for non-expert users. In pursuit of these objectives, we propose to use an Auto Prompt Generator instead of positional encoding to represent the prompt. The whole process is illustrated in Fig. 2C. Instead of using manually generated points or bounding Figure 2: (A) The overall AutoSAM Adapter design, (B) the architecture of spatial adapter module, (C) the architecture of Auto Prompt Generator, and (D) the pipeline of deriving lightweight SwinUNETR from AutoSAM Adapter through the knowledge distillation process. boxes, we directly take the output feature map after the last block of attention and spatial adapter operation. This Auto Prompt Generator follows a fully convolutional neural (FCN) based encoder-decoder design that resembles 3D UNet [12]. This generator boasts a lightweight structure, leveraging 3D-based convolution operations, and can be effortlessly learned from scratch. This enables precise prompt generation tailored to different medical segmentation tasks. Notably, it eliminates the need for additional manually generated prompts, simplifying and expediting the multi-class medical image segmentation tasks. ### Lightweight Mask Decoder The mask decoder in SAM is intentionally lightweight, employing stacks of convolution layers. We update this design by replacing all 2D convolutions with 3D convolutions, enabling direct 3D mask generation. The initial decoder, devoid of progressive upsampling or skip connections, is effective for natural images where object sizes are generally substantial, and boundaries are distinct. Nonetheless, in the context of volumetric medical image segmentation, it's widely recognized that U-shaped networks featuring skip connections at multiple levels are crucial [13]. This is due to the fact that medical image objects are often diminutive, and their boundaries are frequently indistinct. 
Consequently, such images demand networks capable of preserving higher-resolution details for improved discrimination, making the adoption of U-shape architectures with skip connections imperative. To tackle this challenge while maintaining a lightweight design, we utilize a multi-layer aggregation mechanism [14] in our decoder. Here, the encoder's intermediate outputs are concatenated, enriching the mask feature map without compromising model efficiency. For enhanced resolution information, we upsample the mask feature map to match the original resolution. This upsampled map is concatenated with the original image and fused using another 3D convolution, generating the final mask. This strategy seamlessly integrates high-resolution details and original image data into the mask-generation process. We simplify the original SAM by removing multi-masks generation and ambiguity awareness, aiming to fine-tune it for a specific downstream task. The mask decoder's backbone predominantly consists of lightweight 3D convolutional layers, known for their optimization-friendliness. Hence, we train all parameters from scratch. ### Knowledge Distillation Despite our efforts to design significantly lightweight modules compared to the original SAM, there remains a challenge in reducing the initial weight complexity within SAM's ViT encoder segment. This encoder component holds a significant portion of the model's parameters, making it difficult to seamlessly integrate the AutoSAM Adapter into POCT. Inspired by the simplicity of KD techniques and the availability of medical segmentation-specific model designs, we take an additional step forward. Our objective is to transfer the accumulated knowledge from the larger AutoSAM Adapter (around 600 million parameters) to a much smaller SwinUNETR model (around 10 million parameters). This approach aims to bridge the gap between complex models and resource-efficient solutions, fostering advancements in practical medical image segmentation within the academic realm. ### Loss Function For training the AutoSAM Adapter (as shown in Fig. 2A), a combination of Dice loss and Cross-Entropy loss is used to assess the alignment between the predicted mask and the ground truth on a pixel-wise basis. The objective function for the segmentation head is defined as follows: \[\mathcal{L}_{seg}=\mathcal{L}_{dice}(\hat{p}_{i},g_{i})+\mathcal{L}_{ce}(\hat{p }_{i},g_{i}), \tag{1}\] where \(\hat{p}_{i}\) represents the predicted probabilities from the main task, and \(g_{i}\) represents the ground truth mask for an input volume \(i\). The predicted probabilities, \(\hat{p}_{i}\), result from applying the AutoSAM Adapter to the input 3D volume for the main task. Regarding the KD process (as illustrated in Fig. 2D), we adopt the following formulation: \[\mathcal{L}_{tol}=\lambda\mathcal{L}_{seg}+(1-\lambda)\mathcal{L}_{mse}, \tag{2}\] where \(\lambda\) serves as a hyperparameter regulating how much the lightweight SwinUNETR model should learn from both the prediction mask generated by the AutoSAM Adapter and the ground truth and \(\mathcal{L}_{mse}=\frac{1}{N}\sum_{i=1}^{N}(\hat{p}_{i},g_{i})^{2}\). This approach enables the transfer of knowledge from the AutoSAM Adapter to the SwinUNETR model while striking a balance between the two information sources. ## Experiments ### Datasets and Evaluation Metrics **BTCV Dataset.** Beyond the Cranial Vault (BTCV) abdomen challenge dataset [1] includes 30 subjects with abdominal CT scans. 
In this dataset, 13 organs are annotated by interpreters under the supervision of radiologists at Vanderbilt University Medical Center. Each CT scan is acquired during the portal venous contrast enhancement phase and consists of 80 to 225 slices. These slices have dimensions of 512x512 pixels, and the slice thickness ranges from 1 to 6 \(mm\). The multi-organ segmentation task is framed as a 13-class segmentation challenge with 24 scans for training and 6 scans for testing. **AMOS Dataset.** We also employ the publicly accessible AMOS2022 dataset [11]. It consists of 200 multi-contrast abdominal CT scans for training and 100 scans for testing, sourced from AMOS 2022. These scans are annotated for sixteen anatomies, enabling assessment of abdominal multi-organ segmentation. #### Evaluation Metrics. We utilize the Dice coefficient and the Normalized Surface Distance (NSD) [15] as metrics to evaluate the segmentation performance. The NSD metric quantifies the agreement between ground truth and predicted surfaces, considering a fixed tolerance. Unlike comparing two volumes, this metric assesses the overlap between surface structures. ### Implementation Details We implement our approach and establish baseline comparisons using both PyTorch and MONAI frameworks. All experiments and comparisons employ SAM-B, utilizing ViT-B as the backbone for the image encoder. Model training is conducted with a batch size of 1 on NVIDIA A100 GPUs, utilizing the AdamW optimizer [10]. A learning rate scheduler with exponential decay, incorporating 5 epochs of warmup and a maximum of 200 epochs, is employed. The initial learning rate is set at \(5e^{-4}\), with a momentum of 0.9 and weight decay of \(1e^{-5}\). A Houndsfield unit (HU) range of \([-125,275]\) is normalized to the interval \([0,1]\) for the BTCV dataset [13]. Following the procedure outlined in [12], HU values for each scan in the AMOS dataset are clipped to the range \([-991,362]\). Subsequently, truncated voxel values are normalized by subtracting 50 and dividing by 141. For both datasets, all CT scans are interpolated into an isotropic voxel spacing of \([1.0\times 1.0\times 1.5]\) mm, and each CT scan is then cropped to a \(128\times 128\times 128\) input patch for 3D models. Data augmentation includes random flip, rotation, and intensity scaling with probabilities of 0.1, 0.1, and 0.2, respectively. During training, foreground and background patches are randomly sampled at a \(1:1\) ratio. Our method's performance is evaluated by comparing it against SOTA volumetric segmentation approaches. ### Comparison with SOTA We extensively compare our model with the SOTA 3D medical image segmentation approaches, including the most recent Transformer-based methods including UNFTR [15], SwinUNETR [13], and nnFormer [14], as well as CNN-based methods such as nnUNet [10]. As reported in Table 1, we observe that the proposed AutoSAM Adapter outperforms all other SOTA methods in both Dice and NSD metrics for both BTCV and AMOS datasets. Distinct improvements can be specifically observed for the BTCV dataset, e.g., \(1\%\sim 3\%\) improvement for the average Dice score and \(3\%\sim 7\%\) improvement for the average NSD metric. With the increase of training samples for the AMOS dataset compared with the BTCV dataset, the proposed AutoSAM Adapter can even achieve better overall performance, i.e., \(1\%\sim 14\%\) improvement for Dice and \(2\%\sim 19\%\) improvement for NSD. This is also evident from Fig. 
3, which shows qualitative comparisons over the predicted masks for different segmentation models. The AutoSAM Adapter demonstrates visually better mask prediction results with more accurate boundaries over the competing SOTA approaches. In contrast to the original SAM, which demonstrates subpar performance on medical image segmentation tasks compared to the SOTA [13], our design shows significant improvements. ### Comparison with Existing Adapters We further compare our adaptation strategy with existing adaptation methods for multi-class medical image segmentation, which include 2D adaptations such as MedSAM [15], \begin{table} \begin{tabular}{l|c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Tuned Params.} & \multicolumn{2}{c|}{BTCV} & \multicolumn{2}{c}{AMOS} \\ & & mDice(\%) & mNSD(\%) & mDice(\%) & mNSD(\%) \\ \hline \hline nnUNet & 31.18M & 84.34 & 73.21 & 87.43 & 77.12 \\ nnFormer & 150.14M & 83.51 & 71.65 & 84.52 & 70.06 \\ UNETR & 93.02M & 85.47 & 74.35 & 77.24 & 60.58 \\ SwinUNETR & 62.83M & 86.58 & 75.26 & 86.19 & 74.83 \\ \hline Ours & 26.53M & **87.15** & **78.83*** & **88.65*** & **79.41*** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the overall performance of four SOTA approaches to AutoSAM ADapter (ours) on BTCV and AMOS datasets, respectively. The best results are presented in bold font. (*: \(p<0.01\), with Wilcoxon signed-rank test to all SOTA approaches) Figure 3: Qualitative visualizations of the proposed AutoSAM Adaptor (ours) and baseline methods. Three representative subjects are demonstrated. Regions of evident improvements are enlarged to show better details of spleen (light green), left kidney (light red), and pancreas (beige). and Wang (2023). Other methods are not considered since they either do not have publicly available code or are not designed for multi-organ segmentation. MedSAM can be implemented with full fine-tuning or can do partial fine-tuning by only updating the prompt encoder and mask decoder. We also compare our AutoSAM Adapter with the original SAM. All the pre-trained weights are from SAM-B. The outcomes are meticulously outlined in Table 2, underscoring how our adaptation strategy surpasses all existing methods. Notably, our approach outshines the second-best technique by a margin of 3% in terms of BTCV segmentation Dice, 5% for NSD, and 6% concerning AMOS segmentation Dice, accompanied by a substantial 9% for NSD. Impressively, it even outperforms the complete fine-tuning variant of MedSAM, even when considering parameter-efficient fine-tuning. These results effectively validate our hypothesis that parameters pre-trained on 2D images can be effectively harnessed to grasp 3D spatial features with only minor adjustments. Moreover, our approach of treating all dimensions equivalently emerges as a superior strategy compared to interpreting the depth dimension as a distinct group in the context of medical image segmentation. ### Lightweight Models via Knowledge Distillation To address the requirement for lightweight models in the context of POCT, we have taken a further step towards compressing the AutoSAM Adapter into lightweight SwinUNETRs (specifically, the tiny or small versions) using a straightforward KD process, as illustrated in Fig. 2D. For the experiments, the tuning parameter \(\lambda\) has been set to 0.5 for the KD learning process. Other training strategies remain consistent with those employed in optimizing the AutoSAM Adapter. 
The outcomes pertaining to BTCV's average Dice scores, both with and without the utilization of KD, are shown in Fig. 4. When compared to their KD-absent counterparts, models incorporating KD demonstrate a marked improvement in the average Dice scores. Specifically, SwinUNETR-Tiny (with a feature size of 12 and 4.0M parameters) displays an approximate 4% enhancement, while SwinUNETR-Small (with a feature size of 24 and 15.7M parameters) exhibits a comparable advancement. ### Ablation Study **Effects of Auto Prompt Generator.** Given that the Auto Prompt Generator leverages feature maps from the final stage of the attention block in the image encoder, these feature maps can be employed as direct inputs for the mask decoder. We conducted a comparative analysis of performance with and without the utilization of the Auto Prompt Generator on the BTCV dataset. The outcomes are presented in Table 3, revealing noteworthy enhancements in both Dice (approximately 2%) and NSD (around 3%) metrics when the Auto Prompt Generator is employed. **Effects of \(\lambda\) for Knowledge Distillation.** By tuning \(\lambda\) in Equation (2), we can change the weight for the SwinUNETR-small learn from the ground truth of BTCV dataset or from the AutoSAM Adapter teacher. With the increase of \(\lambda\), the performance increases first and decreases after \(\lambda=0.5\) (Table 4). ## Conclusion In this study, we proposed AutoSAM Adapter architecture for 3D-based multi-organ medical image segmentation. Adapting the Segment Anything Model (SAM) from 2D to 3D medical images poses challenges in domain differences, spatial disparities, computation demands, and manual prompt generation complexities. Our approach overcomes these hurdles through parameter-efficient adaptation techniques and an automatic prompt generation framework to simplify SAM's application for 3D-based multi-organ segmentation tasks. The knowledge gained through a KD process also enhances the performance of other lightweight 3D medical image segmentation models. With comprehensive experiments, we validated the effectiveness of the proposed AutoSAM Adapter, thereby establishing a sturdy foundation for the advancement of image segmentation within the intricate landscape of medical imaging. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \(\lambda\) & 0 & 0.2 & 0.5 & 0.8 & 1.0 \\ \hline \hline mDice (\%) & 78.54 & 82.45 & **83.27** & 81.14 & 79.63 \\ \hline \end{tabular} \end{table} Table 4: The impact of \(\lambda\) for the KD process. Figure 4: The comparison of Dice metric on BTCV dataset with/without using the KD for lightweight models. \begin{table} \begin{tabular}{l|c|c c|c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Tuned Option} & \multicolumn{2}{c|}{BTCV} & \multicolumn{2}{c}{AMOS} \\ & & mDice(\%) & mNSD(\%) & mDice(\%) & mNSD(\%) \\ \hline \hline SAM point & None & 54.86 & - & 49.31 & - \\ MedSAM point & P\&M & 80.32 & 66.31 & 70.0 & 58.45 \\ \hline MedSAM point & Full & 84.32 & 73.63 & 82.65 & 70.13 \\ Ours & P\&M & **87.15** & **78.83** & **88.65** & **79.41** \\ \hline \end{tabular} \end{table} Table 2: Comparison with existing existing adaptation methods. The best results are bolded. P&M: only fine-tuning the prompt encoder and mask decoder, Full: fully fine-tuning, and None: no fine-tuning.
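For completeness, a minimal PyTorch sketch of the training objectives in (1) and (2) is given below. Function names and tensor shapes are assumptions; in particular, since (2) describes distilling from the AutoSAM Adapter's predicted masks, the MSE term is taken here between the student's soft predictions and the teacher's soft predictions rather than the ground truth.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-5):
    """Sketch of Eq. (1): soft Dice + cross-entropy for multi-class 3D segmentation.
    logits: (B, K, D, H, W) raw scores; target: (B, D, H, W) integer labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()
    return dice + ce

def kd_total_loss(student_logits, teacher_probs, target, lam=0.5):
    """Sketch of Eq. (2): lambda * L_seg + (1 - lambda) * L_mse against the teacher."""
    seg = dice_ce_loss(student_logits, target)
    mse = F.mse_loss(torch.softmax(student_logits, dim=1), teacher_probs)
    return lam * seg + (1.0 - lam) * mse
```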
2302.01541
Contrastive Learning with Consistent Representations
Contrastive learning demonstrates great promise for representation learning. Data augmentations play a critical role in contrastive learning by providing informative views of the data without necessitating explicit labels. Nonetheless, the efficacy of current methodologies heavily hinges on the quality of employed data augmentation (DA) functions, often chosen manually from a limited set of options. While exploiting diverse data augmentations is appealing, the complexities inherent in both DAs and representation learning can lead to performance deterioration. Addressing this challenge and facilitating the systematic incorporation of diverse data augmentations, this paper proposes Contrastive Learning with Consistent Representations CoCor. At the heart of CoCor is a novel consistency metric termed DA consistency. This metric governs the mapping of augmented input data to the representation space, ensuring that these instances are positioned optimally in a manner consistent with the applied intensity of the DA. Moreover, we propose to learn the optimal mapping locations as a function of DA, all while preserving a desired monotonic property relative to DA intensity. Experimental results demonstrate that CoCor notably enhances the generalizability and transferability of learned representations in comparison to baseline methods.
Zihu Wang, Yu Wang, Zhuotong Chen, Hanbin Hu, Peng Li
2023-02-03T04:34:00Z
http://arxiv.org/abs/2302.01541v2
# Contrastive Learning with Consistent Representations ###### Abstract Contrastive learning demonstrates great promise for representation learning. Data augmentations play a critical role in contrastive learning by providing informative views of the data without needing the labels. However, the performance of the existing works heavily relies on the quality of the employed data augmentation (DA) functions, which are typically hand picked from a restricted set of choices. While exploiting a diverse set of data augmentations is appealing, the intricacies of DAs and representation learning may lead to performance degradation. To address this challenge and allow for a systemic use of large numbers of data augmentations, this paper proposes Contrastive Learning with Consistent Representations (CoCor). At the core of CoCor is a new consistency measure, DA consistency, which dictates the mapping of augmented input data to the representation space such that these instances are mapped to optimal locations in a way consistent to the intensity of the DA applied. Furthermore, a data-driven approach is proposed to learn the optimal mapping locations as a function of DA while maintaining a desired monotonic property with respect to DA intensity. The proposed techniques give rise to a semi-supervised learning framework based on bi-level optimization, achieving new state-of-the-art results for image recognition. ## 1 Introduction Data augmentation is widely used in supervised learning for image recognition, achieving excellent results on popular datasets like MNIST, CIFAR-10/100, and ImageNet [7, 22, 23, 26, 16]. AutoAugment [8] and RandAugment [9] use strong augmentations composed of a combination of different transformations. It has been shown that these manually designed data augmentations can effectively increase the amount of training data and enhance the model's ability to capture the invariance of the training data, thereby boosting supervised training performance. Data augmentation (DA) is also a key component in recent semi-supervised learning techniques based on contrastive learning [5]. A base encoder that learns good latent representations of the input data is trained with a contrastive loss. The contrastive loss is reflective of the following: in the latent space a pair of two views of a given data example transformed by two different data augmentation functions are correlated (similar) while transformed views of different input examples are dissimilar. The quality of the encoder trained using unlabeled data as such is essential to the overall contrastive learning performance, and is contingent upon the employed data augmentations. [27] interprets the minimization of the contrastive loss is to achieve alignment and uniformity of the learned representations. To learn powerful and transferable representations, many studies have aimed at improving contrastive learning by choosing proper data augmentations [5, 25, 11] have shown that combining different augmentations with a proper magnitude may improve performance. In these works, data augmentations are restricted to be a random composition of four or five types of augmentations with a very limited range of magnitude. To further improve contrastive learning in downstream tasks, PIRL [20] adopts two additional data augmentations, SwAV [4] introduced multiple random resized-crops to provide the encoder with more views of data and CLSA [28] adopts combinations of stronger augmentations. 
While exploiting a diverse set of data augmentations is appealing, thus far, there does not exist a systematic approach for incorporating a large number of augmentations for contrastive learning. Importantly, the intricacies of data augmentation and representation learning may result in performance degradation if data augmentation functions are not properly chosen and used. To address the above challenges, this paper proposes **C**ontrastive **L**earning with **C**onsistent **R**epresentations (CoCor). First, we define a set of _composite augmentations_, denoted by \(\mathbf{A}\) and composed by combining multiple basic augmentations. We represent each composite augmentation using a _composition vector_ that encodes the types and numbers of basic augmentations used and their augmentation magnitude. A large number of diverse augmentations can be drawn from this composite set \(\mathbf{A}\) with widely varying transformation types and overall strength. Critically, we propose a notion of _consistency_ for data augmentations, called _DA consistency_, which imposes a desiring property on the latent representations of views produced by different composite data augmentations. For a given input example, a stronger data augmentation shall push its latent-space view further away from the latent representation of the original example. That is, the latent-space similarity between the transformed view and the original input decreases with the strength of the data augmentation, a very desiring characteristic, as illustrated in Fig. 1(b). In other words, a base encoder that does not satisfy the above property is deemed _inconsistent_ from a data augmentation point of view. We consequently propose to add a new _consistency loss_ to impose _DA consistency_ on the base encoder. Different from the contrastive loss that is agnostic to the type and overall strength of employed data augmentations [5; 25; 11], our _consistency loss_ places additional finer-grained constraints on the encoder learning such that the resulted latent-space views are consistent with the strength of data augmentations applied. Enforcing _DA consistency_ is key to circumventing inconsistencies among diverse data augmentations, which may result in performance degradation. The proposed consistency loss is defined upon the optimal similarity between the latent-space representations of an input example and a transformed view, which is a function of the strength of the composite data augmentation. However, this optimal similarity is not known _a prior_. To cope with this difficulty, we take a data-driven approach to train a black-box neural network that maps from the composition vector of each data augmentation to the desired similarity using bi-level optimization. Recognizing that the similarity monotonically decreases with the strength of augmentation, we define and impose a partial monotonicity constraint to the black-box neural network such that strictly stronger composition data augmentations correspond to a strictly smaller valued similarity. The main contributions to contrastive learning from this paper are as follows: * Systematically explore a large set of diverse composite data augmentations for which we define a partial monotonicity concept. * Propose a new notion of consistency called _DA consistency_ for data augmentation, quantifying the monotonic dependency of the latent-space similarity between an input example and its transformed view as a function of the strength of the augmentation. 
Figure 1: (a): Contrastive learning aims at learning well-clustered representations on the latent hypersphere [27]. (b)-Left: Encoder trained with standard contrastive loss is inconsistent since different views of an instance are encouraged to be represented namely in the feature space, regardless of the actual difference between them. (b)-Right: For a consistent encoder, the latent view of stronger augmented data should be further away from the latent view of the raw data. * Introduce a new _consistency loss_ for training the base encoder towards satisfying DA consistency and for making use of diverse augmentations without inconsistency-induced performance loss. * Propose a data-driven approach to learn the desired latent-space similarities with properly defined partial monotonicity based on bi-level optimization. The proposed techniques give rise to a semi-supervised learning framework based on bi-level optimization, achieving new state-of-the-art results for image recognition. ## 2 Related works ### Contrastive Learning Recently, a lot of contrastive learning methods have shown impressive results in visual representation learning. SimCLR [5] takes advantage of manually designed data augmentation to produce a pool of negative and positive data pairs in minibatches. A momentum updated encoder and a large representations memory bank were introduced to further improve the performance in MOCO [11]. Alignment and uniformity [27] were introduced to describe the quality of representations. It was shown that by minimizing contrastive loss alignment and uniformity of features can be asymptotically achieved. ### Data Augmentation In supervised deep learning, data augmentation has already been applied to natural image datasets while training, which helps improve the generalization and robustness of learnt models. Works like AutoAugment [8] and RandAugment [9] adopted combinations of various transformations to produce strong data augmentation. In self-supervised learning [5; 25; 11], data augmentations are carefully designed by sampling from a highly restricted set of augmentations to maintain identity of instances and avoid noise. SwAV [4] used multiple random resized-crops to produce more views of data. CLSA [28] utilized combinations of augmentations as stronger augmentation, in which the encoder is trained to match the distribution of the representations of weakly and strongly augmented instances. ### Bi-level Optimization In deep learning, bi-level optimization is used to adaptively adjust the learning rate during training [1]. DARTS [19] proposed an approach of architecture search based on bi-level optimization. In semi-supervised and self-supervised learning, Meta Pseudo Labels [21] adopted bi-level optimization to simultaneously update a student model and a teacher model. JOAO [30] used bi-level optimization in adaptively adjusting data augmentation applied to graph-structured data. ## 3 Preliminaries ### Contrastive Learning with Memory The goal of contrastive representation learning is to learn a powerful encoder parameterized by \(\mathbf{\theta}_{\mathbf{e}}\) that maps an input data \(\mathbf{x}\in\mathbb{R}^{n}\) to an \(\ell_{2}\) normalized feature vector \(\mathbf{z}\) of dimension \(m\), i.e., \(f_{\mathbf{\theta}_{\mathbf{e}}}(\cdot):\mathbb{R}^{n}\rightarrow\mathcal{S}^{m-1}\). The encoder is trained by minimizing a contrastive loss. 
Specifically, it defines two different augmented versions of the same input example as a positive pair, which are expected to have similar representations in the feature space. Meanwhile, the encoder shall be trained to discriminate any two instances augmented from different input examples, i.e., a negative pair, in the latent feature space. Empirically, a queue \(\mathcal{Q}\) of negative embeddings is maintained for forming negative pairs with augmented data from each minibatch \(\mathcal{B}\). In this paper, we consider the wide adopted loss that has been shown to be effective in representation learning [5; 11; 24]: \[\mathcal{L}_{\text{contrast}}=\mathbb{E}_{i\in\mathcal{B}}[-\log\frac{e^{\pi_{i }^{T}\mathbf{z}_{i}^{+}/\tau}}{e^{\pi_{i}^{T}\mathbf{z}_{i}^{+}/\tau}+\sum_{z_{j}\in \mathcal{Q}}e^{\pi_{i}^{T}\mathbf{z}_{j}/\tau}}] \tag{1}\] where \(\mathbf{z}_{i}\) and \(\mathbf{z}_{i}^{+}\) denote the representations of a positive pair, \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\) are representations of a negative pair. \(\mathbf{z}^{T}\mathbf{z}\) denotes the cosine similarity of feature vectors, which is scaled by temperature \(\tau\). ### Inconsistency in Contrastive Learning Data augmentation plays a pivotal role in contrastive learning and needs to be carefully designed to maintain the latent structure of the original data. Despite that a variety of augmentations have been successfully applied to supervised settings, contrastive learning benefits from diverse augmentations to a lesser degree, and only a subset of "weak" augmentations have been empirically shown to be effective. For instance, in the popular contrastive learning framework of [5; 11; 6; 25], the adopted data augmentations are limited to a combination of three to five types of basic transformations, which are chosen empirically according to the experience. Adding to or removing from this pre-selected basic transformations can deteriorate the performance, a phenomenon that is not well understood. Moreover, although a few works have attempted to introduce "stronger" augmentations[4; 28; 20], there still lacks a systematic view of how augmentations with a widely varying strength affect the learning process and how to leverage a larger set of diverse augmentations to improve, as opposed to, degrade the performance. Our key observation is that the vanilla contrastive learning framework[5; 11] does not imply _data augmentation (DA) consistency_, a key property proposed by this work, which should be maintained during contrastive learning. Without the consistency in place, the optimized encoder simply pulls positive pairs to the same position in the latent space \(\mathcal{S}^{m-1}\), regardless of the actual difference between them. However, a weakly augmented view and a strongly augmented view of an input example do not necessarily share the same latent structure, and it may be desirable to encode them to different regions in the feature space. An illustrated example is shown in Fig. 1 (b). In this case, the encoder that tries to pull the positive pairs together learns an incorrect latent representation. We refer to this phenomenon as data augmentation _inconsistency_ in contrastive learning. To prevent this inconsistency, the encoder should be trained to not only cluster positive pairs together, but also map them to their respective optimal location on \(\mathcal{S}^{m-1}\), based on the strength of the applied augmentations. 
We thereby define this property as _data augmentation consistency_, shortly _DA consistency_, an additional constraint for contrastive learning. The proposed DA consistency takes into account the critical role played by the augmentation strength in the learning process, removes potential inconsistencies across different augmentations, and facilities effective use of large numbers of diverse augmentations. ## 4 Method ### Composite Augmentations **Definition 4.1** (Composite Augmentations).: Given a set of \(k\) basic augmentations \(\mathbf{T}=\{T_{1},T_{2}\cdots T_{k}\}\), a composite augmentation with a length of \(l\), namely, \(A^{l}\) is defined as the composition of \(l\) randomly sampled augmentations from \(\mathbf{T}\): \[\begin{split} A^{l}=T^{1}\circ T^{2}\circ\cdots T^{l}\\ \text{where}\;\;T^{i}\in\mathbf{T},\;i=1,2\cdots l\end{split} \tag{2}\] We denote the set that contains all composite augmentations with a length of \(l\) by \(\mathbf{A}^{l}\), and denote the set that contains all composite augmentations with different lengths by \(\mathbf{A}=\mathbf{A}^{1}\cup\mathbf{A}^{2}\cdots\). Empirical studies showed that the ordering of the basic augmentations in a composite augmentation does not have a significant impact on performance. To simplify the discussion, we do not consider the effect of the ordering of the basic transformations in each composite augmentation \(A\in\mathbf{A}\) and hence assume that \(A\) is order invariant, e.g., \(T^{1}\circ T^{2}=T^{2}\circ T^{1}\). Thereby, we represent a composite augmentation with an unique composition vector. **Definition 4.2** (Composition Vector).: The composition vector of a composite augmentation \(A^{l}\) is defined as a \(k\) dimensional vector \(\mathbf{V}(A^{l})\in\mathbb{R}^{k}\), where the \(i_{th}\) entry \(V_{i}\) is the number of times that the basic augmentation \(T_{i}\) is used in \(A^{l}\). Specifically, \(\sum_{i=1}^{k}V_{i}=l\). Given an encoder \(f\), the location in \(\mathcal{S}^{m-1}\) to which an encoded augmented view \(A(\mathbf{x})\) is mapped can be characterized by the cosine similarity between \(f(\mathbf{x})\) and \(f(A(\mathbf{x}))\), which we define as latent deviation below. **Definition 4.3** (Latent Deviation).: Given a raw input example \(\mathbf{x}\), an encoder \(f\), and a composite augmentation \(A\in\mathbf{A}\), the latent deviation \(\Omega(\cdot;f,A)\) of \(A(\mathbf{x})\) is defined as the cosine similarity between \(\mathbf{x}\) and \(A(\mathbf{x})\) in the latent space \(\mathcal{S}^{m-1}\): \[\Omega(\mathbf{x};f,A)=f(\mathbf{x})^{T}\cdot f(A(\mathbf{x})). \tag{3}\] We assume that encoder \(f\) maps its input to a normalized vector in the latent space. ### Data Augmentation Consistency Defining the aforementioned _DA consistency_ in contrastive learning requires quantifying the strength of different augmentations and learning a mapping from the strength to the optimal mapped location in \(\mathcal{S}^{m-1}\) with respect to a given raw input example \(\mathbf{x}\). We specify a mapped latent space location using the corresponding latent deviation. To be DA consistent, latent deviations of different augmentations shall be monotonically decreasing with the augmentation strength. However, a straightforward definition of augmentation strength is challenging. This is because that two composite augmentations may consist of different basic augmentation types and it is not immediately evident how to define and compare their strength levels. 
To tackle this problem, we take a black-box data-driven approach to directly learn a mapping \(g^{*}\) that takes a composite augmentation as input and maps it to the optimal latent space location of the augmented view of each given raw input \(\mathbf{x}\), relaxing the necessity of modeling the strength of different augmentations. Since a key objective of contrastive learning is to find an optimal encoder \(f^{*}\) that delivers the best performance, the optimal mapped location outputted by the mapping \(g^{*}\) is specified as the corresponding latent deviation of an optimal encoder \(f^{*}\). Under this optimal \(f^{*}\), the mapped latent space location \(f(A(\mathbf{x}))\) in \(\mathcal{S}^{m-1}\) corresponds to the optimal latent deviation. The optimal encoder \(f^{*}\) is defined to be _fully data-augmentation consistent_, or _fully DA consistent_ in short, and its latent deviation for any given composite augmentation is considered optimal. For other encoders that are not fully DA consistent, we further define the DA consistency level as the degree to which the learned representations are consistent in terms of data augmentation. **Definition 4.4** (Consistency Level).: Given an encoder \(f\), a set of composite augmentations \(\mathbf{A}\), a raw input \(\mathbf{x}\), and a fully DA consistent encoder \(f^{*}\), the DA consistency level (DACL) is defined as: \[\text{DACL}(\mathbf{x},f;f^{*},\mathbf{A})=\mathbb{E}_{A\sim\mathbf{A}}[|\Omega(\mathbf{x};f,A)-\Omega(\mathbf{x};f^{*},A)|] \tag{4}\] DACL measures the level of consistency of a given encoder \(f\) by comparing the latent deviation of the augmented view of \(\mathbf{x}\) with the corresponding optimal latent deviation \(\Omega(\mathbf{x};f^{*},A^{l})\) over all possible composite augmentations. With DACL, an encoder that is not fully DA consistent can be trained to be more DA consistent by minimizing the _consistency loss_ defined below: \[\mathcal{L}_{\text{consistent}}(\mathbf{\theta}_{\mathbf{e}})=\mathbb{E}_{\mathbf{x}\sim\mathcal{B}}[\text{DACL}(\mathbf{x},f;f^{*},\mathbf{A})]=\mathbb{E}_{\mathbf{x}\sim\mathcal{B},A\sim\mathbf{A}}[|\Omega(\mathbf{x};f,A)-\Omega(\mathbf{x};f^{*},A)|] \tag{5}\] where \(\mathbf{\theta}_{\mathbf{e}}\) denotes the parameters of the encoder \(f\). ### Learning Optimal Latent Deviations with a Partially Monotonic Neural Network Improving the consistency of an encoder by minimizing the proposed consistency loss requires determining the optimal latent deviations \(\Omega^{*}=\Omega(\mathbf{x};f^{*},A)\) in Eq. (5). In practice, we adopt a data-driven method to approximately learn the optimal latent deviations. We introduce a neural network \(g_{\theta_{d}}(\cdot)\), parameterized by \(\mathbf{\theta}_{\mathbf{d}}\), to predict the optimal latent deviation for a given composite augmentation specified by its composition vector \(\mathbf{V}\), i.e., \(g_{\theta_{d}}(\mathbf{V})\cong\Omega^{*}(\mathbf{V})\). Although it is possible to approximate the optimal latent deviation \(\Omega^{*}\) with a black-box neural network with no prior information, doing so may violate the important partially monotonic constraint discussed next. As the overall strength of a composite augmentation increases, e.g., by incorporating additional basic augmentations, the similarity between the augmented data and the raw data is expected to decrease. 
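As a concrete reference, the latent deviation of Eq. (3) and the consistency loss of Eq. (5) can be sketched as follows; the helper names are hypothetical, and the optimal deviations \(\Omega^{*}\) are assumed to be supplied externally (in practice by the PMNN introduced below).

```python
# Sketch of the latent deviation (Eq. 3) and the consistency loss (Eq. 5).
# `encoder`, `augs`, and `optimal_deviation` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def latent_deviation(encoder, x, aug):
    z_raw = F.normalize(encoder(x), dim=1)        # f(x) on the unit hypersphere
    z_aug = F.normalize(encoder(aug(x)), dim=1)   # f(A(x))
    return (z_raw * z_aug).sum(dim=1)             # cosine similarity, Eq. (3)

def consistency_loss(encoder, x, augs, optimal_deviation):
    # optimal_deviation(aug) plays the role of Omega(x; f*, A) in Eq. (5).
    terms = [(latent_deviation(encoder, x, aug) - optimal_deviation(aug)).abs().mean()
             for aug in augs]
    return torch.stack(terms).mean()
```

The expectation that this deviation shrinks as the augmentation becomes stronger is what the following paragraphs formalize as a monotonicity constraint.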
For instance, even though it is difficult to directly compare the impact of _rotation_ with _saturation_ on the input data, applying _saturation_ twice would certainly distort the data more than applying it once. Thus, we enforce partial monotonicity of the network's output w.r.t. the input vector \(\mathbf{V}\), i.e., the output is monotonically decreasing w.r.t. every entry in the input vector. We consequently call the mapping network a partially monotonic neural network (PMNN). We replace \(\Omega^{*}\) in the original consistency loss (Eq. (5)) by the value estimated by the PMNN and rewrite the consistency loss as: \[\mathcal{L}_{\text{consistent}}(\mathbf{\theta}_{\mathbf{e}},\mathbf{\theta}_{\mathbf{d}})=\mathbb{E}_{\mathbf{x}\sim\mathcal{B},A\sim\mathbf{A}}[|\Omega(\mathbf{x};f,A)-g_{\theta_{d}}(\mathbf{V}(A))|] \tag{6}\] The overall loss function to optimize the encoder can be written as: \[\mathcal{L}_{u}(\mathbf{\theta}_{e},\mathbf{\theta}_{d})=\mathcal{L}_{\text{contrast}}(\mathbf{\theta}_{e})+\mathcal{L}_{\text{consist}}(\mathbf{\theta}_{e},\mathbf{\theta}_{d}) \tag{7}\] The PMNN aims to learn the optimal latent deviation based on the composition of the augmentation that has been applied. However, obtaining the accurate optimal latent deviation is challenging since, in an unsupervised learning framework, we lack information about the optimality of latent deviations in the feature space. We hence introduce a small amount of supervision to help the PMNN more accurately predict the optimal latent deviation. Note that in Eq. (7), the update of the encoder's parameters \(\mathbf{\theta}_{e}\) always depends on \(\mathbf{\theta}_{d}\) via the optimal latent deviation prediction in the consistency loss. We can then express this dependency as \(\mathbf{\theta}_{e}(\mathbf{\theta}_{d})\), i.e., viewing \(\mathbf{\theta}_{e}\) as a "function" of \(\mathbf{\theta}_{d}\). Therefore, we could further optimize \(\mathbf{\theta}_{d}\) on labeled data: \[\begin{split}&\min_{\mathbf{\theta}_{d}}\quad\text{CE}(\mathbf{x}_{t},y;\mathbf{\theta}_{e}^{*}(\mathbf{\theta}_{d}))\\ \text{s.t.}&\mathbf{\theta}_{e}^{*}(\mathbf{\theta}_{d})=\operatorname*{arg\,min}_{\mathbf{\theta}_{e}}\mathcal{L}_{u}(\mathbf{\theta}_{e},\mathbf{\theta}_{d})\end{split} \tag{8}\] where \(\mathbf{x}_{t}\) is labeled data, \(y\) is the corresponding label, and CE denotes the cross entropy criterion. Additionally, in order to update \(\theta_{d}\) by evaluating the encoder's performance on labeled data, we adopt a linear classifier that is trained on the fly with the PMNN on the small amount of labeled data. When calculating the cross entropy loss in Eq. (8), the classifier is connected to the stop-gradient backbone of the encoder. Intuitively, in Eq. (8), by updating the encoder's parameters with \(\mathcal{L}_{u}\) on unlabeled data, the encoder can approach the optimal DA consistency predicted by the PMNN, and by adjusting the PMNN's parameters \(\mathbf{\theta}_{d}\) according to the encoder's current performance on labeled data, the PMNN can learn a more accurate mapping from augmentations to latent deviations. With a learning rate \(\eta_{e}\) and the PMNN's prediction of the optimal latent deviation, the update of the encoder can be rewritten as: \[\mathbf{\theta}_{e}^{\prime}=\mathbf{\theta}_{e}-\eta_{e}\nabla_{\mathbf{\theta}_{e}}\mathcal{L}_{u}(\mathbf{\theta}_{e},\mathbf{\theta}_{d}) \tag{9}\] With the current encoder, the update of the PMNN's parameters \(\theta_{d}\) can then be derived by applying the chain rule. 
We rewrite its update as: \[\mathbf{\theta}_{d}^{\prime}=\mathbf{\theta}_{d}-\eta_{d}\nabla_{\mathbf{\theta}_{d}}\text{CE}(\mathbf{x}_{t},y;\mathbf{\theta}_{e}(\mathbf{\theta}_{d})) \tag{10}\] The detailed derivation of the dependency of \(\mathbf{\theta}_{e}\) on \(\mathbf{\theta}_{d}\) and the gradient of \(\mathbf{\theta}_{d}\) are included in Appendix A. ### CoCor Framework Overview The proposed CoCor framework consists of two parameterized neural networks to be trained, i.e., the encoder and the PMNN. The encoder training is based purely on unlabeled data, while a small amount of labeled data is introduced for training the monotonic neural network. We utilize an alternating gradient descent algorithm that alternates between Eq. (9) and Eq. (10). We summarize the overall algorithm flow of CoCor in Algorithm 1.
```
Algorithm 1: Algorithm flow of CoCor
Input: initial parameters \(\mathbf{\theta}_{e}^{(0)}\) and \(\mathbf{\theta}_{d}^{(0)}\), number of training epochs \(N\).
for i = 1 to N do
  1. Call Eq. (9) to update the encoder's parameters \(\mathbf{\theta}_{e}^{(i)}\) on unlabeled data with the monotonic neural network parameters \(\mathbf{\theta}_{d}^{(i-1)}\) fixed.
  2. Call Eq. (10) to update the monotonic neural network parameters \(\mathbf{\theta}_{d}^{(i)}\) on labeled data with the encoder parameters \(\mathbf{\theta}_{e}^{(i)}\) fixed.
Output: trained encoder with parameters \(\mathbf{\theta}_{e}^{(N)}\)
```
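To make the alternating procedure concrete, the sketch below shows a PMNN and the encoder-side update of Eq. (9) in PyTorch-style code. All names are our own assumptions rather than the released CoCor implementation; in particular, monotonicity is enforced here by sign-constrained weights, which is one simple construction, and the PMNN update of Eq. (10) is omitted because it requires the meta-gradient through \(\mathbf{\theta}_{e}(\mathbf{\theta}_{d})\) derived in Appendix A.

```python
# Sketch of the PMNN (monotonically decreasing in each entry of V) and the
# encoder update of Eq. (9). Hypothetical names; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PMNN(nn.Module):
    """g(V): composition vector (Def. 4.2) -> predicted optimal latent deviation."""
    def __init__(self, k, hidden=64):
        super().__init__()
        self.w1 = nn.Parameter(0.1 * torch.randn(hidden, k))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(0.1 * torch.randn(1, hidden))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, v):
        # Non-positive first-layer weights and non-negative second-layer weights,
        # combined with increasing activations, make the output non-increasing in V.
        h = F.softplus(F.linear(v, -F.softplus(self.w1), self.b1))
        return torch.tanh(F.linear(h, F.softplus(self.w2), self.b2)).squeeze(-1)

def encoder_step(encoder, pmnn, x, sample_composite_aug, l_contrast, opt_e):
    """One update of Eq. (9): contrastive term plus consistency term (Eq. 7).
    `l_contrast` is the contrastive term, e.g. the queue-based loss sketched earlier."""
    aug, v = sample_composite_aug()                    # composite augmentation A and V(A)
    z_raw = F.normalize(encoder(x), dim=1)
    z_aug = F.normalize(encoder(aug(x)), dim=1)
    omega = (z_raw * z_aug).sum(dim=1)                 # latent deviation, Eq. (3)
    loss = l_contrast + (omega - pmnn(v).detach()).abs().mean()
    opt_e.zero_grad(); loss.backward(); opt_e.step()
    # The PMNN step (Eq. 10) would follow, back-propagating through theta_e(theta_d).
```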
## 5 Experiments We evaluate the proposed CoCor framework with different sets of experiments and benchmarks. We perform pretraining of the encoder using this proposed method and evaluate the pretrained encoder with several different downstream tasks. Its performance is compared with other unsupervised learning methods. ### Pretraining on ImageNet A ResNet-50 [12] is adopted as the backbone of the encoder [5; 11] for pretraining on ImageNet. 
We adopt a 128-dim projection head, which contains two layers with a 2048-dim hidden layer and ReLU. The projected features are \(\ell_{2}\) before their cosine similarity are calculated. An SGD [2] optimizer with initial \(\text{lr}=0.03\), weight decay \(=10^{-4}\), and momentum \(=0.9\) is used as the optimizer. The learning rate decay is completed by a cosine scheduler. The batch size is set to 256. When running on our cluster of 8 A100 GPUs, the actual size of the batch allocated to each GPU is 32. Following the setup of MOCO [11], we adopt Shuffling Batch Normalization to prevent the encoder from approaching trivial solutions. For the sake of fair comparison with SwAV [4], we adopt multi-crop as it has been shown effective in providing the encoder with more information. In unsupervised pretraining, we apply five different crops \((224\times 224,192\times 192,160\times 160,128\times 128,96\times 96)\). In contrastive loss, the data augmentation we utilize follows [6], which includes \(224\times 224\)_cropping_, _Gaussian Blur_, _Color Jittering_, _Grayscale_, and _HorizontalFlip_. While in consistency loss, the composite augmentation is formed by randomly sampling augmentations from a pool of size 14, which includes _Autocontrast_, _brightness_, _Color_, _Contrast_, _Equalize_, _Identity_, _Posterize_, _Rotate_, _Sharpness_, _ShearX_, _ShearY_, _TranslateX_, _TranslateY_, and _Solarize_. Our preliminary experiments show that composite augmentations of any intensity can help boost the performance. However, we observed that lower intensity composite augmentation has better effects on downstream tasks. Thus, we report the results by applying composite augmentation that contains 1 sample from the augmentation pool. ### Linear Evaluation on Imagenet We pretrained the encoder for 200 epochs. Then a fully connected classifier is trained on top of the average pooling features of the frozen pretrained backbone. The linear classifier is trained for 100 epochs, with a starting learning rate of 10.0, a cosine learning rate decay scheduler, and a batch size of 256. Tab. 2 reports the linear evaluation results on ImageNet of different methods with the ResNet-50 backbone and 200 epochs pretraining. Our method outperforms all of the other listed methods. The fact that our method outperforms SwAV, which adopts exactly the same multi-crop as ours, indicates that the way by which we introduce more diverse augmentations indeed provides the encoder with additional beneficial information. These information helps the encoder to better explore the representations space and achieve better performance in linear classification. ### Transferring Representations An essential characteristic of general and robust representations is their transferability. We hence evaluate the backbone pretrained on ImageNet by transferring to some other downstream tasks. 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline & & Classification & \multicolumn{2}{c}{Object Detection} \\ \cline{3-6} & & VOC07 & VOC07+12 & COCO & \\ Method & Epoch & mAP & \(AP_{50}\) & \(AP\) & \(AP_{S}\) \\ \hline Supervised & - & 87.5 & 81.3 & 40.8 & 20.1 \\ \hline MoCo [11] & 800 & 79.8 & 81.5 & - & - \\ PIRL [20] & 800 & 81.1 & 80.7 & - & - \\ PCL v2 [17] & 200 & 85.4 & 78.5 & - & - \\ SimCLR [5] & 800 & 86.4 & - & - & - \\ MoCo v2 [6] & 800 & 87.1 & 82.5 & 42.0 & 20.8 \\ SWAV [4] & 800 & 88.9 & 82.6 & **42.1** & 19.7 \\ \hline CoCor & **200** & **93.4** & **83.1** & 41.9 & **24.6** \\ \hline \hline \end{tabular} \end{table} Table 1: Transfer learning results on cross-dataset classification and object detection tasks. Firstly, we conduct the transfer learning experiments on VOC07 [10]. We freeze the pretrained ResNet-50 backbone, and fine-tune a linear classifier connected to it on VOC07trainval. The testing is completed on VOC07test. Secondly, we fine-tune the pretrained backbone for object detection. We adopt detectron2, which has been utilized by other works[11; 5; 14], for evaluating on VOC and COCO [18] datasets. For object detection in VOC, we fine-tune the network on VOC07+12 trainval set and test it on VOC07 test set. For COCO, we fine-tune the network on train2017 set (118k images) and test it on val2017 datatset. The results of transfer learning is reported in Tab. 1. The performance of our method with 200 epochs ImageNet pretraining is compared with other methods with 800 epochs pretraining. CoCor achieves outstanding results compared with other models' state-of-the-art performance. CoCor's good performance in transfer learning implies that this proposed method learns more robust and general representations than other listed methods. With only 200 epochs pretraining, CoCor outperforms other SOTAs with 800 epochs a lot on VOC in classification and object detection. In COCO, the CoCor achieved a result of 24.6% of \(APs\), which stands for the accuracy of small object detection. The performance on the well known difficult challenge in COCO has been notably improved by 3.8% from MoCo v2's performance of 20.8%. The diversity in views produced by various augmentations provides the encoder with the power of detecting and distinguishing these challenging small objects. ### Benefits of the Monotonic Neural Network We further conduct ablation study on CIFAR-10 to show the benefits of introducing the monotonic neural network. We experiment with single-crop situation, and pick for consistency loss the composite augmentation which contains two transformations from the pool. Firstly, we select the optimal latent deviation \(\Omega^{*}\) in consistency loss by repeatedly running experiments minimizing the overall loss function in Eq. (7) with different value of \(\Omega\). We then fix the pre-tuned optimal \(\Omega^{*}\) in further training, which means the variance caused by the transformations' type difference is neglected. Only the number of transformation sampled to form composite augmentation is considered to be able to influence the value \(\Omega\). The optimal result by fixing the constant \(\Omega^{*}\) is reported in Tab. 3. Then we introduce the monotonic neural network to adaptively learn \(\Omega\) w.r.t. the composition of the composite augmentation with a length of 2 and trained with the bi-level optimization framework in Algorithm 1. 
The encoders with same backbone structure, learning rate, optimizer, batch size, and learning rate scheduler is trained with/without the monotonic neural network. They are pretrained for 20 epochs on Cifar-10. Linear classifiers connected to the frozen pretrained backbone are then trained for 20 epochs to evaluate their performance. Tab. 3 reports the results of training with/without the monotonic neural network. The encoder trained with the monotonic neural network outperforms the one without it. With this neural network, the variance of transformations are taken into consideration. Meanwhile, the small amount of labeled data for updating the neural network can deliver information of label to the encoder under the bi-level optimization update. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Batch size & Top-1 acc \\ \hline InstDisc [29] & 256 & 58.5 \\ MoCo [11] & 256 & 60.8 \\ SimCLR [5] & 256 & 61.9 \\ CPC v2 [13] & 512 & 63.8 \\ PCL v2 [17] & 256 & 67.6 \\ MoCo v2 [16] & 256 & 67.5 \\ MoCHi [15] & 512 & 68.0 \\ PIC [3] & 512 & 67.6 \\ SwAV [4] & 256 & 72.7 \\ \hline CoCor & 256 & **72.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Linear evaluation results of different methods with ResNet-50 backbone over 200 epochs pretraining. \begin{table} \begin{tabular}{l c c} \hline \hline & without PMNN & with PMNN \\ \hline Top1 acc & 50.20 & 52.04 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of the partially monotonic neural network on CIFAR-10 dataset.
2303.16314
A multifractional option pricing formula
Fractional Brownian motion has become a standard tool to address long-range dependence in financial time series. However, a constant memory parameter is too restrictive to address different market conditions. Here we model the price fluctuations using a multifractional Brownian motion assuming that the Hurst exponent is a time-deterministic function. Through the multifractional Ito calculus, both the related transition density function and the analytical European Call option pricing formula are obtained. The empirical performance of the multifractional Black-Scholes model is tested by calibration of option market quotes for the SPX index and offers a better fit than its counterparts based on standard and fractional Brownian motions.
Axel A. Araneda
2023-03-28T21:25:38Z
http://arxiv.org/abs/2303.16314v2
# A multifractional option pricing formula ###### Abstract Fractional Brownian motion has become a standard tool to address long-range dependence in financial time series. However, a constant memory parameter is too restrictive to address different market conditions. Here we model the price fluctuations using a multifractional Brownian motion assuming that the Hurst exponent is a time-deterministic function. Through the multifractional Ito calculus, both the related transition density function and the analytical European Call option pricing formula are obtained. The empirical performance of the multifractional Black-Scholes models is tested and appears superior to its fractional and standard counterparts. _Keywords_: Multifractional Brownian motion, Hurst exponent, Long-range dependence, European option pricing. ## 1 Introduction Since the Black and Scholes seminal paper [1], diffusion processes driven by standard Brownian motions have been the cornerstone of financial engineering. However, the long-range dependence or long memory has been established as'stylized fact' in the analysis of financial time series [2, 3]. In order to address this issue, many approaches based on fractional Brownian motion (fBm) [4, 5, 6, 7] but also alternative process as the sub-fractional Brownian motion [8, 9, 10, 11] have been proposed. However, the assumption of a constant Holder regularity (Hurst exponent) in financial time-series seems to be too rigid to address some particularities of a market beyond tranquil periods, namely bull or bear markets, and both memory and memory-less can be present in the same financial data [12, 13]. Indeed, some scholars empirically state a time-varying behavior for the memory parameter [14, 15]. In terms of modelling, the mathematical compatible with this behavior is called multifractional Brownian motion (mBm) [16, 17]. This centered Gaussian process acts as a generalization of fBm in the sense that it allows to the Hurst exponent becomes a time-deterministic1 local quantity. The implications of using mBm as the driven process in price fluctuations are listed in ref. [20], and among them is the compatibility with Lo's adaptive market hypothesis [21] which dismiss the efficiency/inefficiency dichotomy arguing that the level of efficiency changes on time around the efficient state. Footnote 1: Some developments [18, 19] extend this local behavior to non-deterministic cases or stochastic processes. The literature offers some examples of the uses of mBm in option pricing. For instance, Wang [22] addresses the problem under transaction cost, where a discrete-time setting obtains the minimal value for a European Call by delta hedging arguments. In addition, Mattera and Sciorio [23] elaborated a numerical procedure to value a European Call option in a multifractional environment, considering an autoregressive behavior for the Hurst exponent. On the other hand, Corlay et al. [24] arise a multifractional version for both Hull & White and log-normal SABR stochastic volatility models with the aim to fit the shape of the smile at different maturities. Similarly, Ayache and Peng [25] discuss parameter estimation for the integrated volatility driven by mBm. Our insight here is slightly different and focused on the analytical results for option pricing in a continuous-time setting. 
First, we assume that the noise behavior of the asset dynamics can be modeled using an mBm with Hurst exponent described by a time-deterministic function, and second, borrowing some results from the stochastic calculus related to mBm, the respective effective Fokker-Planck equation is derived and solved, and consequently, the option pricing formula is addressed. The paper is organized as follows. First, we list some general properties and auxiliary results for the mBm, particularly the multifractional Ito lemma and the derivation of the related Fokker-Planck equation. Later, we deal with the pricing procedure, focusing on an analytic solution for the transition density and the proper pricing formula using the actuarial approach. Finally, the main conclusions are listed. ## 2 On the Multifractional Brownian motion **Covariance:** Let \(h:[0,\infty)\rightarrow[l,m]\subset(0,1)\) and \(W_{h(t)}\) a standard mBm; i.e., \(\mathrm{var}\left[W_{h(1)}\right]=1\). Then: \[\mathbb{E}\left[W_{h(t)}\cdot W_{h(s)}\right]=D\left(t,s\right)\left[t^{h(t)+h(s)}+s^{h(t)+h(s)}-\left|t-s\right|^{h(t)+h(s)}\right]\] where \[D\left(t,s\right)=\frac{\sqrt{\Gamma\left(2h(t)+1\right)\Gamma\left(2h(s)+1\right)\sin\left(\pi h(t)\right)\sin\left(\pi h(s)\right)}}{2\Gamma\left(h(t)+h(s)+1\right)\sin\left[\frac{\pi}{2}\left(h(t)+h(s)\right)\right]}\] It should be noted that this covariance structure exhibits long-range dependence [26]. Moreover: \[\mathbb{E}\left[\left(W_{h(t)}\right)^{2}\right]=t^{2h(t)}\] **Multifractional Ito lemma [27]:** Let \(F\in C^{2}\left(\mathbb{R}\right)\) and \(y_{t}\) a generic stochastic process driven by a multifractional Brownian motion: \[\mathrm{d}y_{t}=a\left(y_{t},t\right)\mathrm{d}t+b\left(t,y_{t}\right)\mathrm{d}W_{h(t)} \tag{1}\] Then, the following equality holds: \[\mathrm{d}F\left(t,y_{t}\right)=\frac{\partial F}{\partial t}\mathrm{d}t+\frac{\partial F}{\partial y_{t}}\mathrm{d}y_{t}+\frac{1}{2}\left\{\frac{\mathrm{d}}{\mathrm{d}t}\left[t^{2h(t)}\right]\right\}b^{2}\frac{\partial^{2}F}{\partial y_{t}^{2}}\mathrm{d}t=\left\{\frac{\partial F}{\partial t}+a\frac{\partial F}{\partial y_{t}}+b^{2}t^{2h(t)-1}\left[h^{\prime}\left(t\right)t\ln t+h\left(t\right)\right]\frac{\partial^{2}F}{\partial y_{t}^{2}}\right\}\mathrm{d}t+b\frac{\partial F}{\partial y_{t}}\mathrm{d}W_{h(t)} \tag{2}\] For a constant function \(h(t)=H\), the above theorem reduces to the fractional Ito formula addressed by Bender [28], while for the fixed value \(h(t)=H=1/2\), the standard Ito lemma is recovered. **Effective Fokker-Planck equation:** Let \(g\left(y_{t}\right)\) be a twice-differentiable scalar function and \(y_{t}\) the generic process (1). 
Using the multifractional Ito calculus and taking expectations, we get: \[\frac{\mathrm{d}\mathbb{E}\left(g\right)}{\mathrm{d}t}=\mathbb{E}\left(a\frac{\partial g}{\partial y}\right)+\mathbb{E}\left\{b^{2}t^{2h(t)-1}\left[h^{\prime}\left(t\right)t\ln t+h\left(t\right)\right]\frac{\partial^{2}g}{\partial y^{2}}\right\}\] Recalling the definition of expectations by means of the transition density function \(P\), and after some calculus, the effective Fokker-Planck equation related to the process (1) emerges: \[\frac{\partial P}{\partial t}=t^{2h(t)-1}\left[h^{\prime}\left(t\right)t\ln t+h\left(t\right)\right]\frac{\partial^{2}\left(b^{2}P\right)}{\partial y^{2}}-\frac{\partial\left(aP\right)}{\partial y} \tag{3}\] ## 3 The multifractional Black-Scholes model and its transition density function Under the real-world physical measure \(\mathbb{P}\), the geometric Brownian motion reads: \[\mathrm{d}S_{t}=\mu S_{t}\mathrm{d}t+\sigma S_{t}\,\mathrm{d}W_{t}\] where the constant values \(\mu\) and \(\sigma\) represent the yearly drift and volatility for the instantaneous return, and \(W_{t}\) is a standard Gauss-Wiener process. In order to address the time-varying long-range dependence discussed above, we replace \(W_{t}\) by a multifractional Brownian motion \(W_{h\left(t\right)}\): \[\mathrm{d}S_{t}=\mu S_{t}\mathrm{d}t+\sigma S_{t}\,\mathrm{d}W_{h\left(t\right)}\] The Holderian function of the mBm \(W_{h\left(t\right)}\); i.e., \(h\left(t\right)\), is assumed known and time-deterministic (see for example ref. [24] for the case of a time-dependent sinusoidal function). By the substitution \(x_{t}=\ln S_{t}-\mu t\), the multifractional Ito lemma (Eq. 2) leads to: \[\mathrm{d}x_{t}=-\sigma^{2}t^{2h\left(t\right)-1}\left[h^{\prime}\left(t\right)t\ln t+h\left(t\right)\right]\mathrm{d}t+\sigma\,\mathrm{d}W_{h\left(t\right)} \tag{4}\] According to the multifractional Fokker-Planck equation (Eq. 3), the transition density \(P=P\left(x_{t},t\right)\) related to the process (4) obeys: \[\frac{\partial P}{\partial t}=\sigma^{2}t^{2h\left(t\right)-1}\left[h^{\prime}\left(t\right)t\ln t+h\left(t\right)\right]\left[\frac{\partial P}{\partial x}+\frac{\partial^{2}P}{\partial x^{2}}\right] \tag{5}\] Using the time substitution: \[\bar{t}=\sigma^{2}t^{2h\left(t\right)}\] and defining the moving frame of reference: \[\bar{x}=x+\frac{\bar{t}}{2}\] Eq. (5) reduces to: \[\frac{\partial P}{\partial\bar{t}}=\frac{1}{2}\frac{\partial^{2}P}{\partial\bar{x}^{2}}\] The fundamental solution of the above equation (heat kernel with constant thermal diffusivity equal to \(1/2\)) is given by: \[P\left(\bar{x},\bar{t}\right)=\frac{1}{\sqrt{2\pi\bar{t}}}\exp\left[-\frac{\left(\bar{x}-\bar{x}_{0}\right)^{2}}{2\bar{t}}\right]\] where \(P\left(\bar{x},0\right)=\delta\left(\bar{x}-\bar{x}_{0}\right)\). The initial condition is given by knowing the state of the asset at the inception time; i.e., \(S\left(t=0\right)=S_{0}=\mathrm{e}^{x_{0}}=\mathrm{e}^{\bar{x}_{0}}\). 
Coming back to the variable \(x\) and the original time \(t\), the transition density is expressed as: \[P\left(x,t\right)=\frac{1}{\sqrt{2\pi\sigma^{2}t^{2h\left(t\right)}}}\exp\left[-\frac{\left(x-x_{0}+\frac{1}{2}\sigma^{2}t^{2h\left(t\right)}\right)^{2}}{2\sigma^{2}t^{2h\left(t\right)}}\right]\] From the previous result, we can compute the first moment of the asset price at a future time \(t=T\) conditional on its value at the inception \(t=0\): \[\mathbb{E}^{\mathbb{P}}\left(S_{T}\right)=\int_{0}^{\infty}S_{T}P\left(S_{T},T\right)\mathrm{d}S_{T}=\int_{-\infty}^{\infty}\mathrm{e}^{x_{T}+\mu T}P\left(x_{T},T\right)\mathrm{d}x_{T}=S_{0}\mathrm{e}^{\mu T} \tag{6}\] where no differences appear with respect to the expectation in the classical Black-Scholes world, while the second central moments differ2: Footnote 2: In the standard geometric Brownian motion, the variance of the price at time \(T\) is equal to \(S_{0}^{2}\mathrm{e}^{2\mu T}\left(\mathrm{e}^{\sigma^{2}T}-1\right)\). \[\mathbb{E}^{\mathbb{P}}\left[\left(S_{T}-\mathbb{E}^{\mathbb{P}}\left(S_{T}\right)\right)^{2}\right]=\int_{0}^{\infty}S_{T}^{2}P\left(S_{T},T\right)\mathrm{d}S_{T}-S_{0}^{2}\mathrm{e}^{2\mu T}=\int_{-\infty}^{\infty}\mathrm{e}^{2x_{T}+2\mu T}P\left(x_{T},T\right)\mathrm{d}x_{T}-S_{0}^{2}\mathrm{e}^{2\mu T}=S_{0}^{2}\mathrm{e}^{2\mu T}\left(\mathrm{e}^{\sigma^{2}T^{2h\left(T\right)}}-1\right) \tag{7}\] ## 4 The option pricing formula Under mBm diffusion there is no equivalent martingale measure, so risk-neutral pricing is not available [23]. However, we can apply the actuarial approach [29] in order to get a fair option pricing formula without the semi-martingale assumption. Let \(\mathrm{e}^{\mu T}=\frac{\mathbb{E}^{\mathbb{P}}\left(S_{T}\right)}{S_{0}}\) be the expected rate of return for the asset \(S\) at time \(T\) (see Eq. 6). 
By the actuarial approach, the fair premium for a vanilla European Call option with maturity \(T\) and exercise price \(K\) is given by [29]: \[C\left(K,T\right)=\mathbb{E}^{\mathbb{P}}\left[\left(\mathrm{e}^{-\mu T}S_{T}-\mathrm{e}^{-rT}K\right)^{+}\right]\] Since: \[\mathrm{e}^{-\mu T}S_{T}>\mathrm{e}^{-rT}K\Longleftrightarrow\mathrm{e}^{x_{T}}>\mathrm{e}^{-rT}K\Longleftrightarrow x_{T}>\ln K-rT\] we have: \[C\left(K,T\right)=\int_{\ln K-rT}^{\infty}\left(\mathrm{e}^{x_{T}}-\mathrm{e}^{-rT}K\right)P\left(x_{T},T\right)\mathrm{d}x_{T}=\int_{\ln K-rT}^{\infty}\mathrm{e}^{x_{T}}P\left(x_{T},T\right)\mathrm{d}x_{T}-K\mathrm{e}^{-rT}\int_{\ln K-rT}^{\infty}P\left(x_{T},T\right)\mathrm{d}x_{T} \tag{8}\] Given that \[\int_{\ln K-rT}^{\infty}\mathrm{e}^{x_{T}}P\left(x_{T},T\right)\mathrm{d}x_{T}=\frac{1}{\sqrt{2\pi\sigma^{2}T^{2h\left(T\right)}}}\int_{\ln K-rT}^{\infty}\mathrm{e}^{x_{T}}\exp\left[-\frac{\left(x_{T}-x_{0}+\frac{1}{2}\sigma^{2}T^{2h\left(T\right)}\right)^{2}}{2\sigma^{2}T^{2h\left(T\right)}}\right]\mathrm{d}x_{T}=\frac{\mathrm{e}^{x_{0}}}{\sqrt{2\pi\sigma^{2}T^{2h\left(T\right)}}}\int_{\ln K-rT}^{\infty}\exp\left[-\frac{\left(x_{T}-x_{0}-\frac{1}{2}\sigma^{2}T^{2h\left(T\right)}\right)^{2}}{2\sigma^{2}T^{2h\left(T\right)}}\right]\mathrm{d}x_{T}=\frac{\mathrm{e}^{x_{0}}}{\sqrt{2\pi}}\int_{-d_{1}}^{\infty}\mathrm{e}^{-\frac{v^{2}}{2}}\mathrm{d}v=\mathrm{e}^{x_{0}}N\left(d_{1}\right)\] and \[\int_{\ln K-rT}^{\infty}P\left(x_{T},T\right)\mathrm{d}x_{T}=\frac{1}{\sqrt{2\pi\sigma^{2}T^{2h\left(T\right)}}}\int_{\ln K-rT}^{\infty}\exp\left[-\frac{\left(x_{T}-x_{0}+\frac{1}{2}\sigma^{2}T^{2h\left(T\right)}\right)^{2}}{2\sigma^{2}T^{2h\left(T\right)}}\right]\mathrm{d}x_{T}=\frac{1}{\sqrt{2\pi}}\int_{-d_{2}}^{\infty}\mathrm{e}^{-\frac{w^{2}}{2}}\mathrm{d}w=N\left(d_{2}\right)\] where \(N\left(\cdot\right)\) stands for the standard normal cumulative distribution function and: \[d_{1}=\frac{x_{0}-\ln K+rT+\frac{1}{2}\sigma^{2}T^{2h\left(T\right)}}{\sqrt{\sigma^{2}T^{2h\left(T\right)}}}=\frac{\ln\left(S_{0}/K\right)+rT+\frac{1}{2}\sigma^{2}T^{2h\left(T\right)}}{\sqrt{\sigma^{2}T^{2h\left(T\right)}}}\] \[d_{2}=d_{1}-\sqrt{\sigma^{2}T^{2h\left(T\right)}}\] Consequently, after substituting the above computations into Eq. (8), we arrive at the price of a European Call under multifractional diffusion: \[C\left(K,T\right)=S_{0}N\left(d_{1}\right)-K\mathrm{e}^{-rT}N\left(d_{2}\right) \tag{9}\] The formula (9) is also a generalization of the previous approaches. If \(h(t)=H\) is fixed to some value in its domain, the above result is equivalent to the fractional Black-Scholes formula [4], while for \(h\left(t\right)=1/2\), the standard Black-Scholes premium is recovered. With respect to market quotes, the performance of the multifractional BS model is clearly superior to its fractional and classical counterparts. This can be observed by calibrating the model prices to the at-the-money market European Call values, by means of the minimization of the squared residuals. Fig. 1 shows a set of market quotes for SPX at-the-money call options (stock price=$3970.99, strike price=3970) written on March 24, 2023, considering maturities from the range of 1 day to 5 months (data retrieved from Yahoo! Finance). 
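For reference, the closed-form price in Eq. (9), which is the quantity fitted to these quotes, can be evaluated as in the sketch below. The code is our own illustration; the spot, strike and rate in the usage line follow the figures quoted above, while the maturity and volatility values are placeholders.

```python
# Sketch of the multifractional Black-Scholes call price, Eq. (9).
# h is any deterministic Hurst function of time with values in (0, 1).
import numpy as np
from scipy.stats import norm

def mbs_call(S0, K, T, r, sigma, h):
    """European call under mBm diffusion with time-deterministic Hurst function h(t)."""
    v = sigma**2 * T**(2.0 * h(T))            # effective variance sigma^2 * T^{2h(T)}
    d1 = (np.log(S0 / K) + r * T + 0.5 * v) / np.sqrt(v)
    d2 = d1 - np.sqrt(v)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# h(t) = 1/2 recovers the classical Black-Scholes price; a constant H gives the fractional one.
price = mbs_call(S0=3970.99, K=3970.0, T=30 / 252, r=0.045013, sigma=0.2, h=lambda t: 0.5)
```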
We use the 252 yearly days convention and set \(r\) equal to the 13-week T-Bill rate quoted on the inception time (4.5013%). For the multifractional Black-Scholes, as in ref. [24], we select a 6-week (\(\sim 30\) trading days) periodic sinusoidal function for the point-wise regularity, particularly, \(\tilde{h}(t)=A\cos\left[2\pi\left(\frac{252}{30}\right)t+B\right]+C\); where \(A\), \(B\), and \(C\), in addition to \(\sigma\), are parameters that should be estimated. The mean square errors of the market quotes compared with model prices are lower for the multifractional approach (456.8), followed by the fractional (493.7), and the standard BS (555.5). ## 5 Summary We have modeled the price fluctuation by means of a Geometric Brownian motion driven by a multifractional Brownian motion where the Hurst exponent is an exclusive function of time. Our main result here is the obtention of the analytical multifractional Black-Scholes formula by means of the multifractional Ito calculus, the related Fokker-Planck equation, and the actuarial approach to price option under the physical measure \(\mathbb{P}\). Since mBm is a generalization for both Bm and fBm, the classical and fractional Black Scholes option pricing are recovered. Empirical fits over SPX ATM European Call options show better performance using a time-varying Hurst exponent. The option pricing under different extensions of the multifractional Brownian motion is a field of further research. Figure 1: Performance of the pricing models
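A sketch of the calibration just described, reusing `mbs_call` from the previous snippet, could look as follows; the maturity grid, market prices and starting values are hypothetical placeholders rather than the quotes used above, and the constraint \(h(t)\in(0,1)\) is omitted for brevity.

```python
# Sketch of calibrating h(t) = A*cos(2*pi*(252/30)*t + B) + C and sigma to ATM call
# quotes by minimizing squared residuals. Data arrays below are placeholders.
import numpy as np
from scipy.optimize import minimize

S0, K, r = 3970.99, 3970.0, 0.045013
maturities = np.array([1, 5, 21, 63, 105]) / 252.0          # hypothetical maturities (years)
market_prices = np.array([12.0, 35.0, 80.0, 140.0, 185.0])  # hypothetical ATM call quotes

def h_sin(t, A, B, C):
    return A * np.cos(2 * np.pi * (252 / 30) * t + B) + C

def objective(params):
    A, B, C, sigma = params
    model = np.array([mbs_call(S0, K, T, r, sigma, lambda t: h_sin(t, A, B, C))
                      for T in maturities])
    return np.sum((model - market_prices) ** 2)             # sum of squared residuals

res = minimize(objective, x0=[0.05, 0.0, 0.5, 0.2], method="Nelder-Mead")
A, B, C, sigma = res.x
```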
2305.05780
Enhancing Gappy Speech Audio Signals with Generative Adversarial Networks
Gaps, dropouts and short clips of corrupted audio are a common problem and particularly annoying when they occur in speech. This paper uses machine learning to regenerate gaps of up to 320ms in an audio speech signal. Audio regeneration is translated into image regeneration by transforming audio into a Mel-spectrogram and using image in-painting to regenerate the gaps. The full Mel-spectrogram is then transferred back to audio using the Parallel-WaveGAN vocoder and integrated into the audio stream. Using a sample of 1300 spoken audio clips of between 1 and 10 seconds taken from the publicly-available LJSpeech dataset our results show regeneration of audio gaps in close to real time using GANs with a GPU equipped system. As expected, the smaller the gap in the audio, the better the quality of the filled gaps. On a gap of 240ms the average mean opinion score (MOS) for the best performing models was 3.737, on a scale of 1 (worst) to 5 (best) which is sufficient for a human to perceive as close to uninterrupted human speech.
Deniss Strods, Alan F. Smeaton
2023-05-09T21:58:54Z
http://arxiv.org/abs/2305.05780v1
# Enhancing Gappy Speech Audio Signals with Generative Adversarial Networks ###### Abstract Gaps, dropouts and short clips of corrupted audio are a common problem and particularly annoying when they occur in speech. This paper uses machine learning to regenerate gaps of up to 320ms in an audio speech signal. Audio regeneration is translated into image regeneration by transforming audio into a Mel-spectrogram and using image in-painting to regenerate the gaps. The full Mel-spectrogram is then transferred back to audio using the Parallel-WaveGAN vocoder and integrated into the audio stream. Using a sample of 1300 spoken audio clips of between 1 and 10 seconds taken from the publicly-available LJSpeech dataset our results show regeneration of audio gaps in close to real time using GANs with a GPU equipped system. As expected, the smaller the gap in the audio, the better the quality of the filled gaps. On a gap of 240ms the average mean opinion score (MOS) for the best performing models was 3.737, on a scale of 1 (worst) to 5 (best) which is sufficient for a human to perceive as close to uninterrupted human speech. Gappy audio, Mel-spectrograms, image inpainting, GANs ## I Introduction Spoken audio can suffer from dropouts, gaps and short clips of corrupted data when transmitted over networks, including cellular networks. This paper examines how generative adversarial networks (GANs), a form of machine learning, can enhance the quality of spoken audio by filling such gaps in real time. While there are classical machine learning approaches to enhance the quality of speech audio based on Principal Component Analysis or others that can clean an audio signal, there is no good approach for real-time gap-filling. Our approach is to transfer audio regeneration into image in-painting by converting gappy audio into Mel-spectrograms, similar to work presented in [22]. We examine data transmission packet loss conditions that produce gaps in audio varying from 40ms to 320ms, simulating a sequence of network packet losses of up to 8 packets. The next section reviews relevant research covering GAN applications and variant architectures, and speech enhancement in noisy domains. Following that we present our experimental setup and then our results followed by conclusions. 
There are several approaches to speech _enhancement_ including using principal component analysis, statistical model-based algorithms, spectral subtraction and Wiener filtering. Recently speech enhancement has been addressed using GANs [12, 16, 17], though in those works they train on either 462,880 utterances or 224,000 sentences, much greater than what is done here. GANs that work with audio and/or speech enhancement typically use Mel-spectrograms, an image representation of an audio signal as shown later in Figure 1. A Mel-spectrogram captures how humans perceive sound better on lower frequencies compared to higher, and the spectrogram is a visualisation of the frequency composition of a signal over time. Features of this can then be adjusted in order to improve the aesthetic or quality of the regenerated speech audio. The existence of generative deep learning architectures, such as GANs, allows us to address the problem of _gappy_ speech. A GAN's capability to generate from any complex data distribution suggests that a GAN may be trained to regenerate missing audio in real-time. The data required to train such a model in a real setting may be collected from a speaker's previous speech and a model trained to regenerate gappy audio signals for that speaker. As part of a protocol among speakers, speech models could be exchanged that would be used to enhance an incoming speech signal by resolving gaps in communication due to packet loss. ### _Generative Adversarial Networks (GANs)_ Generative Adversarial Networks (GANs) are an approach to generative modelling using deep learning first introduced by Goodfellow _et al._ in 2014 [7]. Generative models allow learning a distribution of data without the need for extensively annotated training data. Based on training data, GANs allow generating new data similar to its training set. GAN architecture is based on game theory, where back-propagation signals are derived through a competitive process. Two neural networks, a Generator \((G)\) and Discriminator \((D)\) compete with each other. \(G\) learns to model distributions of data by trying to deceive \(D\) to recognise the generated samples as real [8]. What is particularly useful is that GAN models can be trained to mimic any distribution of data, so there are many practical applications yet to be discovered [1]. The application of GANs was initially limited to image enhancement tasks like producing high-quality images, until about 2017 when the first GAN capable of facial image _generation_ was created. GANs attracted attention and now we see GANs used where synthetic data generation is required including natural language Processing [3], computer vision [2] and audio generation [4]. For some applications, it is difficult to train a GAN using the original GAN architecture as some generators do not learn the distribution of training data well enough and so the Deep Convolutional GAN (DCGAN) was proposed in 2015 [18]. In this architecture, instead of the fully connected multi-layer perceptron NNs, CNNs were used. The authors in [18] identified a sub-set of CNNs that were suitable for use in the GAN framework. To stabilise the training process, the generator used the ReLU activation function across the layers, except in the final layer, where the Tanh function was used. Some specific constraints on the model identified during the development of the DCGAN laid the foundation of many further GAN architectures based on DCGAN. 
These include the Conditional Generative Adversarial Network (cGAN) [15] which can include labelling, WaveGAN which is used for audio synthesis [4] and Parallel WaveGAN [20] also used in audio and which uses auxiliary input features in the form of the Mel-spectrogram. For the speech _enhancement_, the Speech Enhancement GAN (SEGAN) architecture was introduced in [16]. The generator network is used to perform enhancement of the signal, its input being the noisy signal and latent representation, and its output is the enhanced signal. The generator is structured in the same way as the auto-encoder. Encoding involves a number of strided convolutional layers followed by parametric rectified linear units (PReLUs), where the result of every N steps of the filter is a convolution. The discriminator plays the role of expert classifier and conveys if the distribution is real or fake and the generator adjusts the weights towards the realistic distribution. As GANs are well developed in the areas of image-to-image translation and image in-painting [9, 11, 19, 22], which are similar to gap regeneration tasks, we propose to transform an audio signal into a Mel-spectrogram and use an image inpainter to fill the image gap. Mel-spectrograms can then be in-painted and transferred back to audio via a neural vocoder, such as Parallel-WaveGAN [20]. We propose to train a model to regenerate the gap in the fixed position at the end of the Mel-spectrogram, which would make this problem simpler to tackle for a GAN as it would always know where to in-paint the image. ## III Experimental Setup ### _Dataset_ The public domain LJSpeech data-set [10] is used which consists of 13,100 single-speaker short audio clips where the speaker reads passages from 7 non-fiction books in English. The entire duration of the clips, which range in length from 1 to 10 seconds, is approximately 24 hours, and the dataset consists of 13,821 distinct words. For our experiments a random subset of 1,300 clips was used with 1,000 used for training and 300 for testing. Our reason for using a sub-set is to more closely represent a real world use case where less training data is available for a given voice requiring gap-filling in audio telephony and video conferencing. As mentioned earlier, related work such as [12, 16, 17] trains on either 462,880 utterances or 224,000 sentences. ### _Data Pre-Processing_ Before the original 22kHz audio clips were converted via short-time Fourier transform (STFT) into Mel-spectrograms [5], the signal was trimmed at the start and end to remove silence. Thereafter STFT was performed on the audio with a frame length of 1024 points (corresponding to 46ms) and a hop size of 256 points (11ms). STFT peaks were elevated by square function, to highlight voice pitch and then transformed to Mel scale using 80-channel characteristics. An additional parameter of Mel filterbank as frequency was set to include audio in the range from 80Hz to 7.6kHz. Mel-spectrograms were scaled to have an approximately constant energy per channel, followed by log10 dynamic range compression. They were normalised by subtracting the global mean (\(\mu\)) of the dataset then dividing by the standard deviation \((\sigma)\). As a final step, values were normalised to the range [-1,1]. In order to perform normalisation, statistics from the overall dataset were collected. The length of the clips was standardised to 256 frames in the time domain (corresponding to 2.8s). 
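A sketch of this pre-processing pipeline using librosa is shown below. The parameter values follow the description above, while the per-channel energy scaling and the final [-1, 1] normalisation are approximated, and the dataset statistics and function names are our own assumptions.

```python
# Sketch of the Mel-spectrogram pre-processing described above (librosa-based).
# mu and sd are dataset-level statistics assumed to be precomputed on the training set.
import numpy as np
import librosa

def to_mel(path, mu, sd, n_frames=256):
    y, sr = librosa.load(path, sr=22050)
    y, _ = librosa.effects.trim(y)                              # remove leading/trailing silence
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=256,
                                         n_mels=80, fmin=80, fmax=7600, power=2.0)
    mel = np.log10(np.maximum(mel, 1e-10))                      # log10 dynamic range compression
    mel = (mel - mu) / sd                                       # dataset-level standardisation
    mel = np.clip(mel / np.abs(mel).max(), -1.0, 1.0)           # approximate squashing to [-1, 1]
    mel = librosa.util.fix_length(mel, size=n_frames, axis=1)   # pad/crop to 256 frames (~2.8 s)
    return mel
```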
To mimic faulty communications typical of packet-based IP, Mel-spectrograms with audio gaps from 40ms to 320ms were created at the end of the Mel-spectrogram as the real-time nature of audio communication requires regeneration to be applied as quickly as possible. Thus a trailing audio signal following the gap is not available. The 40ms to 320ms gaps allow mimicing of packet loss of up to 8 packets in a row, with the assumption that audio compression captures 40ms of audio in one packet. Gaps longer than 320ms introduce a risk of generating words that were not said because typical word rate for fast speech is up to 160 words per minute [21] (375ms each) so this sets the upper target for our gap-filling. The complete dataset is formed from Mel-spectrogram pairs of source (Mel-spec with a gap) and target (ground truth) images. An example of a training pair is shown in Figure 1. The input to the model is the masked image and the model tries to generate a complete image similar to the ground truth. ### _Model and Loss Function_ The starting point for our in-painting was Pix2Pix GAN [9]. This was previously used on multiple image-to-image transition tasks and also used in similar work [22]. In that related work the authors studied the creation of a joint feature space based on synchronised audio and video where the video consisted of spectrograms from the audio. That work focused on in-painting of the spectrogram to re-generate noisy or corrupted audio though their experiments were on music audio rather than on speech, which is our focus here. To form a baseline for this work, a standard U-Net-based 5-layer generator presented in Figure 2 was used with L1 pixel-wise loss and input dimensions of 256x256. As part of the Pix2Pix architecture, the Patch-GAN discriminator was used for adversarial loss, which creates scalar adversarial loss and Mean squared error (MSE) comparison of small patches of an image that form a grid and produce scores from 0 to 1, where each piece is classified as real or fake. of the Parallel-WaveGAN vocoder was based on PyTorch, and weights were fetched from the public git repository at [https://github.com/kan-bayashi/ParallelWaveGAN](https://github.com/kan-bayashi/ParallelWaveGAN) Initial experimental models were trained for 40 epochs on the subset of 1,300 exemplars, with a fixed learning rate of \(1e-4\) and beta of 0.5 set in the Adam optimiser. The default gap size was set to 240ms corresponding to 6 network packets. The gap was not variative in order to objectively assess different model performances in the same setting though later we present experiments with variative gap sizes using the best performing model. ### _Evaluation Metrics_ We approach evaluation from the image aspect of the Mel-spectrograms and from the audio aspect of the reconstructed WAV audio. Three evaluation metrics are used. As a first measure, we compute the mean squared error (MSE) of the pixels of the reconstructed image vs. the target image. As a second metric, we measure the MSE of the VGG19 CNN feature extraction layers of the Mel-spectrograms and compare ground truth and generated data structures. We favour the VGG19 feature MSE metric over the L1 loss metric as it is more descriptive visually, except in Table IV, which presents results in full. 
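For reference, the baseline generator objective (L1 pixel loss plus a PatchGAN adversarial term) can be sketched roughly as below; the generator and discriminator modules are assumed to exist, and the loss weighting is illustrative rather than the value used in the experiments.

```python
# Rough sketch of the baseline Pix2Pix-style generator objective:
# L1 reconstruction of the target Mel-spectrogram plus an MSE adversarial term
# on the PatchGAN's grid of real/fake scores. Weighting is illustrative only.
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, masked_mel, target_mel, adv_weight=0.01):
    fake = generator(masked_mel)                       # in-painted Mel-spectrogram
    patch_scores = discriminator(masked_mel, fake)     # grid of patch scores in [0, 1]
    adv = F.mse_loss(patch_scores, torch.ones_like(patch_scores))  # push scores towards "real"
    l1 = F.l1_loss(fake, target_mel)                   # pixel-wise L1 reconstruction loss
    return l1 + adv_weight * adv
```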
Because an image comparison metric does not clearly indicate how close to realistically sounding audio the generated in-painting actually generates, a third metric measures the quality of generated audio using the Perceptual Evaluation of Speech Quality (PESQ) [13] which is calculated for each test model. PESQ is a widely used standard for automated assessment of speech in telecommunication systems. It takes 2 audio samples as input and produces a Mean Opinion Score (MOS) from 1 (worst) to 5 (best). ## IV Experimental Results The first results reported are related to data normalisation. Our first test runs on the U-net architecture indicate that without normalisation of the Mel-spectrograms, the model fails to learn valid patterns and fails to produce meaningful results. A set of normalisation techniques were applied that were described in Section III-B. Evaluation results for the models are summarised in Table I. The baseline approach of in-painting with normalised data shows the algorithm is capable of learning the structure of the Mel-spectrogram and in-painting missing pieces with an MOS score of 2.348. However, its performance does not give the required result for a real life application. We tried to match Mel-spectrogram dimensions to be closer to 256x80 pixels by changing the U-Net stride to 1 in the encoder-decoder connecting layers, however a rapid drop in map shrinking in the earlier layers caused a performance drop with MOS falling to 2.138. we identified that with minimal structural alteration the input size of 256x128 gave in-painting performance in line with the original 256x256. Thus all subsequent U-Net models had an input size of 256x128. Following recent in-painting approaches such as [11, 19], we identified that newer approaches use more sophisticated loss functions and that loss function alterations may boost performance. Enhancements to our loss function, specifically VGG19 feature match loss for the whole image in addition to L1 loss were then applied. The in-painted image became closer to real data distribution and our VGG19 feature match error decreased substantially from 6.056 to 2.896 and MOS increased from 2.348 up to 3.657. We also implemented an idea from [11] where additional loss of the in-painted area was applied to the overall error. Therefore, the in-painted area loss was added to concentrate the attention of the algorithm more specifically on the in-painted area. That decreased VGG19 loss further to 2.721 and increased MOS to 3.737. To understand whether the length of the Mel-spectrogram plays a role in predicting the masked segment, input size was reduced from 256px (2.8s) to 125px (1.4s) by cutting Mel-spectrograms in half thus reducing the complexity of the problem as well as computational cost. Training and testing found that reduction in data input significantly reduced the performance of the model. The VGG19 feature match score degraded from 6.056 (the baseline) to 9.962 indicating that the baseline algorithm used information from the whole of the Mel-spectrogram. Experiments were performed around increasing the dimensionality of the data, but as that would have added additional computational cost, it was out of scope. In addition to the U-Net generator, we conducted experiments with the GMCCN CNN architecture. The performance of GMCCN after our standard 40 training epochs was disappointing, with a VGG19 feature match loss of 3.402 and significant drop in MOS to 2.465. 
In addition, GMCCN has an increased computational cost, as the architecture includes 3 networks running in parallel, as shown earlier in Figure 2. To investigate the significance of the masked gap size, we conducted experiments based on the assumption that the algorithm would need to regenerate gaps from 40ms up to 320ms. Thus models were trained for different gap sizes. Results showed that reducing the gap size required less training time to achieve good performance, as seen in Figure 3. An interesting finding was that when Mel-spectrograms were generated with the 320ms gap model and with the 160ms gap model, the error of the 320ms gap model over the first 160ms of its in-painted region was the same as that of the 160ms model. This tells us that the models perform the same when compared over the same gap window. Our subsequent experiments were carried out on segments with gaps of 320ms. Results also showed that performance degrades linearly: the model regenerates Mel-spectrograms with good confidence at the start of the gap, but the further into the gap in the time domain, the lower the accuracy, as shown in Table II. We also experimented with training models on variable gap sizes. In the data-processing pipeline, random gap selection was performed in the range 40ms to 320ms. After training for the default 40 epochs, these models performed significantly worse than the fixed-gap models, with performance dropping by 70% compared to the fixed-size models. We then trained the model on the full dataset of 13,000 samples (increasing the step count from \(40\times 1000\) to \(40\times 13,000\)). The model still did not perform as well as those with fixed gap sizes: the VGG19 feature loss was 6.785, significantly higher than the 2.721 produced by the fixed-gap model, even though it was trained on a substantially larger dataset. A series of tests of the inference speed of both the Parallel-WaveGAN and the U-Net-based generator were performed to identify whether the model is usable in real time. The U-Net-based model generates an in-painted Mel-spectrogram in approximately 50ms on a GPU, in line with results presented in [6]. Parallel-WaveGAN converts a Mel-spectrogram to audio in 5ms on a GPU, in line with results presented in [20]. Finally, we examined the worst-performing and best-performing in-painted Mel-spectrograms, identified by VGG19 feature loss and MOS. A summary of the results is shown in Table III and Figure 4 shows some representative examples. A sample model output comparison may be seen in Table IV, along with the ground truth and the Mel-spectrogram used as its input. Using the Parallel-WaveGAN vocoder [20] and an enhanced U-Net generator with a more advanced loss function similar to those in [11, 19], we generated audio fragments that are structurally similar to the real distribution, with a MOS from 3.214 for gaps of 320ms up to a MOS of 4.514 for gaps of 40ms. The total time taken to regenerate a gap is approximately 105ms on a GPU, an acceptable performance for real-time communications. For larger regenerated gaps our model is capable of almost exactly regenerating the missing area in the Mel-spectrogram. The model uses information from all of the Mel-spectrogram, as reducing the size of the input Mel-spectrogram leads to a large drop in performance. 
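For illustration, the trailing-gap masking and the variable-gap sampling described above might look as follows; the sample rate, hop length and mask value are assumptions on our part, not values taken from the paper.

```python
import numpy as np

def mask_trailing_gap(mel, gap_ms, sr=22050, hop_length=256, mask_value=0.0):
    """Zero out the last `gap_ms` milliseconds of a Mel-spectrogram.

    `mel` has shape (n_mels, n_frames); frames are spaced hop_length/sr seconds
    apart. Returns the masked copy and the number of masked frames.
    """
    frames_per_ms = sr / hop_length / 1000.0
    gap_frames = max(1, int(round(gap_ms * frames_per_ms)))
    masked = mel.copy()
    masked[:, -gap_frames:] = mask_value
    return masked, gap_frames

# Variable-gap training pairs: draw a gap uniformly from 40 ms to 320 ms
# in 40 ms steps (one to eight lost packets).
rng = np.random.default_rng(0)
gap_ms = 40 * rng.integers(1, 9)
mel = rng.random((80, 256)).astype(np.float32)   # stand-in for a real Mel-spectrogram
source, n_frames = mask_trailing_gap(mel, gap_ms)
```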
We found that fixed-gap-size models are capable of learning distributions from smaller datasets, as the complexity of the problem is reduced, and that the most efficient way to address variable gap sizes is to train a model capable of filling large gaps and use it for all gap sizes. The performance of such an approach is similar to that of models trained on smaller gap sizes. We conclude that it is possible to use our in-painter-vocoder pipeline to regenerate audio gaps in real time on systems equipped with a GPU and that the result can be perceived by humans as good quality. Further work should identify whether there are reduced-size models similar to SD-UNET [6] that could perform well enough on CPU-only systems.
2310.10885
Calibrating the role of entanglement in variational quantum circuits
Entanglement is a key property of quantum computing that separates it from its classical counterpart, however, its exact role in the performance of quantum algorithms, especially variational quantum algorithms, is not well understood. In this work, we utilise tensor network methods to systematically probe the role of entanglement in the working of two variational quantum algorithms, the Quantum Approximate Optimisation Algorithm (QAOA) and Quantum Neural Networks (QNNs), on prototypical problems under controlled entanglement environments. We find that for the MAX-CUT problem solved using QAOA, the fidelity as a function of entanglement is highly dependent on the number of layers, layout of edges in the graph, and edge density, generally exhibiting that a high number of layers indicates a higher resilience to truncation of entanglement. This is in contrast to previous studies based on no more than four QAOA layers which show that the fidelity of QAOA follows a scaling law with respect to the entanglement per qubit of the system. Contrarily, in the case of QNNs, trained circuits with high test accuracies are underpinned by higher entanglement, with any enforced limitation in entanglement resulting in a sharp decline in test accuracy. This is corroborated by the entanglement entropy of these circuits which is consistently high suggesting that, unlike QAOA, QNNs may require quantum devices capable of generating highly entangled states. Overall our work provides a deeper understanding of the role of entanglement in the working of variational quantum algorithms which may help to implement these algorithms on NISQ-era quantum hardware in a way that maximises their accuracies.
Azar C. Nakhl, Thomas Quella, Muhammad Usman
2023-10-16T23:36:40Z
http://arxiv.org/abs/2310.10885v2
# Calibrating the role of entanglement in variational quantum circuits ###### Abstract Entanglement is a key property of quantum computing that separates it from its classical counterpart, however, its exact role in the performance of quantum algorithms, especially variational quantum algorithms, is not well understood. In this work, we utilise tensor network methods to systematically probe the role of entanglement in the working of two variational quantum algorithms, the Quantum Approximate Optimisation Algorithm (QAOA) and Quantum Neural Networks (QNNs), on prototypical problems under controlled entanglement environments. We find that for the MAX-CUT problem solved using QAOA, the fidelity as a function of entanglement is highly dependent on the number of layers, layout of edges in the graph, and edge density, generally exhibiting that a high number of layers indicates a higher resilience to truncation of entanglement. This is in contrast to previous studies based on no more than four QAOA layers which show that the fidelity of QAOA follows a scaling law with respect to the entanglement per qubit of the system. Contrarily, in the case of QNNs, trained circuits with high test accuracies are underpinned by higher entanglement, with any enforced limitation in entanglement resulting in a sharp decline in test accuracy. This is corroborated by the entanglement entropy of these circuits which is consistently high suggesting that, unlike QAOA, QNNs may require quantum devices capable of generating highly entangled states. Overall our work provides a deeper understanding of the role of entanglement in the working of variational quantum algorithms which may help to implement these algorithms on NISQ-era quantum hardware in a way that maximises their accuracies. ## I Introduction The development and benchmarking of quantum algorithms have attracted significant attention in recent years because their in-principle capability to solve classically intractable problems[1] when coupled with advances in quantum hardware is anticipated to provide a quantum advantage in a range of real-world applications[2]. Among a variety of quantum algorithm classes that are being developed, one particular class known as Variational Quantum Algorithms (VQA) has been the subject of intense research due to the possibility of combining the manipulation of quantum systems with classical optimisation techniques[3], allowing one to take advantage of the high dimensionality of the state space of quantum systems along with sophisticated optimisation algorithms. In particular, these quantum algorithms with classically optimised variational parameters have found applications in chemistry[4; 5; 6], finance[7; 8], operations research[9; 10; 11; 12] and more recently in quantum machine learning models[13; 14; 15; 16; 17; 18; 19; 20; 21]. However, unlike conventional quantum algorithms, such as those for dictionary search[22], or semi-prime number factorisation[23], VQAs rely on heuristic approaches. Importantly this means that even without quantum hardware noise there is no guarantee of these algorithms' performance on any given instance. As a result, there is a significant amount of research on the performance analysis of these algorithms[24; 25; 26; 27; 28]. Entanglement is one of the key metrics that may be used to analyse the performance of VQAs, but as of yet there has only been limited investigation of the role of entanglement with regard to these algorithms[29; 30; 31; 32]. 
Our work aims to fill this knowledge gap by providing a systematic and comprehensive calibration of the role of entanglement in the functionality of two widely used variational quantum algorithms: (1) the Quantum Approximate Optimisation Algorithm (QAOA)[33] used for combinatorial optimisation tasks and; (2) Quantum Neural Networks (QNNs)[14] used in a variety of machine learning applications including in image classification[14; 16; 17], drug response prognosis[20] and breast cancer prediction[18; 19; 21], among others. This work will use Matrix Product State[34; 35] (MPS) simulations to analyse how these two algorithms perform when their entanglement is restricted. We will also track the entanglement entropy of these algorithms as the system size and depth of the quantum circuit increase. For QAOA, in the context of the paradigmatic combinatorial optimisation problem MAX-CUT, it has been shown that there is a volume law entanglement barrier between the initial and final state of the algorithm[30], suggesting that tensor network simulation methods[34; 35], whose memory and computational resource requirements scale exponentially with the entanglement of the quantum system are not able to simulate QAOA efficiently. This is corroborated by Ref. [29] which shows that for complete graphs with random edge weights and 3-regular graphs with random edges, the fidelity of QAOA with few number of layers follows a scaling law with respect to the entanglement per qubit such that the fidelity is the same regardless of the size of the system. It remains open as to whether such a scaling law holds for circuits with greater than four QAOA layers or different classes of graphs, such as grid graphs or \(k\)-regular graphs (where \(k\neq 3\)), a question that we will address in this work. Similar studies of the fidelity with respect to entanglement per qubit have also been conducted for Grover's Algorithm, the Quantum Fourier Transform and the quantum counting algorithm[36]. A second class of VQAs that has attracted significant attention in recent years are Quantum Neural Networks (QNNs) used to perform machine learning tasks. In general, Quantum Machine Learning (QML) models can range from quantum subroutines of an overall classical process[14; 15; 16; 17; 18; 19; 20; 21] to completely quantum analogues of established machine learning models[37; 38]. Contrary to other VQAs however there has only been limited investigation as to how entanglement grows within these QML models. For example, in the case of parameter optimisation-based methods such as QNNs, it is recognised that entanglement-induced barren plateaus will arise for models that satisfy volume-law growth in their entanglement entropy[39]. Additionally, there has also been investigation of the entanglement entropy of QNN architectures with random parameters for up to 50 qubits[31]. However, we are not aware of any previous work which has studied the entanglement of trained QNNs. This will be the second focus of our work. Tensor Networks provide an efficient method to quantitatively assess and control the entanglement of a quantum state. In particular, MPS have found significant application in the simulation of quantum algorithms on classical computers in general[40; 41], and specifically for the study of entanglement in quantum algorithms [36; 29; 31]. MPS are characterised by one freely adjustable parameter, their bond dimension, which translates into an upper bound on the entanglement entropy of the quantum system. 
Moreover, the entanglement entropy can be efficiently determined from an MPS[42]. The bond dimension \(\chi\) of an MPS is the primary driver of memory utilisation, scaling as \(O(N\chi^{2})\) for a system with \(N\) sites. As a result, states of high entanglement are difficult to represent, whereas systems with relatively low entanglement may be represented in an efficient manner. This may be compared to state-vector simulations, which generally have a memory requirement of \(O(2^{N})\)[1]. Unlike state-vector simulation methods, quantum states represented as MPS may be approximated in such a way that their bond dimension is reduced systematically, providing a way to control the entanglement of the state at the cost of fidelity. This, along with the ability to perform local operations on the state locally in memory, is among the key benefits of using MPS to study quantum systems and quantum algorithms. 

Figure 1: The various QNN circuits considered in this work, with labels above. Each block may be repeated \(p\) times with new parameter vectors \(\mathbf{\theta}_{i}^{j}\), each taking three real values. For an \(n\)-qubit system, each layer has \(n\) \(U(\theta)\) arbitrary one-qubit gates as defined in Ref. [1]. The number of entangling gates per layer is a) \(n(n-1)/2\), b) \(n\), c) \(n+1\), d)-e) \(n\). All circuits have the first and last layers shown, except for the full circuit, for brevity. The different circuits represent a different entanglement structure, except for the Linear and Full circuits which can be shown to be identical as per Ref. [31]. 

In this work, we study the properties of QAOA when applied to the solution of the MAX-CUT problem on 3-regular, 4-regular, complete and grid graphs for a varying number of QAOA layers. We demonstrate that the entanglement of the resulting QAOA circuits is highly dependent on both the type of graph and the number of QAOA layers. As a result, we find that previously proposed scaling laws for 3-regular and complete graphs[29] do not hold true for larger-depth QAOA circuits or graphs with a structured edge layout such as the grid graph. This observation has implications for the use of QAOA to solve tasks such as the vehicle routing problem[11], which has an underlying graph that is low density and regular, and some spin-glass models[43], where the underlying graph is a grid graph. We find that the entanglement per qubit required to achieve high fidelity for such types of underlying graphs is low. This is contrasted with tasks such as aircraft tail assignment[12], which have underlying graphs of high density, where we conclude that the entanglement per qubit required to achieve high fidelity is relatively high. Importantly, our findings demonstrate that the entanglement scaling of QAOA on average is not indicative of its scaling for specific tasks, and furthermore that the entanglement at low depth, where one is unlikely to find an appropriate solution, is fundamentally different to that at high depth, where our work suggests that the entanglement per qubit required to achieve high fidelity is significantly reduced. We then consider QNNs, which are a type of VQA with a circuit structure that is substantially different from that of QAOA. We show that various QNN ansatze, each corresponding to a different entangling gate layout as per FIG. 1, have an entanglement entropy that depends on the dataset that is considered. 
In general, we find that the QNNs are highly entangled, with simpler datasets having a relatively lower entropy of entanglement throughout the QNN. We show that the test accuracy scaling as a function of the entanglement per qubit is strongly correlated with the accuracy of the training, with poorly trained circuits being more resilient to a truncation in the entanglement of the circuit. This indicates a high degree of entanglement present in the trained circuits, suggesting that a pathway to demonstrating quantum advantage for QNNs may be available. ## II Methods In this section, the theory underpinning the MPS representation of quantum systems will be introduced, focusing on how its key characteristic, the bond dimension is related to a quantum state and how it can be used as a metric for entanglement. The bond dimension and control thereof will be central to our analysis of QAOA and QNNs. QAOA will also be introduced, along with the specific problems that will be considered, namely MAX-CUT on varying types of regular, complete and grid graphs. Lastly QNNs, and specifically their application to image classification will be considered, where different types of circuits will be introduced, along with different datasets on which our circuits will be trained. QAOA and QNNs have a wide variety of applications and encapsulate two of the most common circuit architectures for VQAs, where for QAOA the circuit structure is necessarily one that is dependent on the problem instance, whereas for QNNs the circuit structure may be generic or tailored to the quantum device which the algorithm will be executed on. ### Matrix Product States MPS are a tensor network representation of one-dimensional many-body quantum states. We will assume that the total system consists of \(N\) individual \(d\)-level subsystems whose Hilbert space is spanned by orthonormal vectors \(|i_{s}\rangle\) with \(i_{s}=0,\ldots,d-1\) and \(s=1,\ldots,N\). For qubits, the relevant unit in quantum computing applications, one has \(d=2\). Each many-body quantum state can then be represented in the form, \[|\psi\rangle=\sum_{i_{1},i_{2},\ldots,i_{N}}A^{(1)}_{i_{1}}A^{(2)}_{i_{2}} \ldots A^{(N)}_{i_{N}}|i_{1}i_{2}\ldots i_{N}\rangle, \tag{1}\] where the \(A^{(s)}_{i_{s}}\) are suitably sized matrices that encode its entanglement. It is custom to choose these matrices to be square matrices of \(s\)-independent size \(\chi\times\chi\), where \(\chi\) is referred to as the bond dimension. Only the matrices at the two ends of the matrix product are \(1\times\chi\) and \(\chi\times 1\) matrices, respectively. This ensures that the matrix product evaluates to a complex number in Eq. (1). Alternatively, for fixed \(s\) the set of matrices \(A^{(s)}_{i_{s}}\) can be viewed as a rank-3 tensor, with one physical index \(i_{s}\) and two auxiliary indices corresponding to the matrix rows and columns. 
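To illustrate Eq. (1) concretely, a state vector can be brought into MPS form by repeated reshaping and singular value decompositions, optionally truncating each bond; the following minimal numpy sketch is illustrative only and is not the simulator used in this work.

```python
import numpy as np

def state_to_mps(psi, n_qubits, max_bond=None):
    """Decompose a state vector into MPS site tensors via successive SVDs.

    Optionally truncate each bond to `max_bond` Schmidt values (the
    entanglement-limiting step discussed in the text). Returns the list of
    site tensors and the bond dimensions encountered.
    """
    tensors, bond_dims = [], []
    remainder = psi.reshape(1, -1)
    for site in range(n_qubits - 1):
        chi_left = remainder.shape[0]
        # Split off the physical index of the current site and factorise.
        mat = remainder.reshape(chi_left * 2, -1)
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        if max_bond is not None:
            u, s, vh = u[:, :max_bond], s[:max_bond], vh[:max_bond]
        tensors.append(u.reshape(chi_left, 2, -1))
        bond_dims.append(len(s))
        remainder = np.diag(s) @ vh
    tensors.append(remainder.reshape(remainder.shape[0], 2, 1))
    return tensors, bond_dims

# A random 6-qubit state generically needs bond dimensions up to 2**3 = 8.
psi = np.random.default_rng(1).normal(size=2**6) + 0j
psi /= np.linalg.norm(psi)
_, dims = state_to_mps(psi, 6)
print(dims)   # e.g. [2, 4, 8, 4, 2]
```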
Using Penrose graphical notation[42], MPS are then represented diagrammatically as a chain of rank-3 site tensors contracted along their auxiliary (bond) indices, with one open physical index per site [MPS tensor-network diagram not recoverable from the source]. The degree of entanglement available to a truncated simulation is quantified by an entanglement-per-qubit metric derived from the maximum bond dimension \(\chi\). This metric is the same as that used in Ref. [29], rescaled such that it is in the interval \([0,1]\). This is done so that the value measured by this metric is more intuitively understood without the need to consider the underlying MPS representation. To quantify the performance of truncated MPS simulations, we define a quantity that we call the simulation fidelity, which is identical to the "fidelity" metric used in Ref. [29], as \[\text{Simulation Fidelity}=\frac{\text{Cost}(\chi)}{\text{Cost}(2^{N/2})}. \tag{5}\] We call the metric the simulation fidelity to avoid confusion with the conventional definition of fidelity used for quantum states. \(\text{Cost}(\chi)\) is the cost function associated with the VQA evaluated using an MPS circuit simulation which has a maximum bond dimension of \(\chi\). This will be explicitly defined for the VQAs where this metric is used in the sections below. A further discussion on the simulation fidelity, particularly as it pertains to this work, can be found in Appendix A. 

### Quantum Approximate Optimisation Algorithm 

QAOA prepares a quantum circuit as a sequence of time-evolution operators of some problem Hamiltonian \(H_{P}\), representing some combinatorial optimisation problem, and a mixer Hamiltonian \(H_{M}\)[33]. For an \(n\)-qubit system, a \(p\)-layered QAOA takes the following form, \[\ket{\psi(\alpha_{i},\beta_{i})}=\prod_{k=1}^{p}(e^{-i\alpha_{k}H_{P}}e^{-i\beta_{k}H_{M}})\ket{+}^{\otimes n}, \tag{6}\] where the product is ordered in the following way, \(\prod_{k=1}^{p}A_{k}=A_{p}A_{p-1}\dots A_{1}\). The initial state is defined by \(\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\), and the mixer Hamiltonian is defined as \(H_{M}=\sum_{i=1}^{n}X_{i}\), where \(X\) is the Pauli \(X\) gate. Note that the problem Hamiltonian \(H_{P}\) is assumed to be diagonal in the computational basis, and is, for the purposes of this work, at most quadratic. As such the state described by Equation (6) may be prepared in a straightforward manner using standard one- and two-qubit operators. 
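A minimal sketch of preparing the state in Equation (6) for a MAX-CUT instance, using standard Qiskit gates, is shown below; the graph, the angles and the RZZ/RX decomposition of the two evolution operators are our own illustrative choices, not the authors' code.

```python
import networkx as nx
from qiskit import QuantumCircuit

def maxcut_qaoa_circuit(graph, alphas, betas):
    """Build |psi(alpha_i, beta_i)> of Eq. (6) for H_P = sum_ij w_ij Z_i Z_j.

    `graph` is a weighted networkx graph; `alphas`/`betas` are the p problem
    and mixer angles. exp(-i*alpha*w*Z_i Z_j) is an RZZ(2*alpha*w) rotation and
    exp(-i*beta*X_i) is an RX(2*beta) rotation.
    """
    n = graph.number_of_nodes()
    qc = QuantumCircuit(n)
    qc.h(range(n))                      # |+>^{otimes n} initial state
    for alpha, beta in zip(alphas, betas):
        for i, j, data in graph.edges(data=True):
            qc.rzz(2 * alpha * data.get("weight", 1.0), i, j)
        for q in range(n):
            qc.rx(2 * beta, q)
    return qc

# Example: one QAOA layer on a 3-regular graph with 8 nodes.
g = nx.random_regular_graph(3, 8, seed=7)
circuit = maxcut_qaoa_circuit(g, alphas=[0.4], betas=[0.7])
```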
The parameters \(\alpha_{i}\), \(\beta_{i}\) in Equation (6) may then be optimised using the following cost function, \[\text{Cost}(\alpha_{i},\beta_{i},\chi)=\bra{\psi_{\chi}(\alpha_{i},\beta_{i})}H_{P}\ket{\psi_{\chi}(\alpha_{i},\beta_{i})}, \tag{7}\] where \(\ket{\psi_{\chi}}\) indicates an approximation of the state \(\ket{\psi}\) with bond dimension of at most \(\chi\), performed using truncation as per Section II.1. One of the most common combinatorial optimisation tasks studied using QAOA is the MAX-CUT problem[49, 50, 51, 27, 52], which seeks to partition a graph \(G\) with vertices \(V\) and edges \(E\) into two non-overlapping sets with the maximal number of edges between the two sets. The MAX-CUT problem can be represented by the following problem Hamiltonian[52], \[H_{P}=\sum_{(i,j)\in E}w_{ij}Z_{i}Z_{j}, \tag{8}\] where \(w_{ij}\) is the weight of the edge between nodes \(i\) and \(j\), and \(Z\) is the Pauli \(Z\) gate. This Hamiltonian may be solved using QAOA in a straightforward manner[33]. It has been established that a number of common optimisation problems may be formulated in terms of Hamiltonians of the form in Equation (8) on specific graphs[52]. For example, spin-glass models such as the Sherrington-Kirkpatrick model[53] and Edwards-Anderson model[43] can be reduced to MAX-CUT problems on complete and grid graphs respectively, both with edge weights \(w_{ij}\in\{1,-1\}\). Additionally, MAX-CUT has applications in circuit design[54] where the underlying graph is planar, as well as machine scheduling[55] where the underlying graph is complete. In this work, complete graphs with random uniform weights \(w_{ij}\in[0,1]\) and \(3\)-regular graphs with unit weights, i.e. graphs with nodes each of degree three, will initially be considered, similar to Ref. [29]. Examples of such graphs can be seen in FIG. 2(a-b). We will be simulating these systems up to \(p=6\) QAOA layers for all graphs, which is greater than the \(p\leq 2\) and \(p\leq 4\) simulations undertaken for complete and \(3\)-regular graphs respectively in previous works[29]. In doing so we seek to determine whether higher-depth QAOA has similar characteristics to low-depth QAOA. Such a distinction is important as in general one requires higher-depth circuits in order to determine the solution state to high accuracy[56]. The accuracy of QAOA may be measured using a common metric known as the approximation ratio which is introduced in Appendix B. This metric is used to track the performance of the QAOA simulations for up to six layers for the types of graph considered in the main body of this work in Appendix C. Additionally, grid graph simulations will be performed, as unlike random regular graphs this class has a structured edge layout and is planar, making it similar to the underlying graphs of the common applications of MAX-CUT introduced above. Firstly, in order to draw comparisons between grid graphs and other graphs of the same uniform degree, \(4\)-regular graphs with random uniform edge weights will also be simulated as per FIG. 2(c). Then, grids with two, three and four columns and random uniform edge weights will be considered as per FIG. 2(d-f). Simulation of all types of graphs considered in this work will be undertaken for \(1,\dots,6\) layers. 
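Connecting Equations (5), (7) and (8), the sketch below estimates the MAX-CUT cost from sampled bitstrings and forms the simulation-fidelity ratio between a truncated and an untruncated run; the bit-ordering convention and the stand-in measurement counts are assumptions for illustration.

```python
import networkx as nx

def maxcut_cost(counts, graph):
    """<H_P> of Eq. (8) estimated from measurement counts.

    `counts` maps bitstrings (qubit 0 leftmost, an assumed convention) to shot
    counts; each Z_i Z_j term contributes +w_ij if the bits agree, -w_ij otherwise.
    """
    shots = sum(counts.values())
    total = 0.0
    for bits, n in counts.items():
        e = 0.0
        for i, j, data in graph.edges(data=True):
            w = data.get("weight", 1.0)
            e += w if bits[i] == bits[j] else -w
        total += n * e
    return total / shots

def simulation_fidelity(counts_truncated, counts_exact, graph):
    """Eq. (5): Cost(chi) / Cost(2^{N/2})."""
    return maxcut_cost(counts_truncated, graph) / maxcut_cost(counts_exact, graph)

g = nx.complete_graph(4)
exact = {"0101": 600, "1010": 400}                   # stand-in measurement outcomes
trunc = {"0101": 500, "1010": 400, "0000": 100}
print(simulation_fidelity(trunc, exact, g))
```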
Another component of our study is to systematically explore the intermediate regime between grid graphs of two rows and complete graphs at two QAOA layers, by starting with a grid graph and growing it by adding an edge, with a randomly assigned edge weight, between two nodes chosen at random that are otherwise not connected. This is repeated until the graph is complete. This process can be visualised in FIG. 2(g). We anticipate that the characteristics of the simulations, that is the entanglement entropy and simulation fidelity, will be dependent on the fundamental properties of the graph, that is the number of nodes, the number of edges and the layout of the edges. A metric that encapsulates the number of edges relative to the number of nodes is called the edge density and is defined on a graph \(G=(V,E)\) as[57], \[\text{Density}=\frac{2|E|}{|V|(|V|-1)}. \tag{9}\] The density will be used as a point of comparison between different graphs when analysing how QAOA performs on these instances. For all optimisations the SLSQP optimiser[58] is utilised to optimise the QAOA parameters \(\alpha_{i}\), \(\beta_{i}\) with the following hyperparameters: maxiter=500, ftol=1e-13, tol=1e-13. The optimisation is repeated \(10^{3}\) times with different initial \(\alpha_{i}\), \(\beta_{i}\) selected uniformly at random from \([-\pi,\pi]\). IBM's Qiskit **aer** simulator is used to find an initial set of optimal parameters, feeding this result into the quimb MPS simulator for the restricted entanglement simulations. Truncation of singular values to adhere to bond dimensions \(\chi<2^{N/2}\) is performed after each QAOA layer. For each type of graph defined, an average of over 100 random instances is taken. 

### Quantum Machine Learning 

QNNs[59, 31], sometimes referred to as QVCs[16, 17], are machine learning models characterised by multiple layers of parameterised unitary transformations. They have found significant applications over recent years in image classification[14, 17], drug response predictions[20], and breast cancer prediction[21], amongst other tasks[59, 60, 61]. Unlike QAOA, QNNs have a flexible circuit structure, allowing circuits to be structured in such a way that they may be efficiently executed on a quantum device[62]. In that sense they are similar to the Variational Quantum Eigensolver (VQE)[4], with the primary distinguishing feature being that QNNs must first receive an input state which they will attempt to classify, as opposed to VQE and QAOA which simply optimise a given ansatz according to some cost function which completely defines the problem. One may also consider circuit ansatze which have a specified entanglement layout as defined by their two-qubit operators, which is what will be considered in this work. In particular, linear (both with and without cyclic boundary conditions), complete, single-qubit-control and alternating-control layouts will be considered, each of which differs in its structure of CNOT gates. The circuits are shown in FIG. 1. All ansatze have arbitrary parameterised single-qubit rotations on all qubits. These single-qubit rotations and entangling CNOT gates form a block that constitutes a single layer of the QNN, which may then be repeated many times. The total number of gates for all the ansatze is primarily driven by the CNOT gates, with all but the full circuit ansatz having a gate-count scaling of \(O(pn)\), where \(p\) is the number of layers and \(n\) is the number of qubits. The full ansatz has a gate-count scaling of \(O(pn^{2})\). 
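As an illustration of the QNN ansatz structure and readout described in this section, a condensed PennyLane sketch with a periodic-style CNOT layout is given below; the layer layout, qubit count, class-to-qubit assignment and softmax readout (anticipating Eq. (10) below) are illustrative assumptions rather than the exact training code used in this work.

```python
import numpy as np
import pennylane as qml

n_qubits, n_layers, n_classes = 10, 3, 10
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(image, weights):
    # Amplitude-encode the flattened, padded image onto 10 qubits (2**10 >= 784).
    qml.AmplitudeEmbedding(image, wires=range(n_qubits), pad_with=0.0, normalize=True)
    for layer in range(n_layers):
        for q in range(n_qubits):
            qml.Rot(*weights[layer, q], wires=q)       # arbitrary one-qubit rotation
        for q in range(n_qubits):                      # periodic-style CNOT layout
            qml.CNOT(wires=[q, (q + 1) % n_qubits])
    return [qml.expval(qml.PauliZ(q)) for q in range(n_classes)]

def class_probabilities(image, weights):
    """Softmax over the per-class Z expectation values, as in Eq. (10)."""
    z = np.array(qnn(image, weights))
    e = np.exp(z - z.max())
    return e / e.sum()

weights = np.random.default_rng(0).uniform(-np.pi, np.pi, (n_layers, n_qubits, 3))
probs = class_probabilities(np.random.default_rng(1).random(784), weights)
```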
The precise number of CNOT gates per layer is detailed in FIG. 1. We will be analysing QNNs which are trained to solve image classification problems, a task that has garnered particular attention in the QML community over recent years[63, 64, 16]. We will be using the MNIST[65], FMNIST[66] and CIFAR[67] datasets. The MNIST and FMNIST datasets are \(28\times 28\) greyscale images with 10 labels (or classes). The images are amplitude encoded[68] onto 10 qubits. Additionally, a simple binary classification task over the 0 and 1 classes of MNIST is performed. This is undertaken so that we have a model that can achieve very high test accuracy, ideally 100%. We call this modified dataset BMNIST. The CIFAR dataset contains \(32\times 32\) RGB images with 10 classes which are amplitude encoded onto 12 qubits with the three colours encoded onto three separate channels. The task is reduced to a binary classification problem over the cars and boats classes in order to simplify training such that it can be performed on a classical simulator. 

Figure 2: The various graph classes considered in this work are: a) Complete graphs with random edge weights (not shown), b) 3-regular graphs with random edge assignment and unit edge weights, c) 4-regular graphs with random edge assignment and unit edge weights, d)-f) grids of d) two, e) three and f) four columns with random edge weights, these are distinct from other 3 and 4-regular graphs as they have a specific edge structure. g) Beginning with a two-row grid with random edge weights, evolve by randomly assigning new edges with random weights selected from a uniform distribution over \([0,1]\) until a complete graph with \(L\times 2\) nodes is formed. h) A single layer QAOA circuit for a four-node complete graph, with variational parameters \(\alpha_{1}\) and \(\beta_{1}\) as per Equation (6). The QAOA circuit may be executed using an MPS simulator. 

For all datasets, we utilise 32,000 training examples and 1,000 test examples. Classification is performed by assigning a qubit to each class and measuring the \(Z\) expectation values of each qubit. The qubit with the largest \(Z\) expectation value corresponds to the classification for the given input. In order to perform optimisation over this classification we assign a conditional probability that we find some class \(c\) given some input \(\mathbf{i}\) using softmax normalisation, \[p(c|\mathbf{i})=\frac{\exp(\,\langle\psi(\mathbf{i})|Z_{c}|\psi(\mathbf{i})\rangle)}{\sum_{j}\exp(\,\langle\psi(\mathbf{i})|Z_{j}|\psi(\mathbf{i})\rangle)}, \tag{10}\] which is fed into a cross entropy loss function which is minimised over. The ADAM optimiser[69] with a learning rate of 5e-3 is utilised for training which is undertaken for a single epoch with a batch size of 16. As per the QAOA simulations, truncation is performed after each layer, with normalisation of the state performed once at the end of the circuit. 

## III Results and Discussion 

In order to understand the role of entanglement in QAOA and QNNs, the entanglement entropy of each VQA is analysed throughout the execution of the algorithm with the untruncated ansatz. Additionally, an analysis of the fidelity of the simulation upon truncation is undertaken. These metrics provide an insight as to the role of entanglement in these algorithms and more specifically provide a benchmark that can be used to determine whether or not NISQ devices have sufficient capability to generate the entanglement required[70; 71]. 
### Quantum Approximate Optimisation Algorithm 

QAOA on complete, 3-regular and 4-regular graphs as defined in Section II.2 is evaluated for up to six QAOA layers (\(p\leq 6\)), with the number of nodes varying between 8 and 14. The fidelity of the restricted entanglement simulations and the entanglement entropy of the middle cut at the end of the circuit are shown in FIG. 3. It is found that for complete graphs, and for 3-regular graphs at \(p\leq 2\), the simulation fidelity is consistent with the \(\mathcal{F}(\ln(\chi)/N)\) scaling law established in Ref. [29]. As the number of QAOA layers increases, however, we find that for 3-regular graphs the simulation fidelity remains consistently high as the entanglement per qubit is decreased. It is observed that in this case the simulation fidelity no longer follows a consistent scaling law as the number of nodes in the graph is varied. The entanglement entropy of the final state (that is, at the end of the QAOA circuit) is also tracked for circuits of varying depth, where it is found that the entanglement entropy approaches a peak before starting to decrease for deeper circuits. Recognising that the solutions to combinatorial optimisation tasks such as MAX-CUT are inherently product states, this can be interpreted as an indication that the QAOA circuit is preparing a state that is close to the desired ground state. Counter-intuitively, this finding suggests that for such problems increasing the number of QAOA layers may result in a greater approximation ratio on devices incapable of creating highly entangled states. 

Figure 3: The simulation fidelity with restricted entanglement (top row), and entanglement entropy at the end of the circuit (bottom row) of complete, 3-regular and 4-regular graphs for up to six QAOA layers and up to 14 nodes. Note that after the peak entanglement entropy is reached for the regular graphs, the simulation fidelity with respect to the entanglement per qubit no longer follows a consistent scaling law as the number of graph nodes is varied. Additionally one can observe that the entanglement per qubit required to maintain high simulation fidelity decreases considerably. 

Regarding 4-regular graphs, it is found that the entanglement entropy peaks at a lower depth, two QAOA layers for all \(N\leq 14\), compared to three layers for \(N=10\) and \(N=14\) in the 3-regular case. Furthermore, the scaling law that holds for two layers in the 3-regular case does not appear to hold for 4-regular graphs. This appears to be tied to the fact that the entanglement entropy reaches a peak with fewer QAOA layers. At higher depth, a similar trend to the 3-regular case is observed, where the state becomes more resilient to truncation of entanglement, with the overall entanglement entropy remaining low. Overall, the 4-regular case is found not to be significantly different to the 3-regular case. In the case of complete graphs, we find that the scaling law remains consistent as the number of QAOA layers is increased. Note that unlike the regular graphs considered above, there does not appear to be a peak in the entanglement entropy. This is consistent with our observation that the scaling law only holds below a certain number of QAOA layers, where the entanglement entropy has not yet reached a maximum. 
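The middle-cut entanglement entropy reported in FIG. 3 can be computed from a state vector via a Schmidt decomposition across the central bipartition, along the following lines; the base-2 logarithm convention is an assumption on our part.

```python
import numpy as np

def middle_cut_entropy(psi, n_qubits):
    """Von Neumann entropy of the left half of an n-qubit state vector.

    The state is reshaped into a (2**(n/2), 2**(n/2)) matrix and its singular
    values give the Schmidt coefficients of the middle cut. A base-2 logarithm
    is used here (an assumed convention).
    """
    half = n_qubits // 2
    mat = psi.reshape(2**half, -1)
    schmidt = np.linalg.svd(mat, compute_uv=False)
    p = schmidt**2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# A Haar-random 10-qubit state has a middle-cut entropy close to the maximal 5 bits.
rng = np.random.default_rng(2)
psi = rng.normal(size=2**10) + 1j * rng.normal(size=2**10)
psi /= np.linalg.norm(psi)
print(middle_cut_entropy(psi, 10))   # typically around 4.3 bits
```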
Given the relatively lower approximation ratio for these states, as per Appendix C, the monotonic increase in entanglement entropy is likely an indication that such graphs are more difficult for QAOA to solve, rather than necessarily an indicator that QAOA circuits for complete graphs exhibit inherently different characteristics. This is especially the case given that the true ground state is a product state: there must eventually be a peak in the entanglement entropy, since in the limit that the number of QAOA layers approaches infinity the QAOA circuit can determine the ground state energy exactly[33]. Despite these observations, it is still not clear why the observed scaling law holds and how it is specifically related to the entanglement of the state. A more analytical study would need to be conducted to determine the theoretical basis for the behaviour of QAOA under these conditions. To investigate whether imposing structure on the graphs has an effect on the simulation fidelity of QAOA, grid graphs are simulated using QAOA, with results presented in FIG. 4. Results are split by the number of columns in the grid graph. For the two-column case, our results closely resemble those of the 3-regular graphs, noting that two-column (or equivalently two-row) grids are 3-regular. The primary distinction found is that the entanglement entropy remains low compared to similarly sized 3-regular graphs. We hypothesise that this is because the edge layout in grid graphs is such that nodes situated far away from each other are in general less likely to be connected via an edge, given a node layout as per FIG. 4, which is the case for the simulations performed. This is compared to the random edges of random 3-regular graphs. Increasing the size to three- and four-column grid graphs, which are now 4-regular, the results again appear similar to their random regular graph counterparts. The primary distinction is, as above, a decrease in entanglement entropy resulting from a node layout where nearby nodes are more likely to have edges compared to distant nodes. These grid graph results highlight the importance of the mapping of nodes to qubits, recognising in particular that by arranging nodes in such a way that their edges are preferentially between nodes that are nearly adjacent, one can achieve a far lower entanglement entropy. This is of course not always possible. Continuing to consider grids whose nodes are labelled counting from left to right across each row, then from top to bottom down the rows, it is found that the exchange between rows and columns, in particular in the \(3\times 4\) case compared to the \(4\times 3\) case, appears to result in a significant difference in entanglement entropy along the middle cut. This is an indication that the entanglement is not evenly distributed for grid graphs, and hence node ordering may affect the optimisation step of QAOA, in particular in cases where entanglement is limited. 

Figure 4: The simulation fidelity at two QAOA layers (top row), and the entanglement entropy at the end of the circuit (bottom row) for grid graphs with two (left column), three (middle column) and four columns (right column) respectively. It is noted that despite the similarity to 4-regular graphs, grids of three and four columns appear to behave qualitatively differently from those of random 4-regular graphs. The simulation fidelity at four and six QAOA layers is presented in Appendix D. 
Ultimately however for a given circuit with optimised parameters, this appears to only have a minimal effect on the simulation fidelity upon truncation. In addition to graph topology, graph density is another key characteristic of graphs that has an effect on the performance of QAOA circuits, with denser graphs resulting in circuits with a greater number of two-qubit gates and an overall greater depth. In order to investigate the effect of graph density, and specifically the transition from low-density graphs with structured edge layouts to high-density graphs, the procedure described in Section II.2 is followed to build a spectrum of graphs with varying density. The simulation fidelity for various graphs with approximately 40% density, 60% density and 80% density at two QAOA layers is reported in FIG. 5. It is found that the simulation fidelity decreases with the overall graph density, a result that appears consistent with that for the random regular and complete graphs. Likewise, there does not appear to be a universal scaling law for the simulation fidelity of the low-density graphs, with the scaling law only becoming more apparent as the density increases and one approaches a complete graph. Note in particular that the QAOA circuits are not trained using truncated MPS. This is because any further optimisation at a given maximal entanglement per qubit would simply exacerbate the observations of high simulation fidelity at lower entanglements per qubit which fundamentally does not alter the principal result of this work. Furthermore, it has been found that for one QAOA layer, the parameters corresponding to the minimum of an exact QAOA simulation are unchanged upon truncation [29]. ### Quantum Machine Learning The QNNs are trained on all datasets introduced in Section II.3 using the 5 different ansatze on a full state-vector simulator for up to 100 layers. The test accuracy and entanglement entropy of the final state for the various-sized QNNs trained on the MNIST dataset are shown in FIG. 6. Similar to QAOA it is found that the entanglement entropy at the end of the circuit increases as the number of QNN layers increases until a threshold entropy is reached, \(<20\) layers for all but the alternating ansatz, after which the entanglement entropy appears to slightly decline with the test accuracy continuing to increase. We consider the structure of the alternating ansatz in Appendix E and conclude that the Hilbert Space in which the state, prepared using one layer of the ansatz, lives is greatly reduced compared to the other ansatze considered in this work. As a result, it takes many more layers for the alternating ansatz to be able to produce near-maximally entangled states. We note that a perfectly classified input would result in the final state of the circuit being a product state corresponding to the classification. This state would be of low entanglement. As a result, one can conclude that increasing the number of layers of the QNN results in a classification that is reproducible given the same input. This outcome is highly desirable. However, it is important to recognise that for systems where the number of qubits required for the input is equal to that required for the classification such a perfect outcome is not possible. This is because such an outcome would break the unitarity of the circuit, with in general many inputs leading to the same output. 
It is recognised that this is the case for the MNIST and FMNIST datasets, which require 10 qubits to be amplitude encoded and, under a one-hot encoding scheme[72] for classes, will also require all of the same 10 qubits for classification. Possible workarounds, if such a perfect classification is needed, are to separate the encoding and classification qubits or otherwise utilise another class encoding strategy that requires fewer qubits than is required for embedding the input. 

Figure 5: The simulation fidelity of graphs with density a) 40%, b) 60% and c) 80% at two QAOA layers. It is apparent that for less dense graphs the resilience towards truncation is greater than that for graphs of higher density. Also, note that the scaling of the fidelity becomes more regular as the density of the graph increases and the edges become less structured. 

FIG. 6 reveals a decrease in the entanglement entropy at the end of the circuit as the number of QNN layers is increased past a certain threshold. In order to hypothesise whether the QNNs will be resilient towards truncation, one must also consider the entanglement entropy throughout the evolution of the state. As such, the entanglement entropy throughout the execution of various-sized QNNs is tracked for the MNIST and binary MNIST datasets. The entropy at the end of each QNN layer for the periodic ansatz is shown in FIG. 7, with the remaining ansatze shown in Appendix F. It is observed that the decline in entanglement entropy occurs sharply at the end of the circuit and is not a gradual decrease, as in the analysis above. Hence, despite a decrease in the entanglement entropy of the final state, it is likely not the case that deeper circuits result in a greater resilience towards truncation similar to that which was observed for QAOA on regular graphs. This observation would also suggest that training a QNN layer by layer is unlikely to be successful, as the final state of some QNN with \(a\) layers appears to be substantially different to a QNN with \(b>a\) layers at layer \(a\). This is in contrast to QAOA, where it is possible to see improvement in the cost by training a system layer by layer up until \(N\) total layers, where \(N\) is the number of qubits[73]. To test this hypothesis, MPS simulations with truncation are run across the entire test set, with the test accuracy for each entanglement per qubit shown in FIG. 8. For the monochromatic \(28\times 28\) pixel MNIST and FMNIST datasets, it is found that all but the alternating ansatz appear to not be resilient to any truncation at 20, 60 or 100 QNN layers. This is in agreement with the observation made above that resilience towards truncation is unlikely even when the final state is of relatively lower entanglement. Counter-intuitively, this means that provided a trained QNN and a quantum device or simulator incapable of achieving high entanglement, it may be more beneficial to use the alternating ansatz at low depth, as it is capable of achieving higher test accuracies with less entanglement compared to the other ansatze. As the number of QNN layers is increased, however, this advantage disappears, with all ansatze not showing any resilience towards truncation. Additionally, factoring in the gate requirements of each ansatz as per TABLE I, it is noted that despite the vastly greater number of gates the full circuit ansatz does not appear to perform substantially better than the ansatze which only have a CNOT gate count which scales linearly with the number of qubits per layer. 

Figure 6: The test accuracy (blue dots) and average entanglement entropy (red crosses) at the end of the circuit for the circuits as per FIG. 1. Each circuit is a 10 qubit QNN with varying layers trained on the MNIST dataset. Note a rapid increase in test accuracy as the QNNs approach the maximum Haar entanglement entropy at \(\approx\) 4.5 (red dashed line)[31]. As the number of QNN layers increases beyond what is needed to reach the maximum entanglement entropy, a decrease in the entropy as the test accuracy steadily increases is observed. Note that the alternating ansatz in e) takes longer to reach the maximum entanglement entropy and performs worse than all other ansätze, which perform comparably. 

Figure 7: The evolution of entanglement throughout the circuit for 20, 60 and 100 layer periodic ansatz QNNs trained on a) the binary MNIST dataset and b) the 10 class MNIST dataset. Test accuracies for a) are 0.998, 0.998 and 0.999 respectively and 0.728, 0.774 and 0.807 for b). Note that the entanglement entropy drops off towards the end of the circuit, with larger circuits showing a more significant drop in entanglement entropy towards the end of the circuit. The results for all other ansätze are provided in Appendix F. 

For the simple binary MNIST dataset, it is found that the alternating and single control ansatze show a significant resilience towards truncation at 20 QNN layers, and to a lesser extent at 60 QNN layers. All other ansatze show some resilience at 20 layers and 60 layers, but in general appear to have a lower test accuracy for a given entanglement per qubit compared to the single control and alternating circuits. It is noted however that the entanglement entropy in FIG. 7(a) continues to decrease for QNNs with a greater number of layers. This is not inconsistent with the findings here of high test accuracy at low depth, noting in particular that classification is performed by taking the largest \(Z\) expectation value taken over many shots. Hence very high test accuracy does not necessarily correspond to a very high probability of measuring the output corresponding to the correct classification. Despite this, as the primary goal of QNNs is accurate classification, it remains beneficial to use circuits of low depth that are resilient towards truncation and are able to achieve high classification accuracy. For simple datasets such as binary MNIST, this can be achieved with \(<20\) layers with the alternating ansatz. Analysing the more complex \(32\times 32\) RGB CIFAR dataset, it is found that the resilience towards truncation as circuit depth increases follows a similar pattern to the greyscale datasets. Interestingly, however, it is found that for the 20 layer QNN, the alternating ansatz maintains a complete resilience towards truncation. We believe this is a result of the imbalanced nature of the CIFAR dataset, with some classes being represented more frequently than others. As a result, one may simply guess the more frequent class for all inputs and get a test accuracy \(>0.5\). It is apparent that the alternating ansatz is capable of encoding this simple classification model at low depth in such a way that it is completely resilient to truncation. As the depth increases, however, a more sophisticated classification is performed and this resilience is lost. At this high depth there also appears to be a slight decrease in the overall test accuracy for all but the alternating circuit. 
It is possible this is a result of entanglement-induced barren plateaus[74; 75], however, one would need to determine whether the entanglement entropy of the ansatze satisfies a volume law for this to be the case. Additionally, at high depth, there is a minor increase in the test accuracy as we perform moderate truncation for the alternating ansatz. Further investigation is required to determine the underlying mechanism here, but we hypothesise that the act of truncation, which is inherently a non-unitary evolution of the state, is such that the system has access to a part of the Hilbert space that was not available to it with just the standard unitary operations of the circuit ansatz. A similar phenomenon is observed for QAOA on grid graphs at higher depths as discussed in in Appendix D. Additionally, an overall decrease in the test accuracy across all ansatze for high-depth circuits is observed, which is likely due to entanglement-induced barren plateaus. Like QAOA, the QNNs are not trained using truncated MPS simulations. However, training the QNNs with a truncated MPS simulation may result in a marked improvement in the overall test accuracy. This hypothesis follows from Ref. [76] where the authors found that training QNNs with noisy inputs results in a notable improvement in test accuracy when tested on similarly noisy inputs. This is compared to QNNs that are trained using exact inputs but tested with noisy inputs where they found that the test accuracy was reduced. As the truncated MPS simulations are significantly slower than training using pennylane this hypothesis was not tested as it was not feasible to perform adequate training with the computational resources available to us. ### Summary of comparison between QAOA and QNN Our work has revealed that the entanglement of VQAs is highly dependent on the structure of the problem. In general, we find that for combinatorial optimisation problems solved using QAOA, high-depth circuits do not necessarily result in higher entanglement in circuits, with many problem instances showing some resilience towards truncation. This is compared to image classification tasks solved using QNNs, where we find that those circuits are much more entangled at high depth, and are far less resilient to any truncation of entanglement. Our results suggest that compared to QNN, generally QAOA is more likely to be within the capability of NISQ devices due to lower demand on the entanglement in the circuits. However, for some simple datasets such as binary MNIST, QNN models using alternating ansatz may also be within the reach of near-term devices due to relatively low-depth circuits requiring low entanglement. For sufficiently complex datasets, however, it is found that higher-depth and highly entangled QNNs are required to maintain high test accuracies. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(p\) & Circuit & Accuracy & \(\#\)Gates & \(S\) \\ \hline 20 & Full & 0.731 & 1,100 & 2.254 \\ & Linear & 0.724 & 400 & 4.242 \\ & Periodic & 0.728 & 420 & 4.256 \\ & Single Control & 0.738 & 400 & 4.251 \\ & Alternating & 0.583 & 400 & 2.673 \\ \hline 60 & Full & 0.778 & 3,300 & 4.182 \\ & Linear & 0.749 & 1,200 & 4.182 \\ & Periodic & 0.774 & 1,260 & 4.165 \\ & Single Control & 0.751 & 1,200 & 4.222 \\ & Alternating & 0.760 & 1,200 & 4.209 \\ \hline 100 & Full & 0.791 & 5,500 & 4.086 \\ & Linear & 0.785 & 2,000 & 4.089 \\ & Periodic & 0.807 & 2,100 & 4.077 \\ & Single Control & 0.786 & 2,000 & 4.140 \\ & Alternating & 0.777 & 2,000 & 4.173 \\ \hline \end{tabular} \end{table} Table 1: The test accuracy, number of gates and entanglement entropy, \(S\), of the different QNN circuit ansätze trained on the MNIST dataset at \(p=20\), 60 and 100 layers. Despite requiring substantially more gates, the fully connected ansatz does not result in any substantial improvement in the test accuracy, nor is its entanglement entropy substantially different to that of the other ansätze, except the alternating ansatz. 

Figure 8: Test accuracy for various datasets under limited entanglement simulations. Note that deeper QNNs are more able to utilise entanglement in a useful manner. We recognise that this is not necessarily a result of shallower circuits being less entangled, as shallower circuits are found to have as high an entanglement entropy for QNNs above 20 layers. In general, for the MNIST binary classification task, there appears to be no reason that one needs to extend beyond a shallow, low-entanglement circuit. For the binary CIFAR classification task it is observed that there is a notable decrease in the test accuracy for deeper QNNs; this is possibly a result of entanglement-induced barren plateaus. 

## IV Conclusion and discussion 

Variational quantum algorithms show significant promise in the NISQ era of quantum computing as a result of their ability to perform classically difficult tasks using relatively few qubits and circuits of shallow depth. In this work, we find that two of the leading VQAs, QAOA and QNNs, show a varying degree of resilience towards the truncation of entanglement. Importantly though, we find that the resilience of any given circuit is highly dependent on the depth of the circuit and the variational ansatz, recognising that different problems have inherently different requirements regarding circuit depth and structure. As a result, a previously observed scaling law[29] does not necessarily hold when considering specific problem instances. For QAOA, we relate the simulation fidelity to the entanglement entropy of the final states produced, finding that monotonic scaling laws with respect to the size of the system for each type of graph being solved are more likely to hold in the regime where the entanglement entropy has not yet reached a peak. We also find that structured regular graphs, such as the grid graph, appear to exhibit scaling relations that are inherently distinct from random regular graphs, ultimately finding that the scaling relation approaches that of the complete graph upon increasing the density when interpolating from an initial grid graph. This has implications for problems such as the vehicle routing problem, which map onto low-density graphs of a given structure. It still remains an open question as to the underlying mechanics behind the scaling relations observed. 
However, one can relate the eventual increase in resilience to truncation to the simple nature of the final solution state of combinatorial optimisation problems, which is a product state. We recognise that any of the combinatorial optimisation problems considered can be solved exactly with a trivial product ansatz with \(3N\) parameters, where \(N\) is the number of qubits, but it remains an open question how efficient such an optimisation is compared to a QAOA ansatz with fewer variational parameters. For QNNs, we find that all but the simplest datasets require highly entangled ansätze in order to achieve high test accuracy, with high-depth circuits exhibiting higher test accuracies and low resilience towards truncation of entanglement. We find that, unlike for QAOA, resilience towards truncation decreases for higher-depth models even though their entanglement entropy decreases. For more complicated datasets such as the RGB CIFAR dataset, we sometimes observe a slight increase in test accuracy upon truncation, which is similar to an observation made for QAOA on certain problems at high depths. It is not clear what the underlying cause of this is, and as such it remains an open question. Given the relatively low test accuracies at which this phenomenon occurred, however, it is not clear whether it would continue to hold for models of interest that achieve a high test accuracy. Overall, our results, based on a systematic set of simulations, have provided important new insights into the role of entanglement in the performance of two widely used variational quantum algorithms. In the current NISQ era of quantum computing, where noise and errors in quantum devices limit the entanglement that can be generated in quantum circuits, our work will enable implementations with optimal accuracy within the constraint of affordable entanglement.

## Acknowledgements

The authors acknowledge useful discussions with Charles D. Hill in the early stages of the project, and with Maxwell T. West on the QNN part of the work. ACN acknowledges the support of the Australian Government Research Training Program Scholarship. The authors acknowledge the support of the IBM Quantum Hub at the University of Melbourne. The computational resources were provided by the National Computational Infrastructure (NCI) and the Pawsey Supercomputing Centre through the National Computational Merit Allocation Scheme (NCMAS).

## Code availability

The code used to generate all results in this paper is available upon reasonable request.
2303.06107
The connection between stellar mass, age and quenching timescale in massive quiescent galaxies at $z \simeq 1$
We present a spectro-photometric study of a mass-complete sample of quiescent galaxies at $1.0 < z < 1.3$ with $\mathrm{log_{10}}(M_{\star}/\mathrm{M_{\odot}}) \geq 10.3$ drawn from the VANDELS survey, exploring the relationship between stellar mass, age and star-formation history. Within our sample of 114 galaxies, we derive a stellar-mass vs stellar-age relation with a slope of $1.20^{+0.28}_{-0.27}$ Gyr per decade in stellar mass. When combined with recent literature results, we find evidence that the slope of this relation remains consistent over the redshift interval $0<z<4$. The galaxies within the VANDELS quiescent display a wide range of star-formation histories, with a mean star-formation timescale of $1.5\pm{0.1}$ Gyr and a mean quenching timescale of $1.4\pm{0.1}$ Gyr. We also find a large scatter in the quenching timescales of the VANDELS quiescent galaxies, in agreement with previous evidence that galaxies at $z \sim 1$ cease star formation via multiple mechanisms. We then focus on the oldest galaxies in our sample, finding that the number density of galaxies that quenched before $z = 3$ with stellar masses $\mathrm{log_{10}}(M_{\star}/\mathrm{M_{\odot}}) \geq 10.6$ is $ 1.12_{-0.72}^{+1.47} \times 10^{-5} \ \mathrm{Mpc}^{-3}$. Although uncertain, this estimate is in good agreement with the latest observational results at $3<z<4$, tentatively suggesting that neither rejuvenation nor merger events are playing a major role in the evolution of the oldest massive quiescent galaxies within the redshift interval $1<z<3$.
M. L. Hamadouche, A. C. Carnall, R. J. McLure, J. S. Dunlop, R. Begley, F. Cullen, D. J. McLeod, C. T. Donnan, T. M. Stanton
2023-03-10T17:46:59Z
http://arxiv.org/abs/2303.06107v1
The connection between stellar mass, age and quenching timescale in massive quiescent galaxies at \(z\simeq 1\)

###### Abstract

We present a spectro-photometric study of a mass-complete sample of quiescent galaxies at \(1.0<z<1.3\) with \(\log_{10}(M_{\star}/\mathrm{M}_{\odot})\geq 10.3\) drawn from the VANDELS survey, exploring the relationship between stellar mass, age and star-formation history. Within our sample of 114 galaxies, we derive a stellar-mass vs stellar-age relation with a slope of \(1.20^{+0.28}_{-0.27}\) Gyr per decade in stellar mass. When combined with recent literature results, we find evidence that the slope of this relation remains consistent over the redshift interval \(0<z<4\). The galaxies within the VANDELS quiescent sample display a wide range of star-formation histories, with a mean star-formation timescale of \(1.5\pm 0.1\) Gyr and a mean quenching timescale of \(1.4\pm 0.1\) Gyr. We also find a large scatter in the quenching timescales of the VANDELS quiescent galaxies, in agreement with previous evidence that galaxies at \(z\sim 1\) cease star formation via multiple mechanisms. We then focus on the oldest galaxies in our sample, finding that the number density of galaxies that quenched before \(z=3\) with stellar masses \(\log_{10}(M_{\star}/\mathrm{M}_{\odot})\geq 10.6\) is \(1.12^{+1.47}_{-0.72}\times 10^{-5}\) Mpc\({}^{-3}\). Although uncertain, this estimate is in good agreement with the latest observational results at \(3<z<4\), tentatively suggesting that neither rejuvenation nor merger events are playing a major role in the evolution of the oldest massive quiescent galaxies within the redshift interval \(1<z<3\).

keywords: galaxies: evolution - galaxies: star formation - galaxies: high-redshift

## 1 Introduction

It is now well established that the local galaxy population is bi-modal in terms of colour, morphology and star-formation rate (SFR). The colour bi-modality was first observed using data from the Sloan Digital Sky Survey (SDSS, York et al., 2000), with galaxies falling into two categories: a star-forming 'blue cloud' and a quiescent 'red sequence' (e.g. Strateva et al., 2001; Baldry et al., 2004). In general, more-massive galaxies tend to be red spheroids with little ongoing star formation, whilst less-massive galaxies are mainly blue, star-forming discs. Over the past few decades, many studies have aimed to quantify the mechanisms responsible for producing the bi-modality in the galaxy population (e.g. Dekel and Birnboim, 2006; Peng et al., 2010; Gabor and Dave, 2012; Schawinski et al., 2014). However, despite the wealth of ground-based and space-based data available, understanding exactly how the shutting down of star formation relates to this distinction between galaxy types remains hugely challenging. Observations have shown that, even within the quiescent population, galaxies demonstrate a range of characteristics. For example, more-massive galaxies are known to have formed earlier in cosmic time and much more rapidly, with clear evidence for younger stellar populations in less-massive galaxies compared to their more-massive counterparts (e.g. Cowie et al., 1996; Thomas et al., 2005; Fontana et al., 2006; Fontanot et al., 2009; Pacifici et al., 2016). This phenomenon, referred to as 'downsizing', is commonly used to describe the relationship between quiescent galaxy stellar mass and age, and indicates that quenching varies as a function of stellar mass and redshift.
Empirical spectral age indicators have provided strong constraints on this downsizing trend, with features such as the D\({}_{\mathrm{n}}\)4000 index demonstrating positive correlations with stellar mass at redshift, \(z\lesssim 1\)(e.g. Bruzual, 1983; Balogh et al., 1999; Kauffmann et al., 2003; Brinchmann et al., 2004; Moresco et al., 2011, 2010, 2016). In addition to the bi-modality of the galaxy population, one of the most important observational results of the past few decades is the differing evolution of the star-forming and quiescent galaxy stellar mass functions (GSMF) across cosmic time, with the number density of quiescent galaxies apparently increasing by almost an order of magnitude since \(z\simeq 2\)(e.g. Cimatti et al., 2002; Abraham et al., 2004; Baldry et al., 2012; Muzzin et al., 2013; Davidzon et al., 2017; McLeod et al., 2021). However, recent studies also point to a substantial population of massive quiescent galaxies out to \(z>3\)(e.g. Schreiber et al., 2018; Valentino et al., 2020; Carnall et al. 2020, 2022a). Together, these results allow us to quantify the quiescent galaxy fraction across cosmic time, an important observational constraint on galaxy evolution models (e.g., Somerville & Dave, 2015). Another key result was the discovery that the sizes of quiescent galaxies have evolved much more rapidly than their star-forming counterparts since \(z\sim 2\), and that quiescent galaxies follow a steeper stellar mass-size relation than star-forming galaxies at all redshifts (e.g. Shen et al., 2003; Trujillo et al., 2006; McLure et al., 2013; van der Wel et al., 2014; Mowla et al., 2019). The physical processes driving the size growth of quiescent galaxies are still not fully understood, although it is widely accepted that minor mergers play an important role in explaining the observed growth from \(z\sim 2\) to the local Universe (e.g. see Hopkins et al., 2010; Trujillo et al., 2011; Cimatti et al., 2012; Ownsworth et al., 2014). Considerable effort has been devoted to understanding which physical mechanisms are required to explain the observed differences in the properties of the star-forming and quiescent galaxy populations. Our understanding of quenching mechanisms relies heavily on simulations of galaxy formation. At \(z<2\), simulations have been able to reproduce the observed bi-modality (see Dave et al., 2017, 2019; Nelson et al., 2018; Akins et al., 2022). However, the situation becomes more complicated at higher redshifts, and it is much more difficult to identify the key physical drivers of quenching. The main mechanisms thought to cause quenching can be categorised into two distinct pathways:'mass' (or 'internal', see Somerville & Dave, 2015) quenching, and 'environmental' quenching. Locally, these two pathways are clearly distinguishable, suggesting that multiple mechanisms quench galaxies (e.g., Peng et al., 2010). Mass quenching is often thought to be associated with feedback processes such as radiative- or jet-mode active galactic nucleus (AGN) feedback (e.g. Croton et al., 2006; Gabor et al., 2011; Choi et al., 2018). Quenching attributed to galaxy-galaxy interactions (often referred to as 'environmental' or'satellite' quenching) is thought to be the result of ram-pressure stripping, caused by satellite galaxies falling into larger dark matter halos, or virial shock-heating of the circum-galactic medium (see Dekel & Birnboim, 2006, also referred to as 'halo' quenching). 
These mechanisms can be further categorised as'slow' and 'fast' quenching pathways, respectively (Schawinski et al., 2014; Schreiber et al., 2016; Carnall et al., 2018; Belli et al., 2019). Shorter quenching timescales are thought to be linked with quasar-mode AGN feedback, which is thought to be more prevalent at high redshift (Wild et al., 2016). In contrast, it appears that the key process responsible for quenching at low redshift is the halting of gas accretion, taking place on much longer timescales of several Gyr (e.g. Peng et al., 2015; Trussler et al., 2020). Large spectroscopic surveys have facilitated increasingly sophisticated, statistical studies of galaxy physical properties at high redshift, with the aim of placing tighter constraints on the physical origins of quenching. The recently completed LEGA-C (van der Wel et al., 2016) and VANDELS (McLure et al., 2018) surveys provide ultra-deep spectroscopy for hundreds of quiescent galaxies at \(0.6<z<2.5\). These data sets, coupled with improved spectral energy distribution (SED) fitting methods (e.g. Carnall et al., 2019; Leja et al., 2019), have already enabled more-precise measurements of galaxy stellar masses, star-formation histories (SFHs) and stellar metallicities, unveiling significant correlations between these physical properties (e.g. Wu et al., 2018, 2021; Beverage et al., 2021; Carnall et al., 2022b). A key emerging result is the finding that the observed stellar mass vs stellar age relationship for \(z\sim 1\) quiescent galaxies is steeper than is predicted by the most recent generation of cosmological simulations (e.g. Carnall et al., 2019; Tacchella et al., 2022). These new spectroscopic analyses build upon a corpus of earlier work aiming to quantify these relationships, much of which was founded upon the use of elemental abundances as empirical proxies for formation and quenching timescales (e.g. Thomas et al., 2005; Conroy et al., 2014; Kriek et al., 2019). Despite these advances in the field, continued, in-depth investigation into the physical properties of quiescent galaxies is still needed to build a thorough understanding of quenching and passive galaxy evolution. In Hamadouche et al. (2022), we investigated the links between stellar mass, age, size and metallicity using quiescent galaxy samples from the LEGA-C (van der Wel et al., 2016) and VANDELS (McLure et al., 2018) spectroscopic surveys at \(z\simeq 0.7\) and \(z\simeq 1.1\), respectively. We examined stellar mass-age trends using the D\({}_{\rm x}\)4000 index as a proxy for the stellar population age, finding that more-massive galaxies exhibit higher D\({}_{\rm n}\)4000 values at both redshift ranges, consistent with prior evidence for the downsizing scenario at lower redshifts. In this work, we return to the VANDELS spectroscopic sample, building upon our previous results by employing full spectral fitting to probe the ages and SFHs of massive quiescent galaxies at \(z\gtrsim 1\) in detail. This study makes use of the fully completed VANDELS DR4 sample (Garilli et al., 2021), which includes more than twice the number of quiescent galaxy spectra studied in the initial analysis of Carnall et al. (2019). Moreover, in this study we implement an improved physical model, along with additional metallicity constraints for the VANDELS sample from Carnall et al. (2022b), to better constrain star-formation histories, stellar masses and formation and quenching times. 
Motivated by the ongoing challenges in quantifying the correlations between key quiescent galaxy physical properties, we begin by examining the relationship between stellar mass and age in our quiescent sample at \(1.0<z<1.3\), and discuss these results in the context of downsizing. The structure of this paper is as follows. We introduce the VANDELS survey in Section 2, before providing details of our sample selection and spectral fitting technique using the Bagpipes code (Carnall et al., 2018) in Section 3. We present our main results in Section 4 and discuss them in Section 5. Finally, we present our conclusions in Section 6. Throughout this paper, we assume a Kroupa (2001) initial mass function and the Asplund et al. (2009) Solar abundance of Z\({}_{\odot}=0.0142\). We assume cosmological parameters \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.3\) and \(\Omega_{\Lambda}=0.7\) throughout. All magnitudes are quoted in the AB system. ## 2 The VANDELS survey VANDELS is a large ESO Public Spectroscopy Survey (McLure et al., 2018; Pentericci et al., 2018; Garilli et al., 2021) targeting the CDFS and UDS fields, and covering a total area of 0.2 deg\({}^{2}\). The survey data were obtained using the Visible Multi-Object Spectrograph (VIMOS, Le Fevre et al., 2004) on the ESO VLT. The final data release (DR4; Garilli et al., 2021) provides spectra for a sample of 2087 galaxies, the vast majority of which (87 per cent) are star-forming galaxies in the redshift range \(2.4<z<6.2\). However, in this study, we focus on the remaining 281 targets (13 per cent) selected as quiescent galaxies in the redshift range \(1.0<z<2.5\). ### VANDELS sample selection The VANDELS spectroscopic sample was originally drawn from a combination of four separate photometric catalogues. Two of these are the CANDELS GOODS South and UDS catalogues (Guo et al., 2013; Galametz et al., 2013), whilst the other two are custom ground-based catalogues (described in McLure et al., 2018), covering the wider VANDELS area outside of the CANDELS footprints. The parent quiescent sample was selected from these photometric catalogues as follows. Objects were required to have \(H\)-band magnitudes of \(H\leq 22.5\), corresponding to stellar masses of \(\log_{10}(M_{*}/\mathrm{M}_{\odot})\gtrsim 10\) over the redshift range of \(1.0\leq z_{\mathrm{spec}}\leq 1.3\) we focus on in this work (approximately 98 per cent of the full VANDELS quiescent sample has \(z_{\mathrm{spec}}<1.5\)), as well as \(i\)-band magnitudes of \(i\leq 25\). To separate star-forming and quiescent galaxies, rest-frame \(UVJ\) criteria were applied following Williams et al. (2009). These criteria result in a sample of 812 galaxies, which we refer to as the VANDELS photometric parent sample. ### VANDELS spectroscopy Here we briefly summarise the VANDELS spectroscopic observations, while referring the reader to Pentericci et al. (2018) for a full description. From the parent sample of 812 quiescent galaxies described in the previous section, 281 were randomly assigned slits and observed as part of the VANDELS survey. Objects were observed for 20, 40 or 80 hours depending on their \(i\)-band magnitudes. The observations were obtained using the MR grism, providing a median resolution of \(R\sim 600\) across a wavelength range from \(\lambda=4800-9800\) A. The VANDELS team manually measured spectroscopic redshifts, assigning redshift quality flags according to Le Fevre et al. (2013). 
In this paper we only use those galaxies with spectroscopic redshift flag 3 or 4, which has subsequently been shown to correspond to a \(\simeq 99\) per cent probability of being correct (Garilli et al., 2021).

## 3 Methodology and sample selection

The VANDELS observations described in Section 2 produce an initial sample of 269 quiescent galaxies with robust spectroscopic redshifts, of which 87 per cent have \(1<z_{\mathrm{spec}}<1.5\). In this section, we describe the selection of the final quiescent sample that we use for our analysis.

### Spectro-photometric fitting

We use Bagpipes (Carnall et al., 2018) to simultaneously fit the available spectroscopic and photometric data for our initial sample of 269 quiescent galaxies. We incorporate several improvements to the model used to fit the VANDELS photometric catalogues in Hamadouche et al. (2022) (based on Carnall et al., 2019), which we briefly describe below. We use a double-power-law star-formation history model, employing the updated 2016 versions of the BC03 stellar population synthesis models (Bruzual and Charlot, 2003; Chevallard and Charlot, 2016). We also vary the stellar metallicity from \(Z_{*}=0.2-2.5Z_{\odot}\) using a logarithmic prior. We use the Salim et al. (2018) dust attenuation law, which parameterises the dust-curve shape through a power-law deviation, \(\delta\), from the Calzetti et al. (2000) law. Nebular continuum and emission lines are modelled using the Cloudy photoionization code (Ferland et al., 2017), using a method based on that of Byler et al. (2017). We assume a fixed ionization parameter of \(\log_{10}(U)=-3\). Full details of the free parameters and priors used in our fitting are provided in Table 1. We take into account systematic uncertainties in the observed spectra of our galaxies by applying additive noise and multiplicative calibration models (e.g., van der Wel et al., 2016; Cappellari, 2017; Johnson et al., 2021). We follow the approach outlined in Section 4 of Carnall et al. (2019), fitting a second-order multiplicative Chebyshev polynomial to account for problems with flux calibration, and an additive Gaussian-process model with an exponential-squared kernel to model correlated additive noise between spectral pixels in our data.

### A mass-complete sample

To ensure that our final sample is mass complete, we restrict the sample to \(\log_{10}(M_{*}/\mathrm{M}_{\odot})\geq 10.3\) and \(1.0\leq z_{\mathrm{spec}}\leq 1.3\) (see Carnall et al., 2019). In addition, we require members of the sample to have \(U-V>0.88\times(V-J)+0.69\), in order to remove green-valley galaxies. This has been shown to be broadly equivalent to a specific SFR cut of sSFR \(<0.2/t_{\mathrm{H}}\), where \(t_{\mathrm{H}}\) is the age of the Universe at the relevant redshift (Carnall et al., 2018). These criteria produce a sample of 139 quiescent galaxies. To clean the quiescent sample of potential X-ray contaminants, we remove five objects with matches in either the _Chandra_ Seven Megasecond catalogue (Luo et al., 2017) or the X-UDS catalogue (Kocevski et al., 2018), which cover the CDFS and UDS fields, respectively. All five galaxies with X-ray matches also display strong [O ii] emission in their rest-frame UV spectra. We also search for potential radio-loud AGN using the Very Large Array (VLA) 1.4 GHz data available for both fields (Simpson et al., 2006; Bonzini et al., 2013), finding one additional AGN candidate. This object was not removed from the quiescent sample because it does not display strong [O ii] emission.
Finally, we remove one galaxy whose spectrum is highly contaminated (due to a nearby object), leaving a final, cleaned sample of 114 quiescent VANDELS galaxies. This final sample is shown on the _UVJ_ plane in Fig. 1, colour-coded by mass-weighted age, \(\mathrm{D}_{\mathrm{a}}4000\) and stellar metallicity. ### Stacked spectra In the sections of our analysis where we make use of stacked spectra, we use the following standard procedure to produce our stacks. We first de-redshift and then re-sample each individual spectrum onto a uniform 2.5 A wavelength grid using the spectral re-sampling module SpectRes(Carnall, 2017). Prior to stacking, we normalise by the median flux across the wavelength range \(3500-3700\) A. The median flux across all spectra in each pixel is then calculated. Uncertainties in the stacked spectra are calculated using the standard error on the median. For stacked spectra where we wish to show correlations with D\({}_{n}\)4000, the spectrum is then normalised by the median flux in the blue continuum band of the D\({}_{n}\)4000 index, such that the median flux density in the red continuum band corresponds to the D\({}_{n}\)4000 index of the stacked spectrum. We calculate D\({}_{n}\)4000 using the same prescription outlined in Section 3.4 of Hamadouche et al. (2022). ### Size measurements We use the Galfit(Peng et al., 2002) size measurements from Hamadouche et al. (2022) for 110/114 galaxies in the final sample. For the remaining galaxies we adopted an identical procedure to Hamadouche et al. (2022), using _HST_ F160W images for the three galaxies within the CANDELS footprint and _HST_ ACS F850LP imaging in CDFS for the single galaxy lying outside the CANDELS footprint. ## 4 Results In this section, we present the results obtained from full-spectral fitting of our final quiescent galaxy sample. \begin{table} \begin{tabular}{l c c c c} \hline **Component** & **Parameter** & **Symbol / Unit** & **Range** & **Prior** & **Hyperparameters** \\ \hline Global & Redshift & \(z_{\rm spec}\) & \(z_{\rm spec}\pm\) 0.015 & Gaussian & \(\mu=z_{\rm spec}\)\(\sigma=0.005\) \\ \hline \multirow{4}{*}{SFH} & Stellar mass formed & \(M_{*}/{\rm M}_{\odot}\) & (1, 10\({}^{13}\)) & log & \\ & Metallicity & \(Z_{*}/{\rm Z}_{\odot}\) & (0.2, 2.5) & log & \\ & Falling slope & \(\alpha\) & (0.1, 10\({}^{3}\)) & log & \\ & Rising slope & \(\beta\) & (0.1, 10\({}^{3}\)) & log & \\ & Peak time & \(\tau/\) Gyr & (0.1, \(t_{\rm obs}\)) & uniform & \\ \hline \multirow{4}{*}{Dust} & Attenuation at 5500 Å & \(A_{\rm V}/{\rm mag}\) & (0, 4) & uniform & \\ & Deviation from Calzetti et al. (2000) slope & \(\delta\) & (\(-0.3,0.3\)) & Gaussian & \\ & Strength of 2175 Å bump & \(B\) & (0, 5) & uniform & \\ \hline \multirow{4}{*}{Calibration} & Zeroth order & \(P_{0}\) & (0.5, 1.5) & Gaussian & \(\mu=1.0\), \(\sigma=0.25\) \\ & First order & \(P_{1}\) & (\(-0.5,0.5\)) & Gaussian & \(\mu=0.0\), \(\sigma=0.25\) \\ & Second order & \(P_{2}\) & (\(-0.5,0.5\)) & Gaussian & \(\mu=0.0\), \(\sigma=0.25\) \\ \hline \multirow{4}{*}{Noise} & White-noise scaling & \(a\) & (0.1, 10) & log & \\ & Correlated noise amplitude & \(b/f_{\rm max}\) & (0.0001, 1) & log & \\ \cline{1-1} & Correlation length & \(l/\Delta\lambda\) & (0.01, 1) & log & \\ \hline \end{tabular} \end{table} Table 1: Details of the parameter ranges and priors adopted for the Bagpipes fitting of the VANDELS photometry and spectroscopy (see Section 3.1). Priors listed as logarithmic are uniform in log-base-ten of the parameter. 
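As a point of reference for the stacking procedure of Section 3.3, the sketch below implements that recipe with NumPy and the SpectRes re-sampling function. The rest-frame grid limits (chosen to lie within the VANDELS coverage at \(1.0<z<1.3\)), the \(1.253\,\sigma/\sqrt{N}\) approximation for the standard error on the median, and the exact SpectRes call are illustrative assumptions rather than the precise implementation used in this work.

```python
import numpy as np
from spectres import spectres  # spectral re-sampling module of Carnall (2017)

def stack_spectra(wavelengths, fluxes, redshifts):
    """Median-stack observed-frame spectra following the recipe of Section 3.3."""
    # Uniform 2.5 A rest-frame grid, chosen to sit within the coverage at 1.0 < z < 1.3.
    grid = np.arange(2500.0, 4250.0, 2.5)
    resampled = []
    for wav, flux, z in zip(wavelengths, fluxes, redshifts):
        rest_wav = wav / (1.0 + z)                  # de-redshift
        flux_grid = spectres(grid, rest_wav, flux)  # re-sample onto the uniform grid
        norm_band = (grid > 3500.0) & (grid < 3700.0)
        resampled.append(flux_grid / np.nanmedian(flux_grid[norm_band]))
    resampled = np.array(resampled)

    stack = np.nanmedian(resampled, axis=0)
    n_spec = np.sum(np.isfinite(resampled), axis=0)
    # Standard error on the median, approximated as 1.253 sigma / sqrt(N).
    stack_err = 1.253 * np.nanstd(resampled, axis=0) / np.sqrt(n_spec)
    return grid, stack, stack_err
```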
Figure 1: The distribution of the final mass-complete sample of 114 quiescent VANDELS galaxies on the _UVJ_ plane, highlighting trends between the rest-frame _UVJ_ colours and (left to right) D\({}_{n}\)4000, mass-weighted age and stellar metallicity. The first two panels show that redder rest-frame _UVJ_ colours correlate with higher D\({}_{n}\)4000 values and older mass-weighted ages, consistent with literature results (e.g., Belli et al., 2019; Carnall et al., 2019). The last panel, colour-coded by metallicity, does not demonstrate any significant trend.

### Trends with rest-frame _UVJ_ colour

In Fig. 1, we show the distribution of the final mass-complete sample of VANDELS quiescent galaxies (see Section 3.2) on the rest-frame _UVJ_ diagram, coloured by mass-weighted age, D\({}_{n}\)4000, and stellar metallicity. In the first panel, we see that the galaxies with redder _U-V_ and _V-J_ colours also tend to have higher mass-weighted ages, consistent with recent literature results (e.g., Belli et al., 2019; Carnall et al., 2019). In the next panel, the trend with D\({}_{n}\)4000 is similar; lighter-coloured points indicate higher D\({}_{n}\)4000 values, which is consistent with the trend seen in other samples at similar redshifts (e.g. Whitaker et al., 2013). The final panel of Fig. 1 shows the sample coloured by metallicity. There is no obvious trend between metallicity and _UVJ_ colour. The individual metallicities we measure are, however, consistent with scattering around the median value of \(\log_{10}(Z_{*}/\mathrm{Z}_{\odot})=-0.13\pm 0.08\) determined from an optical+NIR stack at \(z\sim 1.15\) by Carnall et al. (2022).

### The relationship between stellar mass and age

We present our results for the stellar-mass vs age relation in Fig. 2. We plot redshift of formation, \(z_{\mathrm{form}}\), against stellar mass. The right-hand axis shows the corresponding formation time, \(t_{\mathrm{form}}\), measured forwards from the Big Bang. In this paper, we take \(t_{\mathrm{form}}\) and \(z_{\mathrm{form}}\) to be the age of the Universe and redshift corresponding to the mass-weighted age of the galaxy. We see a clear negative correlation, albeit with considerable scatter. A trend is also visible between \(t_{\mathrm{form}}\) and D\({}_{n}\)4000 in Fig. 2, with galaxies that have earlier formation times exhibiting higher values of D\({}_{n}\)4000, as would be expected. We fit a linear relationship between \(t_{\mathrm{form}}\) and \(\log_{10}(M_{*}/\mathrm{M}_{\odot})\), including an intrinsic scatter term, using the nested sampling Monte Carlo algorithm MLFriends (Buchner, 2016, 2019), as implemented in the UltraNest1 package (Buchner, 2021). We derive a best-fitting relation of:

Footnote 1: [https://johannesbuchner.github.io/UltraNest/](https://johannesbuchner.github.io/UltraNest/)

\[(t_{\mathrm{form}}\ /\ \mathrm{Gyr})=2.85^{+0.08}_{-0.09}-1.20^{+0.28}_{-0.27}\log_{10}(M_{*}/10^{11}\ \mathrm{M}_{\odot}). \tag{1}\]

We also find an intrinsic scatter in \(t_{\mathrm{form}}\) of \(0.51^{+0.09}_{-0.07}\) Gyr. We show the fit to our data in Fig. 2 (purple line), with the shaded region showing the \(1\sigma\) confidence interval. We also show the result derived by Carnall et al. (2019), using the VANDELS DR2 sample of 53 galaxies, which is a subset of our new 114-galaxy final VANDELS sample. The slope of our new relation is in good agreement with this previous result; however, we recover a \(\sim 300\) Myr offset towards younger ages.
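For readers who wish to reproduce a fit of this form, the sketch below sets up a simplified maximum-likelihood version of the linear model with intrinsic scatter. The paper itself samples the posterior of this model with MLFriends/UltraNest; swapping that for a simple optimiser, and the synthetic placeholder data, are our simplifications for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x, y, y_err):
    """Gaussian likelihood for y = a + b * x with intrinsic scatter sigma_int."""
    a, b, log_sigma_int = params
    var = y_err ** 2 + np.exp(2.0 * log_sigma_int)
    resid = y - (a + b * x)
    return 0.5 * np.sum(resid ** 2 / var + np.log(2.0 * np.pi * var))

# Placeholder data: x = log10(M*/1e11 Msun), y = t_form in Gyr, with uncertainties.
rng = np.random.default_rng(42)
x = rng.uniform(-0.8, 0.4, size=114)
y = 2.85 - 1.2 * x + rng.normal(0.0, 0.5, size=114)
y_err = np.full_like(y, 0.3)

result = minimize(neg_log_like, x0=[2.5, -1.0, np.log(0.5)],
                  args=(x, y, y_err), method="Nelder-Mead")
a_fit, b_fit, sig_int = result.x[0], result.x[1], np.exp(result.x[2])
print(f"intercept = {a_fit:.2f} Gyr, slope = {b_fit:.2f} Gyr per dex, "
      f"intrinsic scatter = {sig_int:.2f} Gyr")
```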
To explore the origin of this offset, we re-fit our linear model to the sub-sample of 53 galaxies used by Carnall et al. (2019), obtaining a result consistent with theirs. We therefore conclude that this offset is a result of our expanded VANDELS DR4 statistical sample, which contains more galaxies that have high stellar masses and lower formation redshifts with respect to the DR2 subset.

Figure 2: _Left:_ Stellar mass versus formation redshift for our final mass-complete VANDELS DR4 quiescent galaxy sample. The sample is colour-coded by D\({}_{n}\)4000, demonstrating a clear preference for higher D\({}_{n}\)4000 values at earlier formation times. The relationship we fit in Section 4.2 is shown in purple, with the \(1\sigma\) confidence interval shaded. The relation derived for a smaller sample from VANDELS DR2 by Carnall et al. (2019) is shown in blue. _Right:_ Median formation redshifts for our sample in 0.3-dex bins. The error bars are the standard errors for \(t_{\mathrm{form}}\ /\ \mathrm{Gyr}\) and \(\log_{10}(M_{*}/\mathrm{M}_{\odot})\) in each stellar-mass bin. A clear negative correlation is observed. The final bin contains only ten objects above \(\log_{10}(M_{*}/\mathrm{M}_{\odot})>11.2\), meaning that the apparent flattening of the relationship is challenging to assess.

We also explore the median relationship between stellar mass and age in our sample by binning our galaxies into equal-width stellar-mass bins of 0.3 dex. This is shown in the right-hand panel of Fig. 2, where the relationship is clear up to stellar masses of \(\log_{10}(M_{\star}/\mathrm{M}_{\odot})\simeq 11.2\). We discuss this relationship in more detail in Section 5.1, making comparisons to relevant literature, which are shown in Fig. 3.

### The oldest galaxy at \(z\simeq 1\)

From inspection of Fig. 2, it is clear that there is a single galaxy (ID: 111129) which falls significantly below the main stellar mass vs age distribution, with a formation time of \(t_{\mathrm{form}}=0.75^{+0.41}_{-0.29}\) Gyr (\(z_{\mathrm{form}}=7.02^{+3.06}_{-2.07}\)) and a quenching time (\(t_{\mathrm{quench}}\), defined as the age of the Universe at which the normalised star-formation rate, nSFR, as defined in Carnall et al. 2018, first falls below 0.1) of \(t_{\mathrm{quench}}=1.94^{+0.86}_{-0.67}\) Gyr (\(z_{\mathrm{quench}}=3.23^{+1.41}_{-0.93}\)), both measured from the Big Bang. Given recent reports, based on the first data from JWST, of the assembly of significant numbers of massive galaxies during the first billion years (e.g., Labbe et al., 2022), and their subsequent quenching during the second billion years (e.g., Carnall et al., 2022), this is clearly an object of significant interest.

## 5 Discussion

In Section 4, we report the relationship between stellar mass and age from full spectral fitting of our mass-complete VANDELS quiescent sample. In this section, we discuss our results, focusing on relationships between age, _UVJ_ position, \(\mathrm{D}_{\mathrm{n}}4000\) and metallicity evident within our sample.

### The formation times of quiescent galaxies

Over the past decade, extensive research has been conducted into the star-formation histories and ages of quiescent galaxies. This has revealed a sub-population of extremely old galaxies, which formed very early in cosmic history (e.g., Glazebrook et al., 2017; Schreiber et al., 2018; Valentino et al., 2020). These galaxies tend to have higher stellar masses and more compact morphologies than is typical for the quiescent population.
In order to constrain the build-up of the quiescent population across cosmic time, and reveal the fate of these oldest, most extreme systems, detailed knowledge of the stellar mass vs stellar age relationship as a function of observed redshift is required. The stellar mass vs stellar age relation presented in Section 4.2 is based on the robust, mass-complete VANDELS spectroscopic sample. In this section, we compare these results with similar studies in the literature, across a broad redshift range. Figure 3: Stellar mass versus formation redshift for massive quiescent galaxies, taken from a range of studies across a wide range in observed redshift. Our sample of VANDELS DR4 quiescent galaxies are shown as circles, and our best-fit line is over-plotted. For some studies that do not report best-fitting relationships between stellar mass and age, we have fitted their results for individual galaxies using the methodology described in Section 4.2. These studies broadly agree on the slope of the relationship, which is found to be consistent at \(\simeq\) 1.5 Gyr per decade in mass across cosmic history. The normalisation of these relationships does not follow the expected smooth evolution with observed redshift, likely due to methodological differences (see Section 5.1.2). Our results are placed into the context of recent literature in Fig. 3, which shows results derived by Gallazzi et al. (2005, 2014); Choi et al. (2014); Onodera et al. (2015); Schreiber et al. (2018); Belli et al. (2019); Carnall et al. (2019); Merlin et al. (2019); Estrada-Carpenter et al. (2020) and Tacchella et al. (2022), with the stellar mass vs age relations derived in various observed redshift ranges over-plotted. For several data-sets shown in the figure, no average relationship between stellar mass and age is calculated by the authors. In these cases, we perform a fit to the individual galaxy masses and ages, using the same method outlined in Section 4.2. #### 5.1.1 Slopes of the observed relationships The slope of the stellar mass vs age relationship is intimately connected to the physics of quenching in massive galaxies. We derive a slope for the VANDELS DR4 sample of \(1.20^{+0.28}_{-0.27}\) Gyr per decade in mass. As can be seen from Fig. 3, this is in good agreement with the other literature relationships shown. At the highest redshifts, the sample of Schreiber et al. (2018) at \(3.0<z<4.0\) displays a slope consistent with our result at \(z\simeq 1.1\) to within \(2\sigma\). In the local Universe, the results of Gallazzi et al. (2005) also display a very similar slope. This suggests the slope of the stellar mass vs age relationship for massive quiescent galaxies remains broadly constant across cosmic history. As can be seen from Fig. 3, the results of Belli et al. (2019), who study a sample of 23 massive quiescent galaxies at \(1.5<z<2.5\) using data from the Keck-MOSFIRE spectrograph, suggest a steeper relation between stellar mass and age. We perform a fit to their galaxies on the stellar mass-age plane, finding a slope of \(1.73^{+0.40}_{-0.40}\) Gyr per decade in mass. Whilst this is a steeper slope than our result, it is not strongly in tension, owing to the relatively small samples involved. In Carnall et al. (2019) and Tacchella et al. (2022), the authors compare the observed stellar mass vs age relationship with the predictions of cosmological simulations. Carnall et al. 
(2019) derive this relationship from snapshots of the 100 \(h^{-1}\) Mpc box runs of Simba (Dave et al., 2019) and IllustrisTNG (Nelson et al., 2018) at \(z=0.1\) and \(z=1.0\). They find that these simulations predict slopes of \(\simeq\)1.5 Gyr per decade in mass in the local Universe, but much shallower slopes at \(z\sim 1\), with Tacchella et al. (2022) reporting similar findings for IllustrisTNG at \(z\sim 0.7\). Our results are consistent with the predicted slopes of \(\sim\)1.5 Gyr per decade in mass from these two simulations in the local Universe (\(z\sim 0.1\)). However, our results again suggest that simulations should seek to reproduce the same, steeper stellar mass vs age relationship for massive quiescent galaxies throughout cosmic history.

#### 5.1.2 Normalisations of the observed relationships

The redshift evolution of the average age of quiescent galaxies at fixed stellar mass is influenced primarily by the quenching of new galaxies that join the quiescent population over time (e.g. McLeod et al., 2021). This effect is sometimes known as progenitor bias. The expected evolution of the relationships shown in Fig. 3 due to progenitor bias would be a steady increase in normalisation from high to low redshift. The rate of this decrease in formation redshift with decreasing observed redshift is also highly sensitive to the effects of merger and rejuvenation events. Unfortunately, this idealised smooth upward evolution with decreasing redshift is not observed in Fig. 3. Whilst studies of galaxy samples at the highest observed redshifts typically return the highest formation redshifts, and the local-Universe study of Gallazzi et al. (2005) returns the lowest, there is confusion between these extremes. We report later average formation times than all studies targeting higher-redshift galaxy samples; however, both Gallazzi et al. (2014) and Tacchella et al. (2022) also report earlier average formation times than our study, despite analysing samples at lower observed redshifts (\(z\simeq 0.7\), as opposed to \(z\simeq 1.1\) for our sample). As has been discussed in several recent works (Tacchella et al., 2022; Carnall et al., 2022), these differences are likely the result of methodological differences between studies. The two most important of these are different definitions of age (mass-weighted vs light-weighted), and differences between parametric and non-parametric SFH models, the latter of which typically return older stellar ages (Carnall et al., 2019; Leja et al., 2019a,b). For this reason, a clear understanding of the redshift evolution of the normalisation of this relationship requires a study applying the same methodology to observed samples at a wide range of redshifts.

Figure 4: Stellar mass versus formation redshift and quenching redshift (in the top and bottom panels, respectively) for our final sample of VANDELS quiescent galaxies. The galaxies are colour-coded by D\({}_{n}\)4000, highlighting that higher values of D\({}_{n}\)4000 are observed for galaxies with earlier formation times.
Only two galaxies in our sample have quenching redshifts \(z_{\rm quench}>3\). These objects are of particular interest, given that current simulations seem to under-predict the numbers of galaxies that quenched at these very early times (see Schreiber et al., 2018; Cecchi et al., 2019; Tacchella et al., 2022), possibly due to an additional mechanism capable of causing a rapid early shutdown of star formation, not yet included in simulations. We calculate the number density of galaxies in our VANDELS sample that have \(z_{\rm quench}>3\), recovering a value of \(1.12^{+1.47}_{-0.72}\times 10^{-5}\) Mpc\({}^{-3}\) (with Poisson uncertainties calculated using the confidence intervals presented in Gehrels, 1986). This is consistent with the results of Schreiber et al. (2018), who calculate a number density for quiescent galaxies observed at \(3<z<4\) of \((1.4\pm 0.3)\times 10^{-5}\) Mpc\({}^{-3}\). This preliminary agreement is encouraging, though our sample of such old quiescent galaxies is very small. Our result is consistent with neither rejuvenation or mergers having a significant impact on this population from \(1<z<3\). For example, if we had found no quiescent galaxies with \(z_{\rm quench}>3\) in our sample, this would suggest that the majority of \(z>3\) quiescent galaxies experience mergers and/or rejuvenation by \(z\simeq 1\). However, much larger samples will be necessary to conduct detailed comparisons of this nature. ### The relationship between colour and age Next we consider the relationship between galaxy stellar age and position on the _UVJ_ diagram, performing a stacking analysis of our sample using two bins in rest-frame colour. Following the approach of Whitaker et al. (2013), we divide the sample into two bins on the _UVJ_ diagram, separated using the criteria: \[(U-V)=-1.14\times(V-J)+3.10, \tag{2}\] as illustrated in the inset panel of Fig. 5 by the dot-dashed line. In order to minimise the impact of the correlation between stellar mass and age, we additionally restrict both _UVJ_ bins to only include galaxies with \(\log_{10}(M_{*}/{\rm M}_{\odot})\geq 10.6\). Adopting these criteria produces bins containing similar numbers of objects and comparable median stellar masses; \(\log_{10}(M_{\rm med}/{\rm M}_{\odot})=10.86\pm 0.03\) and \(11.00\pm 0.04\) for the blue and red _UVJ_ bins, respectively. In the main panel of Fig. 5 we show stacked spectra constructed from the objects in each _UVJ_ bin and in Table 2 we present \({\rm D_{n}4000}\) values calculated from the stacked spectra, together with the median \({\rm D_{n}4000}\) values and stellar ages of the objects in each bin. It is clear from these results that galaxies within the red _UVJ_ bin display larger \({\rm D_{n}4000}\) values and older stellar ages than their counterparts within the blue _UVJ_ bin. This is consistent with the expected correlation between age and _UVJ_ colour (e.g., Whitaker et al., 2013; Belli et al., 2019), and with the trends observed in the centre panel _UVJ_ diagram in Fig. 1, which is colour-coded by mass-weighted age from the Bagpipes fits. This stacking experiment independently confirms and quantifies the age-colour trend, given the offset in median \({\rm D_{n}4000}\) values calculated for the red and blue _UVJ_ colour bins. We note that Whitaker et al. (2013) find ages of \(0.9^{+0.2}_{-0.1}\) Gyr and \(1.6^{+0.5}_{-0.4}\) Gyr for their blue and red _UVJ_ sub-samples, based on stacked grism spectra of quiescent galaxies at \(1.4<z<2.2\). 
Within the large uncertainties, this difference of \(0.7\pm 0.5\) Gyr is fully consistent with the difference of \(0.4\pm 0.2\) Gyr we find from our analysis.

\begin{table} \begin{tabular}{c c c c c} \hline _UVJ_ position & \(N\) & \({\rm D_{n}4000_{stack}}\) & \({\rm D_{n}4000_{med}}\) & age / Gyr \\ \hline Blue _UVJ_ & 45 & \(1.57\pm 0.01\) & \(1.58\pm 0.03\) & \(2.45\pm 0.14\) \\ Red _UVJ_ & 47 & \(1.66\pm 0.02\) & \(1.63\pm 0.04\) & \(2.80\pm 0.12\) \\ \hline \end{tabular} \end{table} Table 2: The \({\rm D_{n}4000}\) values calculated from the stacked spectra shown in Fig. 5. We also report median values in each of the two bins for \({\rm D_{n}4000}\) and mass-weighted age, where \(N\) represents the number of galaxies in each bin.

### Quenching timescales

As discussed in the introduction, recent studies of the star-formation histories of quiescent galaxies point to the existence of multiple quenching channels (e.g., Belli et al., 2019; Carnall et al., 2019; Tacchella et al., 2022). In general, the star-formation histories we derive for the VANDELS quiescent sample are consistent with this picture, displaying a range of formation and quenching times. However, to investigate this issue in more detail it is interesting to define a quenching timescale parameter, \(\Delta t_{\rm quench}\). In this work, \(t_{\rm quench}\) is defined as the age of the Universe at which the normalised star-formation rate (nSFR; see Carnall et al., 2018) falls below 0.1, corresponding to the time after the Big Bang at which a galaxy is labelled as quiescent by our selection criteria. Therefore, the quenching timescale is naturally defined as:

\[\Delta t_{\rm quench}=t(z_{\rm quench})-t(z_{\rm form}). \tag{3}\]

In Fig. 6, we plot quenching timescale versus stellar mass for the VANDELS quiescent galaxies. It is clear that within our sample there is not a significant correlation between quenching timescale and stellar mass, with some of the highest-mass galaxies having quenching timescales of \(2-3\) Gyr, while others quench in significantly less than 1 Gyr. The mean quenching timescale for our full sample is \(\Delta t_{\rm quench}=1.4\pm 0.1\) Gyr. Although the large scatter observed within our VANDELS sample in Fig. 6 may be a result of intrinsic galaxy-to-galaxy variations, it is still important to note that properties such as star-formation histories, ages and quenching times can be affected by the fitting method (see e.g. Pforr et al., 2012; Carnall et al., 2019; Leja et al., 2019). Throughout this paper we have used a double-power-law parametric SFH, which is more flexible than other parametric SFHs (as it allows independent rising and falling phases) and is also a better estimator of stellar masses and ages (Carnall et al., 2018). Although parametric SFHs (such as the double power law) appear to describe the majority of simulated galaxy populations well, they are not without fault; for example, parametric SFHs fail to model sharp transitions in SFR as accurately as non-parametric SFHs (Leja et al., 2019). For non-parametric models, SFRs and ages have been shown to be more dependent on the priors used than, for example, stellar masses (Leja et al., 2019); however, the errors on these parameters are larger and thus more realistic than the smaller errors produced by parametric SFHs. In Fig. 6, the quenching timescales of our galaxies appear to be well constrained; however, these error bars may not be truly representative of the uncertainty on this parameter due to these SFH modelling effects.
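To make the definitions of \(t_{\rm form}\), \(t_{\rm quench}\) and \(\Delta t_{\rm quench}\) concrete, the sketch below evaluates a double-power-law SFH on a time grid and extracts the three quantities. The functional form is the standard double power law used by Bagpipes; the reading of nSFR as the SFR divided by the mean SFR over the history up to that time is our interpretation of the Carnall et al. (2018) definition and should be checked against that paper, and the parameter values are arbitrary illustrations rather than fits to any VANDELS galaxy.

```python
import numpy as np

def dpl_sfr(t, tau, alpha, beta):
    """Double-power-law SFH: SFR(t) proportional to [(t/tau)**alpha + (t/tau)**(-beta)]**-1."""
    return 1.0 / ((t / tau) ** alpha + (t / tau) ** (-beta))

def quenching_timescale(t_obs, tau, alpha, beta, nsfr_limit=0.1, n_grid=100_000):
    t = np.linspace(1e-4, t_obs, n_grid)          # time since the Big Bang in Gyr
    sfr = dpl_sfr(t, tau, alpha, beta)
    dt = t[1] - t[0]

    t_form = np.sum(t * sfr) / np.sum(sfr)        # mass-weighted formation time

    # nSFR(t): SFR(t) relative to the mean SFR over the history up to t
    # (assumed reading of the Carnall et al. 2018 definition).
    mass_formed = np.cumsum(sfr) * dt
    nsfr = sfr / (mass_formed / t)
    below = np.where(nsfr < nsfr_limit)[0]
    t_quench = t[below[0]] if below.size else np.nan

    return t_form, t_quench, t_quench - t_form

# Illustrative parameters: peak time tau = 1.5 Gyr, observed at t_obs = 5.2 Gyr (z ~ 1.15).
t_form, t_quench, dt_quench = quenching_timescale(t_obs=5.2, tau=1.5, alpha=10.0, beta=30.0)
print(f"t_form = {t_form:.2f} Gyr, t_quench = {t_quench:.2f} Gyr, "
      f"Delta t_quench = {dt_quench:.2f} Gyr")
```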
In future work, we will extend our methodology to extract quenching timescales from quiescent galaxy samples using non-parametric SFHs, in order to explore the effects of different approaches to modelling star-formation histories. This will allow for a more quantitative understanding of the mechanisms by which these galaxies have quenched, and give further insight into their evolution since quenching.

Figure 5: The stacked spectra of our final sample of galaxies (for galaxies with \(\log_{10}(M_{*}/\mathrm{M}_{\odot})\geq 10.6\)) in two bins based on their position on the _UVJ_ diagram. The galaxies are split into blue _UVJ_ and red _UVJ_ populations (see inset) following a method similar to the one adopted by Whitaker et al. (2013). As detailed in the main text, we see an increase in \(\mathrm{D}_{n}4000\) and median mass-weighted age with increasing _UVJ_ colour (from blue to red).

Figure 6: Quenching timescale (\(\Delta t_{\mathrm{quench}}\)) versus stellar mass for the full VANDELS quiescent galaxy sample. The value of the \(\mathrm{D}_{n}4000\) index is shown by the colour bar.

## 6 Conclusions

In this paper, we have explored the relationships between stellar mass, age, star-formation history and quenching timescales for a robust spectroscopic sample of quiescent galaxies at \(1.0<z<1.3\). Our main results and conclusions can be summarised as follows:

1. We derive significantly improved constraints on the relationship between stellar population age and stellar mass for quiescent galaxies at \(z\simeq 1.1\). From our full VANDELS sample we derive an age-mass relation which has a slope of \(1.20^{+0.28}_{-0.27}\) Gyr per decade in stellar mass.

2. Comparing to previous studies in the literature, we find good agreement on the slope of the age-mass relation for quiescent galaxies from the local Universe out to \(z\simeq 4\). The observed slope is in good agreement with the prediction from simulations at \(z\simeq 0\), but significantly steeper than simulations predict at \(z\geq 1\).

3. The results of our spectro-photometric fitting predict that the number density of already quenched galaxies at \(z\geq 3\) with stellar masses \(\log_{10}(M_{\star}/{\rm M}_{\odot})\geq 10.6\) is \(1.12^{+1.47}_{-0.72}\times 10^{-5}\) Mpc\({}^{-3}\). Although subject to large uncertainties due to small-number statistics, this estimate is in good agreement with the latest measurements at \(3<z<4\). The implication is that rejuvenation or merger events are not playing a major role in modulating the number density of the oldest massive quiescent galaxies within the redshift interval \(1<z<3\), although they cannot be ruled out entirely.

4. We confirm previously reported results that quiescent galaxies with redder _UVJ_ colours are systematically older than their bluer counterparts, finding an offset of \(0.4\pm 0.2\) Gyr in the median age of mass-matched samples.

5. The VANDELS sample of \(z\simeq 1.1\) quiescent galaxies displays a wide range of formation and quenching redshifts. We find that the mean quenching timescale is \(1.4\pm 0.1\) Gyr, where \(\Delta t_{\rm quench}=t(z_{\rm quench})-t(z_{\rm form})\). The oldest galaxy within the VANDELS sample (ID: 111129) has \(z_{\rm form}=7.02^{+3.06}_{-2.07}\) and \(z_{\rm quench}=3.23^{+1.41}_{-0.93}\).
Future studies using data from surveys such as PRIMER (Dunlop et al., 2021) and the JWST Advanced Deep Extragalactic Survey (JADES), as well as near-infrared ground-based spectroscopy from the MOONS spectrograph on the VLT will provide higher SNR and larger samples of quiescent galaxies out to \(z\simeq 2.5\). Combining these data with more sophisticated galaxy fitting methods (e.g. non-parametric SFHs) will enable a better understanding of quiescent galaxy properties and quenching mechanisms out to higher redshift. ## Acknowledgements The authors thank the referee for useful comments which helped improve the quality of this manuscript. M. L. Hamadouche, R. Begley, and C. T. Donnan acknowledge the support of the UK Science and Technology Facilities Council. A. C. Carnall acknowledges the support of the Leverhulme Trust. F. Cullen and T. M. Stanton acknowledge support from a UKRI Frontier Research Guarantee Grant (PI Cullen; grant reference EP/X021025/1). Based on observations made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID(s) 194.A-2003(E-Q) (The VANDELS ESO Public Spectroscopic Survey). Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018). ## Data Availability The VANDELS survey is a European Southern Observatory Public Spectroscopic Survey. The full spectroscopic dataset, together with the complementary photometric information and derived quantities are available from [http://vandels.inaf.it](http://vandels.inaf.it), as well as from the ESO archive [https://www.eso.org/qi/](https://www.eso.org/qi/).
2302.11752
EVJVQA Challenge: Multilingual Visual Question Answering
Visual Question Answering (VQA) is a challenging task of natural language processing (NLP) and computer vision (CV), attracting significant attention from researchers. English is a resource-rich language that has witnessed various developments in datasets and models for visual question answering. Visual question answering in other languages also would be developed for resources and models. In addition, there is no multilingual dataset targeting the visual content of a particular country with its own objects and cultural characteristics. To address the weakness, we provide the research community with a benchmark dataset named EVJVQA, including 33,000+ pairs of question-answer over three languages: Vietnamese, English, and Japanese, on approximately 5,000 images taken from Vietnam for evaluating multilingual VQA systems or models. EVJVQA is used as a benchmark dataset for the challenge of multilingual visual question answering at the 9th Workshop on Vietnamese Language and Speech Processing (VLSP 2022). This task attracted 62 participant teams from various universities and organizations. In this article, we present details of the organization of the challenge, an overview of the methods employed by shared-task participants, and the results. The highest performances are 0.4392 in F1-score and 0.4009 in BLUE on the private test set. The multilingual QA systems proposed by the top 2 teams use ViT for the pre-trained vision model and mT5 for the pre-trained language model, a powerful pre-trained language model based on the transformer architecture. EVJVQA is a challenging dataset that motivates NLP and CV researchers to further explore the multilingual models or systems for visual question answering systems. We released the challenge on the Codalab evaluation system for further research.
Ngan Luu-Thuy Nguyen, Nghia Hieu Nguyen, Duong T. D Vo, Khanh Quoc Tran, Kiet Van Nguyen
2023-02-23T02:38:39Z
http://arxiv.org/abs/2302.11752v5
# VLSP 2022 - EVJVQA Challenge: Multilingual Visual Question Answering

###### Abstract

Visual Question Answering (VQA) is a challenging task of natural language processing (NLP) and computer vision (CV), attracting significant attention from researchers. English is a resource-rich language that has witnessed various developments in datasets and models for visual question answering. Visual question answering in other languages also needs to be developed, in terms of both resources and models. In addition, there is no multilingual dataset targeting the visual content of a particular country with its own objects and cultural characteristics. To address this weakness, we provide the research community with a benchmark dataset named EVJVQA, including 33,000+ question-answer pairs in three languages: Vietnamese, English, and Japanese, over approximately 5,000 images taken in Vietnam, for evaluating multilingual VQA systems or models. EVJVQA is used as a benchmark dataset for the challenge of multilingual visual question answering at the 9th Workshop on Vietnamese Language and Speech Processing (VLSP 2022). This task attracted 62 participating teams from various universities and organizations. In this article, we present details of the organization of the challenge, an overview of the methods employed by the shared-task participants, and the results. The highest performances are 0.4392 in F1-score and 0.4009 in BLEU on the private test set. The multilingual QA systems proposed by the top two teams use ViT as the pre-trained vision model and mT5, a powerful pre-trained language model based on the transformer architecture, as the pre-trained language model. EVJVQA is a challenging dataset that motivates NLP and CV researchers to further explore multilingual models or systems for visual question answering. We released the challenge on the Codalab evaluation system for further research.

## 1 Introduction

Visual or image-based question answering is a challenging task that requires knowledge of two active AI fields: natural language processing and computer vision. Specifically, querying the information in images through natural-language questions is a friendly and natural approach to searching for information, meeting the needs of users in many domains such as daily life, education, and work. However, studies have mainly focused on resource-rich languages such as English. In this challenge, we aim to extend visual question answering to more languages, covering both resource-rich and low-resource languages, in terms of both resources and models. English has witnessed numerous benchmarks for evaluating and developing visual question answering models or systems. Recently, researchers have designed datasets with goal-oriented evaluations. Firstly, VQA datasets [1, 16, 17, 18] were created on general images. Soon after, a more complex dataset based on reasoning was introduced by [10]. In addition, VQA [1] is also geared towards supporting applications for visually impaired and blind people. [19] showed that VQA can also require reading text in photos. More challengingly, some VQA tasks require external, commonsense, or world knowledge to predict correct answers [13, 14]. Besides, Changpinyo et al. [3] proposed a multilingual dataset for visual question answering in 13 languages. However, this dataset was created automatically using an auto-translation and verification method.
In this paper, we present a human-generated multilingual dataset covering three languages: English (a resource-rich language), Japanese, and Vietnamese (a resource-poor language). This paper makes three main contributions, described as follows. 1. Firstly, we constructed UIT-EVJVQA, a multilingual dataset for evaluating visual question answering systems or models, which comprises 33,790 question-answer pairs in three languages: English, Vietnamese, and Japanese. 2. We organized the VLSP 2022 - EVJVQA Challenge for evaluating multilingual VQA models (Vietnamese, English, and Japanese) at VLSP 2022. Our baseline system obtains 0.3346 in F1-score and 0.2275 in BLEU on the private test set, and no model of the participating teams surpasses 0.44 in F1-score on the private test set, which indicates that our dataset is challenging and requires the development of dedicated multilingual VQA models. 3. When combined with other VQA datasets for analysis, UIT-EVJVQA could potentially be a useful resource for multilingual research. The rest of the article is organized as follows. In Section 2, we provide a brief overview of the background and relevant studies. We introduce the VLSP 2022 - EVJVQA Challenge in Section 3. Our new dataset (UIT-EVJVQA) is presented in detail in Section 4. Section 5 presents the systems and results proposed by participating teams. In Section 6, we provide further analysis of the challenge results. Finally, Section 7 summarizes the findings of the VLSP 2022 - EVJVQA Challenge and suggests future research directions. ## 2 Background and Related Work Visual Question Answering (VQA) is a challenging task that has significant value not only for the research community but also in daily life. The VQA task was first introduced by (Antol et al., 2015). The authors succeeded in creating a novel dataset and fundamental English methodologies. Inspired by that success, various further studies have been created and implemented in a variety of languages (Gupta, 2017), including Chinese (Qi et al., 2022), Japanese (Shimizu et al., 2018), and Vietnamese (Tran et al., 2021). VQA has gained more attention from researchers in recent years and has shown significant growth. Studies are currently being introduced not only in monolingual but also in multilingual settings (Pfeiffer et al., 2021; Khan et al., 2021; Liu et al., 2022; Nooralahzadeh and Sennrich, 2022). This development contributes significantly to the creation of multilingual VQA (mVQA) systems. Typical research works in this direction include (Gupta et al., 2020), whose proposed model is capable of predicting responses to questions in Hindi, English, or code-mixed (Hinglish: Hindi-English) language; Changpinyo et al. (Changpinyo et al., 2022), with a translation-based mVQA dataset in 7 distinct languages; and Gao et al. (Gao et al., 2015), who construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset that contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. In this study, we create the first dataset for the task of mVQA on English-Vietnamese-Japanese (EVJVQA). The EVJVQA dataset is expected to open up new research areas and aid in evaluating multilingual VQA models. ## 3 The VLSP 2022 - EVJVQA Challenge ### Task Definition This task aims to enable computers to understand images and answer users' relevant questions in different languages.
The task is defined as below (Figure 1): * **Input**: Given an image and a question that is answerable from it. * **Output**: An answer, which can be a span related to the image's content. **Language Selection**. Three languages are selected: Vietnamese is the main language, and English and Japanese are two other popular languages, for pictures taken in Vietnam. ### Evaluation Metrics In this challenge, we use two evaluation metrics: F1 and BLEU. Based on (Rajpurkar et al., 2016), the F1-score of each answer is calculated from the tokens of the gold answer (GA) and the tokens of the predicted answer (PA). The overall F1 is averaged across all questions of each set. For Vietnamese and English, we calculate F1 based on tokens, whereas for Japanese, F1 is calculated based on characters (an illustrative sketch of this computation is given below). \[Precision(P)=\frac{|GA\cap PA|}{|PA|}\] \[Recall(R)=\frac{|GA\cap PA|}{|GA|}\] \[F1=\frac{2PR}{P+R}\] Inspired by [10], the **B**ilingual **E**valuation **U**nderstudy (BLEU), a popular evaluation metric in machine translation, computes the n-gram co-occurrence between human-generated answers and system-generated answers. The best performances were estimated by the average of the BLEU-based scores (BLEU-1, BLEU-2, BLEU-3, and BLEU-4) on the public test and private test sets. Both evaluation metrics ignore punctuation. ### Schedule and Overview Summary Table 1 shows important dates of the VLSP 2022 - EVJVQA Challenge. It lasted for two months, during which the participating teams spent 39 days developing their multilingual visual question answering systems. Besides, Table 2 presents an overview of the teams who joined the VLSP 2022 - EVJVQA Challenge. ## 4 Corpus Creation A previous work [20] inherited assets from the well-known VQA benchmark in English and from COCO-QA [14], and then proposed a semi-automatic annotation system that uses machine translation to translate question-answer pairs from English to Vietnamese. On the other hand, we argue that the context in images captured in Vietnam is more complicated than in images coming from VQA benchmarks in English because of crowded scenes and "out-of-common" objects, in particular, objects that are not commonly seen outside of Vietnam. Moreover, a machine translation system such as that of [20] can hardly ensure natural language use, which causes confusion when evaluating VQA methods in Vietnamese. To overcome the above flaws and to research and develop a VQA system particularly for Vietnamese, we constructed a novel dataset with images collected manually and relevant to daily life in Vietnam. \begin{table} \begin{tabular}{l l} \hline \hline **Time** & **Phase** \\ \hline October 1st & Trial Data \\ October 5th & Public test \\ November 10th & Private test \\ November 12th & Competition end \\ \hline November 20th & Submission deadline \\ November 23rd & Notification of acceptance \\ November 25th & Camera-ready due \\ \hline \hline \end{tabular} \end{table} Table 1: Schedule of the VLSP 2022 - EVJVQA Challenge. \begin{table} \begin{tabular}{l r} \hline \hline **Metric** & **Value** \\ \hline \#Registration Teams & 62 \\ \#Joined Teams & 57 \\ \#Signed Data Agreements & 36 \\ \#Submitted Teams & 8 \\ \#Paper Submissions & 5 \\ \hline \hline \end{tabular} \end{table} Table 2: Participation summary of the VLSP 2022 - EVJVQA Challenge. Figure 1: Overview of the multilingual visual question answering task.
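Returning to the evaluation metrics above, the following is a minimal sketch of the token-overlap F1; it is not the official evaluation script, and details such as counting repeated tokens with multiplicity, lowercasing, and punctuation stripping are assumptions here.

```python
from collections import Counter

def token_f1(gold: str, pred: str, char_level: bool = False) -> float:
    """Token-overlap F1 between a gold answer (GA) and a predicted answer (PA).

    Tokens are whitespace-separated words; for Japanese, characters are used
    instead (char_level=True). Overlap is counted with multiplicity."""
    gold_tokens = list(gold) if char_level else gold.split()
    pred_tokens = list(pred) if char_level else pred.split()
    if not gold_tokens or not pred_tokens:
        return 0.0
    overlap = sum((Counter(gold_tokens) & Counter(pred_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# The overall F1 is the average over all questions of a test set.
pairs = [("two women are selling fruit", "two women are selling fruit at the market")]
print(round(sum(token_f1(g, p) for g, p in pairs) / len(pairs), 3))  # 0.769
```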
In addition, to challenge the research community, we provide _multilingual_ question-answer pairs in our dataset to encourage the community to explore and propose effective systems that can answer questions written in various languages. ### Image collection To build a VQA dataset in the Vietnamese context, we search for images with a diverse and open-domain set of keywords. We first selectively prepare various keywords which are relevant to Vietnamese locations and daily-life activities, or which result in images that specifically contain targeted objects in Vietnamese scenes. For instance, the keywords can be Vietnamese streets, markets, sidewalk eateries, cultural sites, human outdoor activities, means of transport, or house interiors. Some keywords are appended with Vietnamese location names like Hanoi or Saigon for more variation in geographical and cultural context. We then use these keywords to scrape images from Google Images. The image-scraping process is facilitated by the Octoparse tool. After collecting the images, we proceed with the filtering stage. The images originally came in various sizes, but we must ensure the details in them are clearly visible. Therefore, we only keep images with widths greater than 500 pixels and heights greater than 400 pixels. We also filter out GIF files and all file formats other than JPEG and PNG. As a result, we obtain 4,909 images, with widths ranging from 500 to 6,000 pixels and heights from 400 to 4,032 pixels. ### Questions and answers creation process The questions and answers (QAs) of the UIT-EVJVQA dataset are first created in Vietnamese for the whole set of images. Afterward, these QAs are translated into English and Japanese. Both stages are conducted by crowd workers. QAs in all three languages are then merged according to the images and eventually constitute the final corpus. The overall pipeline of the aforementioned process is visualized in Figure 2. Figure 2: Overall pipeline process for creating the UIT-EVJVQA dataset. #### 4.2.1 Base Vietnamese QAs creation We first employ five crowd workers for the Vietnamese question and answer creation stage. For each image, the workers are asked to formulate 3-5 question-answer pairs based on the **details** and objects that appear in the visible scene. The workers are required to conform to the following guideline: * Encourage the use of phrases or full sentences to give the answers. * Restrict the use of single words as answers. * No selective questions or yes/no questions are allowed. * Numbers must be typed in alphabetic characters rather than numeric characters, and they must not be greater than 10. * For colors, only use the provided colors: black, white, red, orange, yellow, green, blue, pink, purple, brown, and grey. If these colors cannot exactly describe the true color of the object, ignore the color property in the sentence. * When mentioning a direction, if the direction word is followed by an object, the direction is defined relative to that object; otherwise, the direction is defined from the annotator's perspective. Eventually, this stage yields 11,689 pairs of questions and answers in Vietnamese for our dataset. #### 4.2.2 English and Japanese QAs human-translation All the Vietnamese QAs are then passed through the human-translation stages. These stages require qualified crowd workers for the translation of questions and answers.
For English translation, the workers must have at least an IELTS certification with an overall band score of 6.5. Meanwhile, translators for the Japanese translation task must have achieved an N3 proficiency level or above. Overall, there were seven and nine translators working simultaneously on the English and Japanese translation processes, respectively. The English and Japanese translation stages also have their corresponding guidelines. **English translation guideline**: The English QAs are translated from the Vietnamese ones, with as many entities and attributes in the sentences retained during the translation as possible. The translators are encouraged to use phrases or full sentences as the translated answers. Apostrophes in sentences are restricted; thus, translators should use the uncontracted form or other valid grammatical formulations. **Japanese translation guideline**: The Japanese QAs are translated from the Vietnamese ones, with as many entities and attributes in the sentences retained during the translation as possible. For transcribing Vietnamese proper nouns or other foreign words, the katakana syllabary is adopted. The polite form is used for translated questions and answers that contain verbs. In the case of complex Vietnamese questions and answers with multiple pieces of relevant information that cannot easily be translated into continuous Japanese sentences, translators can use commas to split the sentences into smaller parts and then translate them subsequently. For example, the question "What products does the woman wearing a helmet go to the store to buy?" can be translated into Japanese by first splitting it into smaller clauses and then translating each clause in turn.
### 4.4 Dataset statistics To analyze the dataset, we count the tokens in the questions and answers of each language (an illustrative sketch of this per-language tokenization is given at the end of this subsection). For Vietnamese QAs, we perform word segmentation with the tool of (Vu et al., 2018), as in Vietnamese a word may have more than one token (for instance, "cửa hàng tạp hóa" is formed from the two Vietnamese words "cửa hàng" and "tạp hóa", each of which is in turn formed from more than one token). For English QAs, we obtain tokens by splitting sentences on whitespace. For Japanese QAs, similar to Vietnamese, words are formed from one or more characters, hence each Japanese word may consist of more than one character. We use the janome1 library to perform word segmentation on Japanese QAs. Footnote 1: [https://github.com/mocobeta/janome](https://github.com/mocobeta/janome) As Table 4 indicates, to express the same meaning, Japanese in general uses more words than Vietnamese and English. This implies another challenge for VQA methods when tackling Japanese text, besides the multilingual complexity of our dataset. Moreover, Vietnamese and English have similar distributions of question length (according to Table 4), while English has shorter answers than Vietnamese. Answers in the three languages share the characteristic that the most frequent answer length is two. This indicates that humans, when giving answers, prefer short statements, and this behavior leads to the classification approach on the VQA dataset (Teney et al., 2018). However, such short answers are not always the case in our dataset, as the context of images taken in Vietnam is complicated because of crowded scenes, traffic jams, or street stalls, and short answers are not enough to answer questions enquiring about complex scenes. Moreover, we aim to emphasize the language aspect of the VQA task: we want to guide the community to research and propose systems that can give answers flexibly and naturally as a human does, not by "selecting" answers from a predefined set as most approaches on the VQA dataset do (Antol et al., 2015; Teney et al., 2018). To this end, the answers given in our dataset are diverse in length and complicated in terms of level (word, phrase, or sentence level). Interestingly, while annotating answers, we found that giving a phrase or sentence as an answer is more fluent and human-like than giving only single words or short phrases as in the VQA dataset of (Antol et al., 2015). Another factor to consider while annotating the UIT-EVJVQA dataset is the language-prior phenomenon (Goyal et al., 2017). This is the phenomenon where VQA methods learn patterns between questions and answers, such as questions starting with "how many" usually going with the answer "two". As analyzed in (Goyal et al., 2017), the language priors in the VQA dataset are a result of the classification approach proposed for the VQA task (Teney et al., 2018) and cause the model to learn to recognize the answer based on the question rather than to make use of the image to answer the given question. Therefore, while constructing the guideline for annotating the UIT-EVJVQA dataset, we require answers to be given using words, phrases, or sentences. In this way, we first eliminate the traditional classification approach proposed for the VQA task in English and also avoid language priors in our dataset.
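As referenced above, the following minimal sketch shows one way to obtain per-language token counts. The janome calls follow its documented wakati (word-splitting) mode; the Vietnamese segmenter of (Vu et al., 2018) is only stubbed with whitespace splitting here because its interface is not described in this paper, so the Vietnamese counts are an approximation.

```python
from janome.tokenizer import Tokenizer  # pip install janome

_ja_tokenizer = Tokenizer()

def english_tokens(text: str) -> list:
    # English: tokens are whitespace-separated, as described above.
    return text.split()

def japanese_tokens(text: str) -> list:
    # Japanese: word segmentation with janome (wakati mode yields surface forms).
    return list(_ja_tokenizer.tokenize(text, wakati=True))

def vietnamese_tokens(text: str) -> list:
    # Placeholder: the paper uses the word segmenter of Vu et al. (2018);
    # plain whitespace splitting over-counts multi-syllable Vietnamese words.
    return text.split()

print(len(english_tokens("two women are selling fruit")))  # 5
print(len(japanese_tokens("これはテストです")))              # e.g. 4
```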
## 5 Systems and Results The aim of the challenge is to evaluate the quality of the teams' approaches to multilingual visual question-answering systems. ### Baseline System Following Changpinyo et al. (Changpinyo et al., 2022c), we adopt transfer learning based on the Vision Transformer (ViT) (Dosovitskiy et al., 2020) and mBERT (Devlin et al., 2019) for our baseline system. This work uses the training set for fine-tuning the pre-trained mBERT (Devlin et al., 2019) model before generating answers. In addition, we use ViT (Dosovitskiy et al., 2020) to extract the visual features of images (Figure 7). Pre-trained ViT and mBERT models are initialized from HuggingFace checkpoints. We trained the baseline with a batch size of 64 and adapted the learning rate scheduler from Vaswani et al. (Vaswani et al., 2017) to reduce the learning rate gradually after a number of iterations. The training process was interrupted automatically if the evaluation scores did not increase for 5 epochs. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{**Question**} & \multicolumn{3}{c}{**Answer**} \\ \cline{2-7} & **Max.** & **Min.** & **Avg.** & **Max.** & **Min.** & **Avg.** \\ \hline **Vietnamese** & 22 & 3 & 8.7 & 32 & 1 & 7.2 \\ **English** & 26 & 3 & 8.6 & 23 & 1 & 5.0 \\ **Japanese** & 45 & 4 & 13.3 & 23 & 1 & 5.9 \\ \hline \hline \end{tabular} \end{table} Table 4: Statistics of question and answer lengths in the UIT-EVJVQA dataset. ### Challenge Submission The competition was hosted offline: the organizers provided the training set and public test set to the participant teams to evaluate and fine-tune their methods. When the competition entered the private test phase, the submission policy allowed each participant team to submit up to 3 _different_ methods during a submission window lasting 3 days. After the three-day private test phase, we evaluated the submitted results and obtained each team's F1 score as well as avg. BLEU score on the private test set. The final score of each participant team is the highest score among their submitted models on the private test set, and teams were ranked based on the F1 score (with BLEU as a secondary score in case of a tie). ### Experimental Results In total, 8 participant teams submitted results, and 5 of them have F1 scores higher than the baseline system. ## 6 Result Analysis We mainly analyze the results of the methods of the participating teams based on the length of questions and answers in each language. Figure 3: Word cloud of tokens in the three language partitions of the UIT-EVJVQA dataset. Figure 4: Statistics of the length of QAs in the Vietnamese partition of the UIT-EVJVQA dataset. Figure 5: Statistics of the length of QAs in the English partition of the UIT-EVJVQA dataset. To alleviate the result analysis, we define four ranges for the length of questions and answers. In particular, _short questions_ (respectively, _short answers_) are questions (answers) whose length is at most 5 tokens, _medium questions_ (respectively, _medium answers_) are those whose length is between 6 and 10 tokens, _long questions_ (respectively, _long answers_) are those whose length is between 11 and 15 tokens, and finally, _very long questions_ (respectively, _very long answers_) are those whose length is greater than 15 tokens. Tokens of questions and answers are defined in the same manner as in the statistics of Section 4.4.
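For concreteness, a minimal sketch of this length-bucketing rule is given below; it simply maps a token count to one of the four ranges defined above, with token counts assumed to come from the same segmentation as in Section 4.4.

```python
def length_bucket(n_tokens: int) -> str:
    """Map a question/answer token count to the four length ranges defined above."""
    if n_tokens <= 5:
        return "short"
    if n_tokens <= 10:
        return "medium"
    if n_tokens <= 15:
        return "long"
    return "very long"

print([length_bucket(n) for n in (4, 8, 12, 20)])
# ['short', 'medium', 'long', 'very long']
```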
We sequentially report the results of the top-5 models in terms of scores (quantitative analysis) and visualizations (qualitative analysis) to support our statements as well as to indicate what the research community should pay attention to when constructing a novel open-ended VQA dataset. From Table 6, we see that the top-5 models share the same characteristics on the English part of the UIT-EVJVQA dataset. In particular, when we observe the results based on question length, all submitted models behave similarly: they give higher performance on short and medium questions, while they show some drawbacks on long and very long questions. Turning to the last four rows, which are the results based on answer length, we observe different behavior: the top-5 models give better results on medium and long answers while yielding worse results on short and very long answers. From Figure 5, we can see that most English answers are short answers, which means the models have more short answers to learn from; hence, logically, they should perform better on short answers than on medium and long answers. Figure 6: Statistics of the length of QAs in the Japanese partition of the UIT-EVJVQA dataset. Figure 7: The baseline model architecture at the VLSP 2022 - EVJVQA Challenge. \begin{table} \begin{tabular}{l l l l c c c c} \hline \hline \multirow{2}{*}{No.} & \multirow{2}{*}{Team name} & \multirow{2}{*}{Model type} & \multirow{2}{*}{Models} & \multicolumn{2}{c}{Public Test} & \multicolumn{2}{c}{Private Test} \\ \cline{5-8} & & & & F1 & BLEU & F1 & BLEU \\ \hline **1** & **CIST AI** & Single & ViT + mT5 & 0.3491 & 0.2508 & **0.4392** & **0.4009** \\ **2** & **OhYeah** & Single & ViT + mT5 & **0.5755** & **0.4866** & 0.4349 & 0.3868 \\ **3** & **DS-STBFL** & Ensemble & CNN-Seq2Seq + ViT + OFA & 0.3390 & 0.2156 & 0.4210 & 0.3482 \\ 4 & FCoin & Single & ViT + mBERT & 0.3355 & 0.2437 & 0.4103 & 0.3549 \\ 5 & VL-UIT & Single & BEiT + CLIP + Detectron-2 + mBERT + BM25 + FastText & 0.3053 & 0.1878 & 0.3663 & 0.2743 \\ 6 & BDboi & Ensemble & ViT + BEiT + SwinTransformer + CLIP + OFA + BLIP & 0.3023 & 0.2183 & 0.3164 & 0.2649 \\ 7 & UIT-squad & Ensemble & VinVL + mBERT & 0.3224 & 0.2238 & 0.3024 & 0.1667 \\ 8 & VC-Internship & Single & ResNet-152 + OFA & 0.3017 & 0.1639 & 0.3007 & 0.1337 \\ 9 & _Baseline_ & Single & ViT + mBERT & _0.2924_ & _0.2183_ & _0.3346_ & _0.2275_ \\ \hline \hline \end{tabular} \end{table} Table 5: Final results of the submitted methods. To investigate this counter-intuitive result, we inspected all answers given by the top-5 models and found that all of them tend to give medium-length answers, even for questions having short gold answers. Notably, the way the models produce medium or lengthy answers is by repeating some tokens from the questions, which is also the primary way our annotators gave medium and lengthy answers to questions. For questions with medium or long answers, the answers from the top-5 models mostly repeat some tokens from the questions, and the gold answers also repeat some tokens from the questions. Thanks to these tokens matched with the questions, the F1 and avg. BLEU scores of the answers from the top-5 models are fairly high, while the crucial tokens, which determine whether the information in these answers is correct, usually have a length of only 2 or 3 tokens; even in case they are totally wrong, they do not affect the overall scores significantly.
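To quantify the token-repetition behavior described above, one could measure how much of a predicted answer is copied from the question. The following minimal sketch computes such a question-overlap ratio; it is an illustrative diagnostic, not part of the official evaluation.

```python
def question_overlap_ratio(question: str, answer: str) -> float:
    """Fraction of answer tokens that also occur in the question (whitespace tokens)."""
    q_tokens = set(question.lower().split())
    a_tokens = answer.lower().split()
    if not a_tokens:
        return 0.0
    return sum(tok in q_tokens for tok in a_tokens) / len(a_tokens)

q = "what is the woman in the red shirt holding"
a = "the woman in the red shirt is holding a basket of fruit"
print(round(question_overlap_ratio(q, a), 2))  # 0.67 -- most answer tokens are copied
```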
To gain a better understanding, we show some samples in the Appendix for an intuitive explanation. Turning to the results on the Vietnamese part of the UIT-EVJVQA dataset, we see quite different behavior: most models perform better on medium and long questions. When we observe the length distribution of questions in Figure 4, most Vietnamese questions have a length of around eight tokens, i.e., most have medium length, while in English most questions fall in the range of five to seven tokens. Therefore, the top-5 models perform well on short and medium questions in English, while they achieve better performance on medium and long questions in Vietnamese. For Vietnamese answers, as indicated in Table 7, the top-5 models show the same pattern as for English answers: they perform better on medium and long answers than on short and very long answers. We also provide some samples in the Appendix to better illustrate this, as for the English answers. On the Japanese part of the UIT-EVJVQA dataset, the top-5 models show the same pattern as on the English part in that they give better results on medium and long questions. But unlike their performance in Vietnamese and English, where they achieved better scores on medium and long answers, on Japanese their results increase with the length of the answers. From Figure 6, although short answers have the highest individual frequency, the cumulative number of medium and long answers is higher than that of short answers, which indicates that the top-5 models not only have more medium and lengthy answers to learn from but also tend to give medium or long answers to optimize the loss on the training set. We also provide some samples in the Appendix for a better demonstration. From the previous discussion, we can conclude that on the UIT-EVJVQA dataset most deep learning models tend to give lengthy answers to the given questions and images, that the way they give lengthy answers is by reusing some tokens of the questions as the starting point of the answers, and that the main wrong parts of these answers are the tokens carrying the vital information needed to answer the questions, such as objects, colors, or sides (see the Appendix for more examples). Hence, to better understand how the top-5 models' predictions go wrong, we conduct analyses focusing on the usage of side, object, and color words in each language.
\begin{table} \begin{tabular}{l l l l l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{**Top 1**} & \multicolumn{2}{c}{**Top 2**} & \multicolumn{2}{c}{**Top 3**} & \multicolumn{2}{c}{**Top 4**} & \multicolumn{2}{c}{**Top 5**} \\ \cline{3-12} & & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** \\ \hline \multirow{4}{*}{**Question**} & S & 0.3900 & 0.1867 & 0.5000 & 0.1250 & 0.4261 & 0.2022 & 0.4308 & 0.2184 & 0.3728 & 0.1535 \\ & M & 0.3935 & 0.2218 & 0.4922 & 0.2961 & 0.3833 & 0.1889 & 0.3730 & 0.1909 & 0.3222 & 0.1329 \\ & L & 0.3815 & 0.2044 & 0.4470 & 0.2832 & 0.3475 & 0.1628 & 0.3248 & 0.1549 & 0.2982 & 0.1093 \\ & XL & 0.3681 & 0.2019 & 0.4368 & 0.2838 & 0.3059 & 0.1518 & 0.3137 & 0.1253 & 0.3030 & 0.1288 \\ \hline \multirow{4}{*}{**Answer**} & S & 0.3000 & 0.1308 & 0.2429 & 0.0975 & 0.2999 & 0.1209 & 0.3004 & 0.1282 & 0.2498 & 0.0912 \\ & M & 0.4751 & 0.2912 & 0.3478 & 0.1994 & 0.4603 & 0.2530 & 0.4502 & 0.2553 & 0.4013 & 0.1737 \\ & L & 0.4742 & 0.2948 & 0.5063 & 0.3522 & 0.4473 & 0.2237 & 0.3837 & 0.1897 & 0.3642 & 0.1517 \\ & XL & 0.4456 & 0.1580 & 0.5709 & 0.4029 & 0.3882 & 0.1049 & 0.3798 & 0.1175 & 0.4219 & 0.1392 \\ \hline \hline \end{tabular} \end{table} Table 6: Results of the top-5 methods on the English part of the private test set. S stands for short, M stands for medium, L stands for long, and XL stands for very long. **Side words**: One of the most confusing attributes when inferring the description of objects is the use of side or direction. This is the case where an object is observed to the left or right of another object or appears on the left or right side of the scene. In English, the side words in our dataset are "left" and "right". In Japanese, they are "左" and "右". However, they are not simply "trái" and "phải" in Vietnamese, as "trái" is not often used on its own to convey a side. Therefore, we broaden the set of Vietnamese side words to "bên trái", "bên phải", "tay trái" (left-hand side) and "tay phải" (right-hand side), according to our observation of the dataset. To measure how well each submitted model uses those side words in each language, we adopt the F1-score to calculate the proportion of matches in terms of side words between the predictions and the gold answers. According to the results in Table 9, most models fail to indicate the side of objects while answering the questions. **Color words**: As an attribute that frequently appears in the QAs of our dataset, color is worth examining as part of the models' inference performance. Here we measure the F1-score of color-word matches between predicted and gold answers for every team. The color words have been defined in the guideline in Section 4.2.1. However, there are some minor exceptions due to the crowdsourcing process.
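A minimal sketch of this keyword-restricted F1 is given below, shown for side words; the same routine can be applied to the color-word analysis. The exact normalization used in the official analysis (casing, handling of multi-token Vietnamese side words, and whether pairs without any side word are skipped) is an assumption on our part.

```python
from typing import Optional

SIDE_WORDS = {
    "en": {"left", "right"},
    "ja": {"左", "右"},
    "vi": {"bên trái", "bên phải", "tay trái", "tay phải"},
}

def side_word_f1(gold: str, pred: str, lang: str) -> Optional[float]:
    """F1 over the side words shared by a gold and a predicted answer.

    Returns None when neither answer mentions a side, so such pairs can be
    excluded from the average."""
    vocab = SIDE_WORDS[lang]
    gold_hits = {w for w in vocab if w in gold.lower()}
    pred_hits = {w for w in vocab if w in pred.lower()}
    if not gold_hits and not pred_hits:
        return None
    overlap = len(gold_hits & pred_hits)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_hits)
    recall = overlap / len(gold_hits)
    return 2 * precision * recall / (precision + recall)

print(side_word_f1("the bicycle on the left side of the road", "the bicycle on the right", "en"))  # 0.0
```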
\begin{table} \begin{tabular}{c l l l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{**Top 1**} & \multicolumn{2}{c}{**Top 2**} & \multicolumn{2}{c}{**Top 3**} & \multicolumn{2}{c}{**Top 4**} & \multicolumn{2}{c}{**Top 5**} \\ \cline{3-11} & & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** \\ \hline \multirow{4}{*}{**Cour.**} & S & 0.4143 & 0.2088 & 0.4370 & 0.2412 & 0.4582 & 0.2482 & 0.4894 & 0.2768 & 0.4116 & 0.1909 \\ & M & 0.4997 & 0.3259 & 0.4976 & 0.3150 & 0.5008 & 0.3111 & 0.4879 & 0.3016 & 0.4347 & 0.2271 \\ & L & 0.4797 & 0.2954 & 0.4582 & 0.2582 & 0.4600 & 0.2687 & 0.4423 & 0.2545 & 0.4014 & 0.1835 \\ & XL & 0.4197 & 0.2446 & 0.4349 & 0.2083 & 0.4430 & 0.2359 & 0.4121 & 0.2234 & 0.3154 & 0.1447 \\ \hline \multirow{4}{*}{**Cour.**} & S & 0.3621 & 0.1659 & 0.3719 & 0.1700 & 0.3692 & 0.1666 & 0.3576 & 0.1656 & 0.3493 & 0.1481 \\ & M & 0.5564 & 0.3874 & 0.5507 & 0.3743 & 0.5467 & 0.3626 & 0.5433 & 0.3609 & 0.4705 & 0.2604 \\ \cline{1-1} & L & 0.5326 & 0.3623 & 0.4859 & 0.2849 & 0.5141 & 0.3244 & 0.4802 & 0.2851 & 0.4007 & 0.1760 \\ \cline{1-1} & XL & 0.4898 & 0.3172 & 0.4627 & 0.1959 & 0.5328 & 0.3253 & 0.4142 & 0.2243 & 0.3430 & 0.1278 \\ \hline \hline \end{tabular} \end{table} Table 7: Results of the top-5 methods in Vietnamese part on the private test set. S stands for short, M stands for medium, L stands for long, and XL stands for very long. \begin{table} \begin{tabular}{c l l l l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{**Top 1**} & \multicolumn{2}{c}{**Top 2**} & \multicolumn{2}{c}{**Top 3**} & \multicolumn{2}{c}{**Top 4**} & \multicolumn{2}{c}{**Top 5**} \\ \cline{3-11} & & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** & **F1** & **BLEU** \\ \hline \multirow{4}{*}{**Cour.**} & S & 0.7500 & 0.3125 & 0.4060 & 0.1917 & 0.6667 & 0.1850 & 0.4444 & 0.1250 & 0.3333 & 0.0460 \\ & M & 0.4494 & 0.2004 & 0.3905 & 0.2164 & 0.4634 & 0.2984 & 0.4841 & 0.3118 & 0.4485 & 0.2447 \\ & L & 0.4489 & 0.2841 & 0.3679 & 0.1974 & 0.4395 & 0.2720 & 0.4150 & 0.2741 & 0.4058 & 0.2333 \\ & XL & 0.4399 & 0.2915 & 0.3456 & 0.1863 & 0.3896 & 0.2333 & 0.3834 & 0.2453 & 0.3452 & 0.1815 \\ \hline \multirow{4}{*}{**Cour.**} & S & 0.1983 & 0.075 & 0.3101 & 0.1336 & 0.2057 & 0.0713 & 0.1809 & 0.0714 & 0.1983 & 0.0750 \\ & M & 0.3503 & 0.1942 & 0.4654 & 0.2822 & 0.3236 & 0.1750 & 0.3443 & 0.2095 & 0.3503 & 0.1942 \\ \cline{1-1} & L & 0.5242 & 0.3667 & 0.4436 & 0.2731 & 0.4734 & 0.3156 & 0.4609 & 0.3230 & 0.5242 & 0.3667 \\ \cline{1-1} & XL & 0.5911 & 0.4245 & 0.3684 & 0.1586 & 0.5131 & 0.3321 & 0.4983 & 0.3358 & 0.5911 & 0.4245 \\ \hline \hline \end{tabular} \end{table} Table 8: Results of the top-5 methods in Japanese partition on the private test set in. S stands for short, M stands for medium, L stands for long, and XL stands for very long. \begin{table} \begin{tabular}{c l l l} \hline \hline Team & Vietnamese & English & Japanese \\ \hline \hline CIST AI & **0.4948** & **0.3889** & 0.3922 \\ OhYeah & 0.3814 & 0.3235 & **0.4085** \\ DS-STBFL & 0.4811 & 0.3137 & 0.3366 \\ FCoin & 0.4021 & 0.3039 & 0.3595 \\ VL-UIT & 0.2268 & 0.2418 & 0.1471 \\ \hline \hline \end{tabular} \end{table} Table 9: F1-score for side predictions of every team for each language We also visualize the color word distribution in the prediction of every team for each language. 
Figure 8: Color word distribution in the ground truth and in the predictions of each team, in three languages. It is worth noting that in the Vietnamese QAs there is a substantial quantity of the color word "xanh", which can mean either green or blue depending on the context of the visual scene. As we can see in Figures 8a, 8b and 8c, the ground-truth color words are not evenly distributed: some colors such as white, black, and red are used more frequently in the dataset, while instances of brown, gray, and purple are scant. This skewness is then amplified in the overall inference of the models. In Japanese, most submitted models clearly show bias, as they intensively describe objects as white, while rarer colors are poorly used. This behavior is similar to the language-prior phenomenon pointed out in previous work (Goyal et al., 2017): when given questions about colors, the top-5 models tend to use the most frequent colors regardless of the actual colors in the images, and this phenomenon is also noticeable in other languages for some teams. For instance, the submitted model of team VL-UIT even ignores many of the rarer colors. As an attribute that is regularly used to describe an object, color also plays an important role in distinguishing between objects, which is an essential factor in visual question answering. Therefore, more effort must be devoted to reducing the skewness of the color distribution in a way that helps models infer better and describe objects in images more precisely. We suggest that future work define a fixed range of colors and pay attention when asking about and using colors in answers, so that such language-prior phenomena can be avoided in VQA datasets. **Objects**: To get the distribution of objects in the UIT-EVJVQA dataset, we used the POS tagger from previous work (Vu et al, 2018) and obtained objects by collecting tokens tagged as nouns. We proceeded in the same manner as when investigating the behavior of the top-5 models on color words, but we could not find such a language-prior phenomenon for objects. To find out why most of the top-5 models failed to recognize objects in images, we examined the results of pre-trained image models. However, most of the top-5 models used pre-trained image models that output grid-based features (Jiang et al., 2020), such as ViT (Dosovitskiy et al., 2020) or BEiT (Bao et al., 2021), which makes it hard to visualize and interpret the results of those image models. Nevertheless, we can interpret the results of other, similar pre-trained image models, such as Faster-RCNN (Ren et al., 2015) with ResNet101 (He et al., 2016) as the backbone, or Swin Transformer for object detection (Liu et al., 2021), as these models were likewise pre-trained on the ImageNet dataset (Russakovsky et al., 2015) before being fine-tuned on various tasks. Hence, we used the outputs of these two models on the training set of the UIT-EVJVQA dataset as an approximate way to investigate why the top-5 models fail to recognize objects in images. As depicted in Figure 9, these two image models accurately detect the objects they were trained on, but many objects common in Vietnam, such as the nón lá (conical hat) or fans (the first column of the three images in Figure 9), or traditional products available in most markets in Vietnam (the last column of the three images in Figure 9), are not detected, and some of the detected objects in Figure 9 are not correct. Figure 9: Some examples of the performance of pre-trained image models. From top to bottom: original images, object detection results of Faster-RCNN with ResNet101 (He et al., 2016) as the backbone, and object detection results of Cascade RCNN (Cai and Vasconcelos, 2018) with Swin Transformer (Liu et al., 2021) as the backbone (zoomed x4 for better illustration and to clearly observe the detected labels and the confidence scores of each model). This result implies incorrect image understanding by pre-trained image models trained on images captured outside of Vietnam, which indirectly affects the performance of VQA models (also in the case of grid-based features, since region-based features (Jiang et al., 2020) are obtained from grid-based features, e.g., from the backbone of Faster-RCNN; hence failures observed in region-based features indicate failures in grid-based features as well). In conclusion, available pre-trained image models are not well suited to scenes captured in Vietnam, and the Computer Vision (CV) community in Vietnam should research and develop more appropriate pre-trained image models, especially for images taken in Vietnam, so that we can effectively tackle recent trending tasks, of which multimodal tasks such as VQA are one example. ## 7 Conclusion and Future Work The VLSP 2022 - EVJVQA Challenge on multilingual image-based question answering was organized at VLSP 2022. Even though 57 teams signed up to get the training dataset, only eight teams submitted their results. Because several teams enrolled in multiple challenges at VLSP 2022, the other teams may not have had enough time to explore VQA models. The highest performances are 0.4392 in F1-score and 0.4009 in BLEU on the private test set. The multilingual VQA systems proposed by the top 2 teams use ViT as the pre-trained vision model and mT5 as the pre-trained language model. EVJVQA is a challenging dataset, including a training set, a development set (public test set), and a test set (private test set), that motivates NLP and CV researchers to further explore multilingual models or systems for visual question answering. To increase performance in multilingual visual question answering, we intend to increase the amount and quality of annotated questions in the future. In addition, we also plan to create human adversarial questions based on the findings proposed by the research work [2]. ## Acknowledgements We would like to thank the annotators for their efforts, which have contributed to the construction of a high-quality resource for the natural language processing research community. The VLSP 2022 was supported by the organizations Aimsoft, Bee, INT2, and DAGORAS, and the educational organizations VNU-HCM University of Information Technology, VNU University of Science, VNU University of Engineering and Technology, Hanoi University of Science and Technology, Vietnam Lexicography Centre, University of Science and Technology of Hanoi, ThuyLoi University, and VAST Institute of Information Technology. This work is partially supported by the Vingroup Innovation Foundation (VINIF) under project code VINIF.2020.DA14.
2306.12974
Adaptive Bernstein Change Detector for High-Dimensional Data Streams
Change detection is of fundamental importance when analyzing data streams. Detecting changes both quickly and accurately enables monitoring and prediction systems to react, e.g., by issuing an alarm or by updating a learning algorithm. However, detecting changes is challenging when observations are high-dimensional. In high-dimensional data, change detectors should not only be able to identify when changes happen, but also in which subspace they occur. Ideally, one should also quantify how severe they are. Our approach, ABCD, has these properties. ABCD learns an encoder-decoder model and monitors its accuracy over a window of adaptive size. ABCD derives a change score based on Bernstein's inequality to detect deviations in terms of accuracy, which indicate changes. Our experiments demonstrate that ABCD outperforms its best competitor by up to 20% in F1-score on average. It can also accurately estimate changes' subspace, together with a severity measure that correlates with the ground truth.
Marco Heyden, Edouard Fouché, Vadim Arzamasov, Tanja Fenn, Florian Kalinke, Klemens Böhm
2023-06-22T15:35:38Z
http://arxiv.org/abs/2306.12974v2
# Adaptive Bernstein Change Detector for High-Dimensional Data Streams ###### Abstract Change detection is of fundamental importance when analyzing data streams. Detecting changes both quickly and accurately enables monitoring and prediction systems to react, e.g., by issuing an alarm or by updating a learning algorithm. However, detecting changes is challenging when observations are high-dimensional. In high-dimensional data, change detectors should not only be able to identify when changes happen, but also in which subspace they occur. Ideally, one should also quantify how severe they are. Our approach, ABCD, has these properties. ABCD learns an encoder-decoder model and monitors its accuracy over a window of adaptive size. ABCD derives a change score based on Bernstein's inequality to detect deviations in terms of accuracy, which indicate changes. Our experiments demonstrate that ABCD outperforms its best competitor by at least 8% and up to 23% in F1-score on average. It can also accurately estimate changes' subspace, together with a severity measure that correlates with the ground truth. change detection, concept drift, data streams, high-dimensionality ## 1 Introduction Data streams are open-ended, ever-evolving sequences of observations from some process. They pose unique challenges for analysis and decision-making. One crucial task is to detect changes, i.e., shifts in the observed data, that may indicate a change in the underlying process. Change detection has been an active research area. However, the high-dimensional setting, in which observations contain a large number of simultaneously measured quantities, did not receive enough attention. Yet, it may yield useful insights in environmental monitoring (de Jong and Bosman, 2019), human activity recognition (Vrigkas et al, 2015), network traffic monitoring (Naseer et al, 2020), automotive (Liu et al, 2019), predictive maintenance (Zhao et al, 2018), and biochemical engineering (Mowbray et al, 2021): **Example** (Biofuel production).: The production of fuel from biomass is a complex process comprising many interdependent process steps. Those include pyrolysis, synthesis, distillation, and separation. Many steps rely on (by-)products of other steps as reactants, leading to a highly interconnected system with many process parameters. A monitoring system tracks the process parameters to detect failures in the plant: (i) The system must detect changes in a large (i.e., high-dimensional) vector of process parameters, which may indicate failures. (ii) The system must find out which process parameters are affected by the change to allow for a targeted reaction. Since the system is very complex and has many interconnected components, change is often evident only when considering correlations between process parameters. An example would be the correlation between temperature and concentration fluctuations. So it is insufficient to monitor each process parameter in isolation. (iii) There can exist slight changes which only require minor adjustments and more severe ones that require immediate intervention to avoid a shutdown of the plant. The monitoring system should provide an estimate of the severity of change. The example illustrates three requirements for modern change detectors: * **R1: Change point.** The primary task of change detectors is to identify that the data stream has changed and when it occurred. * **R2: Change subspace.** A change may only concern a subset of dimensions -- the _change subspace_. 
Change detectors for high-dimensional data streams should be able to identify such subspaces. * **R3: Change severity.** Quantifying relative change severity to distinguish between changes of different importance is essential to react appropriately. Prior works already acknowledge the relevance of the above requirements (Lu et al, 2019; Webb et al, 2018). However, fulfilling R1-R3 in combination remains challenging since they depend on each other: on the one hand, detecting changes in high-dimensional data is difficult because changes typically only affect a few dimensions. Unaffected dimensions "dilute" a change (i.e., a change occurring in a subspace appears to be less severe in the full space). This might make changes harder to detect in all dimensions. On the other hand, detecting the change subspace should occur _after_ detecting a change, since monitoring all possible subspaces is intractable. Last, one should restrict the computation of change severity to the change subspace to eliminate dilution. Existing methods for change detection, summarized in Table 1, are either univariate (UV), multivariate (MV), or specifically designed for high-dimensional data (HD); the latter claim efficiency w.r.t. high dimensionality or resilience against the "curse of dimensionality". However, they do not fulfill R1-R3 in combination sufficiently well, as Section 2 describes. Thus, we propose the Adaptive Bernstein Change Detector (ABCD), which addresses R1-R3 in combination. We articulate our contributions as follows: **(i) Problem Definition:** We formalize the problem of detecting changes in high-dimensional data streams such that R1-R3 can be tackled in combination. **(ii) Adaptive Bernstein Change Detector:** We present ABCD, a change detector for high-dimensional data that satisfies R1-R3. It monitors the loss of an encoder-decoder model using an adaptive window size and statistical testing. Adaptive windows enable ABCD to detect severe changes quickly and, over a longer period, identify hard-to-detect changes that would typically require a large window size. **(iii) Bernstein change score:** Our approach applies a statistical test based on Bernstein's inequality. This limits the probability of false alarms. **(iv) Online computation:** We propose an efficient method for computing the change score in adaptive windows and discuss design choices leading to constant time and memory. **(v) Benchmarking:** We conduct experiments on 10 real-world and synthetic data streams with many dimensions and compare ABCD with recent approaches. The results indicate that ABCD outperforms its competitors consistently w.r.t. R1-R3, is robust to high-dimensional data, and is useful in domains including human activity recognition, gas detection, and image processing. We also study ABCD's parameter sensitivity. Our code1 follows the popular Scikit-Multiflow API (Montiel et al, 2018), so it is easy to use in future research. Footnote 1: [https://github.com/heymarco/AdaptiveBernsteinChangeDetector](https://github.com/heymarco/AdaptiveBernsteinChangeDetector) ## 2 Related work ### Change detector types Most existing change detectors are _supervised_, i.e., they focus on detecting changes in the relationship between input data and a target variable (Iwashita and Papa, 2019). However, class labels are rarely available in reality, which limits their applicability. In contrast, _unsupervised_ change detectors aim to detect changes only in the input data.
Our approach belongs to this category, so we restrict our review to unsupervised approaches. Most existing approaches detect changes whenever a measure of discrepancy between newer observations (the current window) and older observations (the reference window) exceeds a threshold. Some approaches, e.g., D3 (Gozuacik et al, 2019) or PCA-CD (Qahtan et al, 2015), implement the reference and current window as two contiguous sliding windows. Other approaches, such as IBDD (de Souza et al, 2020), IKS (dos Reis et al, 2016), or WATCH (Faber et al, 2021), use a fixed reference window. A major problem is to choose the appropriate size for the window; thus (Bifet and Gavalda, 2007) propose windows of adaptive size that grow while the stream remains unchanged and shrink otherwise. Several works leverage this principle, e.g. (Sun et al, 2016; Khamassi et al, 2015; Fouche et al, 2019; Suryawanshi et al, 2022). We also use adaptive windows to lower the number of parameters of ABCD. ### Univariate change detection There exist many approaches for change detection in univariate (UV) data streams. Two of them, Adaptive Windowing (ADWIN) (Bifet and Gavalda, 2007) and SeqDrift2 (Pears et al, 2014), share some similarity with our approach. Like ADWIN, ABCD relies on an adaptive window. Like SeqDrift2, it uses Bernstein's inequality (Bernstein, 1924). But unlike ADWIN and SeqDrift2, ABCD can handle high-dimensional data while fulfilling R1-R3. ### Multivariate change detection To detect changes in multivariate (MV) data, some approaches apply univariate algorithms in each dimension of the stream. Faithfull et al (2019) propose to use one ADWIN detector per dimension (with \(k\) dimensions). They declare a change whenever a certain fraction of the detectors agree. We call this approach AdwinK later on. Similarly, IKS (dos Reis et al, 2016) uses an incremental variant of the Kolmogorov-Smirnov test deployed in each dimension. Unlike AdwinK, IKS issues an alarm if at least one dimension changes. There also exist approaches specifically designed for multivariate (Jaworski et al, 2020; Ceci et al, 2020; Qahtan et al, 2015; Gozuacik et al, 2019; Dasu et al, 2006), or even high-dimensional (HD) data (Faber et al, 2021; de Souza et al, 2020). \begin{table} \begin{tabular}{l l l l l l} \hline Approach & Reference & Type & R1 & R2 & R3 \\ \hline ADWIN & Bifet and Gavalda (2007) & UV & ✓ & – & – \\ SeqDrift2 & Pears et al (2014) & UV & ✓ & – & – \\ kdq-Tree & Dasu et al (2006) & MV & ✓ & – & ✓ \\ PCA-CD & Qahtan et al (2015) & MV & ✓ & – & ✓ \\ IKS & dos Reis et al (2016) & MV & ✓ & ✓ & – \\ LDD-DSDA & Liu et al (2017) & MV & ✓ & – & – \\ AdwinK & Faithfull et al (2019) & MV & ✓ & ✓ & – \\ D3 & Gözüaçık et al (2019) & MV & ✓ & – & ✓ \\ ECHAD & Ceci et al (2020) & MV & ✓ & – & ✓ \\ IBDD & de Souza et al (2020) & HD & ✓ & – & ✓ \\ WATCH & Faber et al (2021) & HD & ✓ & – & ✓ \\ **ABCD** & this work & HD & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: State of the art. Similar to ABCD, Jaworski et al (2020) and Ceci et al (2020) use dimensionality-reduction methods to capture the relationships between dimensions. However, our approach is computationally more efficient, limits the probability of false alarms, identifies the change subspace, and estimates change severity. D3 (Gozuacik et al, 2019) uses the AUC-ROC score of a discriminative classifier that tries to distinguish the data in two sliding windows. It reports a change if the AUC-ROC score exceeds a pre-defined threshold.
PCA-CD (Qahtan et al, 2015) first maps observations in two windows to fewer dimensions using PCA. Then the approach estimates the KL-divergence between both windows for each principal component. PCA-CD detects a change if the maximum observed KL-divergence exceeds a threshold. However, (Goldenberg and Webb, 2019) point out that this technique is limited to linear transformations and ignores combined change in multiple dimensions. LDD-DSDA (Liu et al, 2017) measures the degree of local drift that describes regional density changes in the input data. The approach proposed by (Dasu et al, 2006) structures observations from two windows (sliding or fixed) in a kdq-tree. For each node, they measure the KL-divergence between observations from both windows. However, (Qahtan et al, 2015) show experimentally that this approach is not suitable for high-dimensional data. IBDD (de Souza et al, 2020) and WATCH (Faber et al, 2021) specifically address challenges arising from high-dimensional data. The former monitors the mean squared deviation between two equally sized windows. The latter monitors the Wasserstein distance between a reference and a sliding window. However, both cannot detect change subspaces or measure severity. ### Offline change point detection Offline change point detection, also known as signal segmentation, divides time series of a given length into \(K\) homogeneous segments (Truong et al, 2020). Many of the respective algorithms are not suitable for data streams: Some require specifying \(K\) a priori (Bai and Perron, 2003; Harchaoui and Cappe, 2007; Lung-Yut-Fong et al, 2015); others (Killick et al, 2012; Lajugie et al, 2014; Matteson and James, 2014; Chakar et al, 2017; Garreau and Arlot, 2018) scale superlinearly with time. WATCH (Faber et al, 2021), discussed above, is the state of the art extension of offline change point detection to data streams. ### Change subspace The notion of a _change subspace_ is different from the existing notion of _change region_(Lu et al, 2019). The former describes a subset of dimensions that changed, the latter identifies density changes in some local region, e.g., a hyperrectangle or cluster (Liu et al, 2017). Our definition of change subspaces is related to _marginal change magnitude_(Webb et al, 2018), but is more general since it can also accomodate changes in a subspace's joint distribution. Because high-dimensional spaces are typically sparse (due to the curse of dimensionality), identifying density changes in them is not effective. On the other hand, knowing that a change affected a specific set of dimensions can help identify the cause of the change, as we have motivated in our introductory example. Thus, we focus on detecting change subspaces in this work. In the domain of statistical process control, some approaches extend well-known methods, such as Cusum (Page, 1954) or Shewhart charts (Shewhart, 1930), to multiple dimensions. They address the problem of identifying change subspaces to some extent, however, they often make unrealistic assumptions: they focus on Gaussian or sub-Gaussian data (Chaudhuri et al, 2021; Xie et al, 2020), require that different dimensions are initially independent (Chaudhuri et al, 2021), require subspace changes to be of low rank (Xie et al, 2020), or assume that the size of the change subspace is known a priori (Jiao et al, 2018). From the approaches reviewed in Section 2.3 only AdwinK and IKS identify the corresponding change subspace. 
However, both approaches do not find changes that hide in subspaces, e.g., correlation changes, because they monitor each dimension in isolation. In contrast, our approach aims to learn the relationships between different dimensions so that it can detect such changes. Next, AdwinK cannot identify subspaces with fewer than \(k\) dimensions. ### Change severity According to (Lu et al, 2019), change severity is a positive measure of the discrepancy between the data observed before and after the change. One can either measure the divergence between distributions directly, as done by kdq-Tree (Dasu et al, 2006), LDD-DSDA (Liu et al, 2017), and WATCH (Faber et al, 2021), or indirectly with a score that correlates with change severity, as done by D3 (Gozuacik et al, 2019). Following this reasoning, an approach that satisfies R3 should compute a score that depends on the change severity (Gozuacik et al, 2019; Dasu et al, 2006; de Souza et al, 2020; Qahtan et al, 2015; Faber et al, 2021), i.e., the higher the score, the higher the severity. Finally, hypothesis-testing-based approaches, such as ADWIN (Bifet and Gavalda, 2007), SeqDrift2 (Pears et al, 2014), AdwinK (Faithfull et al, 2019), or IKS (dos Reis et al, 2016), do not quantify change severity: a slight change observed over a longer time can lead to the same \(p\)-value as a severe change observed over a shorter time, hence \(p\) is not informative about change severity. ### Pattern based change detection A related line of research, pattern-based change detection, deals with identifying changes in temporal graphs (Loglisci et al, 2018; Impedovo et al, 2019, 2020, 2020). In particular, Loglisci et al (2018) detect changes in the graph, identify the affected subgraphs, and quantify the amount of change for these subgraphs. This is similar to our methodology. However, these methods work well with graph data, but we are dealing with vector data. To apply these methods in our context, one would need to create a graph, e.g., by representing each dimension as a node and indicating pairwise correlations with edges. However, constructing such a graph becomes impractical for high-dimensional observations because of the exponentially growing number of subspaces. ### Competitors In our experiments, we compare to AdwinK, IKS, D3, IBDD, and WATCH. IBDD, WATCH, and D3 are recent change detectors for multivariate and high-dimensional data that fulfill R3. AdwinK extends the ADWIN algorithm to the multivariate case and fulfills R2. Finally, IKS is the only approach employing a non-parametric two-sample test for change detection while also satisfying R2. ## 3 Preliminaries We are interested in finding changes in the last \(t\) observations \(S=(x_{1},x_{2},\ldots,x_{t})\) from a stream of data. Each \(x_{i}\) is a \(d\)-dimensional vector independently drawn from a (unknown) distribution \(F_{i}\). We assume without loss of generality that each vector coordinate is bounded in \([0,1]\), i.e., \(x_{i}\in[0,1]^{d}\). **Definition 1** (Change).: _A change occurs at time point \(t^{*}\) if the data-generating distribution changes after \(t^{*}\colon F_{t^{*}}\neq F_{t^{*}+1}\)._ In high-dimensional data, changes typically affect only a subset of dimensions, which we call the _change subspace_. Let \(D=\{1,2,\ldots,d\}\) be the set of dimensions and \(F_{i}^{D^{\prime}}\) be the joint distribution of \(F_{i}\) observed in the subspace \(D^{\prime}\subseteq D\) at time step \(i\). 
We define the change subspace as follows: **Definition 2** (Change subspace).: _The change subspace \(D^{*}\) at time \(t^{*}\) is the union of all \(D^{\prime}\subseteq D\) in which the joint distribution \(F^{D^{\prime}}\) changed and which does not contain a subspace \(D^{\prime\prime}\) for which \(F_{t^{*}}^{D^{\prime\prime}}\neq F_{t^{*}+1}^{D^{\prime\prime}}\)._ If the dimensions in \(D^{*}\) are uncorrelated, then changes will be visible on the marginal distributions, i.e., all \(D^{\prime}\) are of size 1. However, changes may only be detectable w.r.t the joint distribution of \(D^{*}\) or the union of its subspaces of size greater than 1, which our definition accommodates. Note that the definition can also handle multiple co-occurring changes and considers them as one single change. Last, change severity measures the difference between \(F_{t^{*}}^{D^{*}}\) and \(F_{t^{*}+1}^{D^{*}}\): **Definition 3** (Change severity).: _The severity of a change is a positive function \(\Delta\) of the mismatch between \(F_{t^{*}}^{D^{*}}\) and \(F_{t^{*}+1}^{D^{*}}\)._ Since we do not know the true distributions \(F_{t^{*}}\) and \(F_{t^{*}+1}\), the best we can do is detecting changes and their characteristics based on the observed data. ## 4 Approach ### Principle of ABCD Direct comparison of high-dimensional distributions is impractical as it requires many samples (Gretton et al, 2012). Yet the number of variables required to describe such data with high accuracy is often much smaller than \(d\)(Lee and Verleysen, 2007). Dimensionality reduction techniques let us _encode_ observations in fewer dimensions. The more information encodings retain, the better one can reconstruct (_decode_) the original data. However, if the distribution changes, the reconstruction will degrade and produce higher errors. We leverage this principle in ABCD by monitoring the reconstruction loss of an encoder-decoder model \(\psi\circ\phi\) for some encoder function \(\phi\) and decoder function \(\psi\). Figure 1 illustrates this. Specifically, we first learn \(\phi:[0,1]^{d}\rightarrow[0,1]^{d^{\prime}}\) with \(d^{\prime}=\lfloor\eta d\rfloor<d\), \(\eta\in(\nicefrac{{1}}{{d}},1)\), mapping the data to fewer dimensions, and \(\psi:[0,1]^{d^{\prime}}\rightarrow[0,1]^{d}\). Then, we monitor the loss between each \(x_{t}\) and its reconstruction \(\hat{x}_{t}=\psi\circ\phi(x_{t})=\psi(\phi(x_{t}))\): \[L_{t}=MSE(x_{t},\hat{x}_{t})=\frac{1}{d}\sum_{j=1}^{d}\left(x_{t,j}-\hat{x}_{t,j}\right)^{2}=\frac{1}{d}\sum_{j=1}^{d}L_{t,j} \tag{1}\] We hypothesize that distribution changes lead to outdated encoder-decoder models -- see for example (Jaworski et al, 2020) for empirical evidence. Hence, we assume that changes in the reconstruction affect the _mean_\(\mu_{t^{*}+1}\) of the loss, because the model can no longer accurately reconstruct the input: \[F_{t^{*}}\neq F_{t^{*}+1}\implies\mu_{t^{*}}\neq\mu_{t^{*}+1} \tag{2}\] We can now replace the definition of change in high-dimensional data with an easier-to-evaluate, univariate proxy: \[\exists t^{*}\in[1,\ldots,t]:\mu_{t^{*}}\neq\mu_{t^{*}+1} \tag{3}\] It allows detecting arbitrary changes in the original (high-dimensional) distribution as long as they affect the average loss of the encoder-decoder. Since the true \(\mu_{t^{*}}\) and \(\mu_{t^{*}+1}\) are unknown, we estimate them from the stream: \[\hat{\mu}_{1,t^{*}}=\frac{1}{t^{*}}\sum_{i=1}^{t^{*}}L_{i},\quad\hat{\mu}_{t^{ *}+1,t}=\frac{1}{t-t^{*}}\sum_{i=t^{*}+1}^{t}L_{i}. \tag{4}\] Figure 1: Overview of ABCD. 
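To make this principle concrete, the following is a minimal sketch of the reconstruction-loss signal \(L_t\) using PCA as the encoder-decoder pair, one of the model choices evaluated later; the adaptive window and the statistical test on the loss are described in the next sections. All names are illustrative and not part of a released implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruction_losses(reference, stream, eta=0.5):
    """Fit the encoder phi and decoder psi on a reference sample and
    return the per-instance reconstruction losses L_t (Equation 1)."""
    d = reference.shape[1]
    d_prime = max(1, int(eta * d))                     # bottleneck size d' = floor(eta * d)
    model = PCA(n_components=d_prime).fit(reference)   # phi and psi in one object
    reconstructed = model.inverse_transform(model.transform(stream))
    return np.mean((stream - reconstructed) ** 2, axis=1)

# Toy illustration: the mean loss jumps once the correlation structure changes.
rng = np.random.default_rng(0)
latent = rng.uniform(size=(1500, 5))
mix = rng.uniform(size=(5, 20))
data = latent @ mix / 5                                # 20-dim data driven by 5 latent factors
after = rng.permuted(data[1000:], axis=0)              # shuffle each column: correlations destroyed
losses = reconstruction_losses(data[:500], np.vstack([data[500:1000], after]))
print(losses[:500].mean(), losses[500:].mean())        # the second mean is clearly larger
```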
### Detecting the change point ABCD detects a change at \(t^{*}\) if \(\hat{\mu}_{1,t^{*}}\) differs _significantly_ from \(\hat{\mu}_{t^{*}+1,t}\). To quantify this, we derive a test based on Bernstein's inequality (Bernstein, 1924). It is often tighter than more general alternatives like Hoeffding's inequality (Boucheron et al, 2013). Let \(\hat{\mu}_{1},\hat{\mu}_{2}\) be the averages of two independent samples from two univariate random variables. One wants to evaluate if both random variables have the same expected values: The null hypothesis \(H_{0}\) is \(\mu_{1}=\mu_{2}\). Based on the two samples, one rejects \(H_{0}\) if \(\Pr\left(\left|\hat{\mu}_{1}-\hat{\mu}_{2}\right|\ \geq\epsilon\right)\leq\delta\) where \(\delta\) is a preset significance level. The following theorem allows evaluating Equation (3) based on Bernstein's inequality. **Theorem 1** (Bound on \(\Pr\left(\left|\hat{\mu}_{1}-\hat{\mu}_{2}\right|\geq\epsilon\right)\)).: _Given two independent samples of size \(n_{1}\) and \(n_{2}\) from two random variables with absolute values less than \(M\) almost surely, let \((\hat{\mu}_{1},\sigma_{1}^{2}),(\hat{\mu}_{2},\sigma_{2}^{2})\) denote their sample means and variances and \(\mu_{1},\mu_{2}\) the unknown expected values. Assuming \(\mu_{1}=\mu_{2}\), we have:_ \[\Pr\left(\left|\hat{\mu}_{1}-\hat{\mu}_{2}\right|\geq\epsilon\right)\leq\\ 2\exp\left\{-\frac{n_{1}(\kappa\epsilon)^{2}}{2\left(\sigma_{1 }^{2}+\frac{1}{3}\kappa M\epsilon\right)}\right\}+2\exp\left\{-\frac{n_{2}((1 -\kappa)\epsilon)^{2}}{2\left(\sigma_{2}^{2}+\frac{1}{3}(1-\kappa)M\epsilon \right)}\right\}\in(0,4]\\ \forall\kappa\in[0,1]. \tag{5}\] Proof.: We follow the same steps as in (Bifet and Gavalda, 2007; Pears et al, 2014). Recall Bernstein's inequality: \[\Pr\left(\left|\hat{\mu}-\mu\right|\geq\epsilon\right)\leq 2\exp\left\{-\frac{n \epsilon^{2}}{2\left(\sigma^{2}+\frac{1}{3}M\epsilon\right)}\right\} \tag{6}\] We apply the union bound to \(\Pr\left(\left|\hat{\mu}_{1}-\hat{\mu}_{2}\right|\geq\epsilon\right)\). For all \(\kappa\in[0,1]\), we have: \[\Pr\left(\left|\hat{\mu}_{1}-\hat{\mu}_{2}\right|\geq\epsilon\right)\quad \leq\quad\Pr\left(\left|\hat{\mu}_{1}-\mu_{1}\right|\geq\kappa\epsilon\right) \ +\ \Pr\left(\left|\hat{\mu}_{2}-\mu_{2}\right|\geq(1-\kappa)\epsilon\right) \tag{7}\] Substituting above with Bernstein's inequality completes the proof. With regard to change detection, one can use Equation (5) to evaluate for a time point \(k\) if a change occurred. The question is, however, how to choose \(\epsilon\) to limit the probability of false alarm at any time \(t\) to a maximum \(\delta\). Our approach is to set \(\epsilon\) to the observed \(\left|\hat{\mu}_{1,k}-\hat{\mu}_{k+1,t}\right|\) and to set \(n_{1}=k\), \(n_{2}=t-k\). The result bounds the probability of observing \(\left|\hat{\mu}_{1,k}-\hat{\mu}_{k+1,t}\right|\) between two independent samples of sizes \(k\) and \(t-k\) under \(H_{0}\). If this probability is very low, the distributions must have changed at \(k\). Then, we search for changes at multiple time points \(k\) in the current window. Hence, we obtain multiple such probability estimates; our change score is their minimum: \[p=\min_{k}\Pr\left(\left|\hat{\mu}_{1}-\hat{\mu}_{2}\right|\geq\left|\hat{\mu} _{1,k}-\hat{\mu}_{k+1,t}\right|\right) \tag{8}\] The corresponding change point \(t^{*}\) splits \((L_{1},L_{2},\ldots,L_{t})\) into the two subwindows with the statistically most different mean. 
#### 4.2.1 Choice of parameter \(\kappa\) The bound in Equation (5) holds for any \(\kappa\in[0,1]\). A good choice, however, provides a tighter estimate, resulting in faster change detection for a given rate of allowed false alarms \(\delta\). (Bifet and Gavalda, 2007) suggest to choose \(\kappa\) s.th. \(Pr(|\hat{\mu}_{1}-\mu_{1}|\geq\kappa\epsilon)\approx Pr(|\hat{\mu}_{2}-\mu_{2}| \geq(1-\kappa)\epsilon)\), that approximately minimizes the upper bound. Substituting both sides with Bernstein's inequality, we get \[\frac{n_{1}(\kappa\epsilon)^{2}}{\sigma_{1}^{2}+\frac{\kappa M\epsilon}{3}}= \frac{n_{2}(1-\kappa)^{2}\epsilon^{2}}{\sigma_{2}^{2}+\frac{(1-\kappa)M \epsilon}{3}}. \tag{9}\] Setting \(n_{1}=rn_{2}\) and simplifying, we have \[\frac{3\sigma_{1}^{2}+\kappa M\epsilon}{r\kappa^{2}}=\frac{3\sigma_{2}^{2}+(1 -\kappa)M\epsilon}{(1-\kappa)^{2}}. \tag{10}\] To solve for \(\kappa\), note that \(|\hat{\mu}_{1,k}-\hat{\mu}_{k+1,t}|\approx 0\) for large enough \(k\) and \(t-k\) while there is no change. This leads to a change score \(p\gg\delta\) for any choice of \(\kappa\). Hence, choosing \(\kappa\) optimal is irrelevant while there is no change. In contrast, if a change occurs, the change in the model's loss dominates the variance in both subwindows, leading to \(M\epsilon\gg\sigma_{1}^{2},\sigma_{2}^{2}\). In that case, the influence of \(\sigma_{1}^{2},\sigma_{2}^{2}\) is negligible for sufficiently large \(\kappa\) and \(1-\kappa\): \[\frac{\kappa M\epsilon}{r\kappa^{2}}=\frac{(1-\kappa)M\epsilon}{(1-\kappa)^{ 2}}. \tag{11}\] Solving Equation (11) for \(\kappa\) results in our recommendation for \(\kappa\) (Equation (12) which we restrict to \([\kappa_{min},1-\kappa_{min}]\) with \(\kappa_{min}=0.05\). \[\kappa=\frac{1}{1+r}=\frac{n_{2}}{n_{1}+n_{2}} \tag{12}\] #### 4.2.2 Minimum sample sizes and outlier sensitivity This section investigates the conditions under which ABCD detects changes. We derive a minimum size of the first window above which ABCD detects a change. It bases on the fact that the number of observations before an evaluated time point \(k\) remains fixed while the number of observations after \(k\) grows with \(t\). Those counts are \(n_{1}=k\) and \(n_{2}=t-k\) in Equation (5). Also, since we consider bounded random variables, their variance is bounded as well. Hence, the second term in Equation (5) approaches \(0\) for any \(\epsilon>0\). With this, solving Equation (5) for \(n_{1}\) yields: \[n_{1}\geq\left\lceil 2\log\left(\frac{2}{\delta}\right)\left(\frac{\sigma_{1}^{ 2}}{(\kappa\epsilon)^{2}}+\frac{M}{3\kappa\epsilon}\right)\right\rceil. \tag{13}\] By setting \(\epsilon=|\hat{\mu}_{1}-\hat{\mu}_{2}|\) we see that the required size of the first window decreases the larger the change in the average reconstruction error. For example, with \(M=1\), \(\epsilon=\sigma_{1}=0.1\), and \(\delta=0.05\) our approach requires \(n_{1}\geq 32\). Since ABCD detects changes in the average reconstruction loss of a bounded vector, it is stable with respect to outliers as long as they are reasonably rare. To see this, assume w.l.o.g. that window 1 contains \(n_{out}\) outliers and that \(\epsilon>0\). One can show that the average of the outliers, \(\hat{\mu}_{out}\), must exceed the average of the remaining inliers, \(\hat{\mu}_{in}\), by \(n_{1}\epsilon/n_{out}\). In the example above, a single outlier would thus have to exceed \(\hat{\mu}_{in}\) by \(n_{1}\epsilon=3.2\). This, however, is impossible because \(M=1\) bounds the reconstruction loss. 
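As a quick numerical sanity check of Equation (13), the required size of the first window can be computed directly; the call below reproduces the value \(n_1\geq 32\) quoted above (taking \(\kappa=1\) for simplicity, which is an assumption of this illustration rather than a recommendation).

```python
import math

def min_window_size(sigma1, eps, M=1.0, delta=0.05, kappa=1.0):
    """Minimum size of the first window according to Equation (13)."""
    return math.ceil(2 * math.log(2 / delta)
                     * (sigma1 ** 2 / (kappa * eps) ** 2
                        + M / (3 * kappa * eps)))

print(min_window_size(sigma1=0.1, eps=0.1))   # -> 32, matching the example above
```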
### Detecting the change subspace After detecting a change, we identify the change subspace. Restricting the encoding size to \(d^{\prime}<d\) forces the model to learn relationships between different input dimensions. As a result, the loss observed for dimension \(j\) contains not only information about the change in that dimension (i.e., the marginal distribution in \(j\) changes), but also about correlations influencing dimension \(j\). Hence, we can detect changes in the marginal- and joint-distributions by evaluating in which dimensions the loss changed the most. Algorithm 1 describes how we identify change subspaces. For each dimension \(j\), we compute the average loss (the squared error in dimension \(j\)) before and after \(t^{*}\), denoted \(\hat{\mu}_{1,t^{*}}^{j},\hat{\mu}_{t^{*}+1,t}^{j}\) (lines 5 and 6), and the standard deviation \(\sigma_{1,t^{*}}^{j},\sigma_{t^{*}+1,t}^{j}\) (lines 6 and 7). We then evaluate Equation (5), returning an upper bound on the \(p\)-value in the range \((0,4]\) for dimension \(j\) (line 9). If \(p_{j}<\tau\in[0,4]\), an external parameter for which we give a recommendation later on, we add \(j\) to the change subspace (lines 10 and 11). ``` 1:\((x_{1},\hat{x}_{1}),\ldots,(x_{t},\hat{x}_{t})\), \(t^{*}\) 2:procedureSubspace 3:\(D^{*}\leftarrow\emptyset\) 4:for all\(j\in 1,\ldots,d\)do 5:\(s\leftarrow\left((x_{i,j}-\hat{x}_{i,j})^{2}\ \forall i\in 1,\ldots,t\right)\) 6:\(\hat{\mu}_{1,t^{*}}^{j}=\frac{1}{t^{*}}\sum_{i=1}^{t^{*}}s_{i},\quad\hat{\mu }_{t^{*}+1,t}^{j}=\frac{1}{t-t^{*}}\sum_{i=t^{*}+1}^{t}s_{i}\) 7:\(\sigma_{1,t^{*}}^{j}=\sqrt{\frac{1}{t^{*}}\sum_{i=1}^{t^{*}}\left(s_{i}-\hat {\mu}_{1,t^{*}}^{j}\right)^{2}}\) 8:\(\sigma_{t^{*}+1,t}^{j}=\sqrt{\frac{1}{t-t^{*}}\sum_{i=t^{*}+1}^{t}\left(s_{i} -\hat{\mu}_{t^{*}+1,t}^{j}\right)^{2}}\) 9:\(p_{j}\leftarrow\) Evaluate Equation (5) \(\triangleright\) Bernstein score 10:if\(p_{j}<\tau\)then 11:\(D^{*}\gets D^{*}\cup\{j\}\) 12:Return\(D^{*}\) ``` **Algorithm 1** Identification of change subspaces. ### Quantifying change severity ABCD provides a measure of change severity in the affected subspace, based on the assumption that the loss in the change subspace increases with severity. Hence, we compute the average loss observed in \(D^{*}\) before and after the change, \[\hat{\mu}_{1,t^{*}}^{D^{*}}=\frac{1}{|D^{*}|t^{*}}\sum_{i=1}^{t^{*}}\sum_{j\in D ^{*}}L_{i,j},\quad\hat{\mu}_{t^{*}+1,t}^{D^{*}}=\frac{1}{|D^{*}|(t-t^{*})}\sum_{ i=t^{*}+1}^{t}\sum_{j\in D^{*}}L_{i,j} \tag{14}\] and the standard deviation observed before the change: \[\sigma_{1,t^{*}}^{D^{*}}=\sqrt{\frac{1}{t^{*}}\sum_{i=1}^{t^{*}}\left(\hat{\mu }_{i}^{D^{*}}-\hat{\mu}_{1,t^{*}}^{D^{*}}\right)^{2}}\text{ with }\hat{\mu}_{i}^{D^{*}}=\frac{1}{|D^{*}|}\sum_{j\in D^{*}}L_{i,j} \tag{15}\] We then standard-normalize the average loss \(\hat{\mu}_{t^{*}+1}^{D^{*}}\) observed after the change: \[\Delta=\frac{\left|\hat{\mu}_{t^{*}+1,t}^{D^{*}}-\hat{\mu}_{1,t^{*}}^{D^{*}} \right|}{\sigma_{1,t^{*}}^{D^{*}}}\in\mathbb{R}^{+} \tag{16}\] Intuitively, \(\Delta\) is the standard deviation of model's loss on the new distribution. ### Working with windows In comparison to most approaches, ABCD evaluates multiple possible change points within an adaptive time interval \([1,\ldots,t]\). This frees the user from choosing the window size a-priori and allows to detect changes at variable time scales. Next, we discuss how to efficiently evaluate those time points. 
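Before turning to the window mechanics, the subspace test of Algorithm 1 and the severity score of Equations (14)-(16) can be summarised in a short sketch; it reuses the bernstein_bound helper from the earlier snippet, and the function name is illustrative.

```python
import numpy as np

def change_subspace_and_severity(sq_errors, t_star, tau=2.5):
    """sq_errors: array of shape (t, d) with per-dimension squared errors L_{i,j}.
    Returns the change subspace D* (Algorithm 1) and the severity Delta (Eq. 16)."""
    before, after = sq_errors[:t_star], sq_errors[t_star:]
    subspace = []
    for j in range(sq_errors.shape[1]):
        p_j = bernstein_bound(before[:, j].mean(), before[:, j].var(), len(before),
                              after[:, j].mean(), after[:, j].var(), len(after))
        if p_j < tau:                      # dimension j belongs to the change subspace
            subspace.append(j)
    if not subspace:
        return subspace, 0.0
    mu_before = before[:, subspace].mean()                    # Eq. (14), pooled over D*
    mu_after = after[:, subspace].mean()
    sigma_before = before[:, subspace].mean(axis=1).std()     # Eq. (15)
    severity = abs(mu_after - mu_before) / sigma_before       # Eq. (16)
    return subspace, severity
```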
#### 4.5.1 Maintaining loss statistics online To avoid recomputing average loss values and their variance for multiple time points every time new observations arrive, we store Welford aggregates \(A_{1,k}\) summarizing the stream in the interval \([1,\ldots,k]\). Each aggregate \(A_{1,k}\) is a tuple containing the average loss \(\hat{\mu}_{1,k}\) and the sum of squared differences from that average, \(ssd_{1,k}=\sum_{j=1}^{k}\left(L_{j}-\hat{\mu}_{1,k}\right)^{2}\). We store these aggregates for the time interval \([1,\ldots,t]\). **Creating a new aggregate.** Every time a new observation with loss \(L_{t}\) arrives, we create a new aggregate based on the previous aggregate \(A_{1,t-1}=(\hat{\mu}_{1,t-1},ssd_{1,t-1})\) in \(\mathcal{O}(1)\) using Welford's algorithm (Knuth, 1997): \[\hat{\mu}_{1,t}=\hat{\mu}_{1,t-1}+\frac{1}{t}(L_{t}-\hat{\mu}_{1,t-1}) \tag{17}\] \[ssd_{1,t}=ssd_{1,t-1}+(L_{t}-\hat{\mu}_{1,t-1})\left(L_{t}-\hat{\mu}_{1,t}\right) \tag{18}\] **Computing the statistics.** Two aggregates \(A_{1,k}\) and \(A_{1,t}\), with \(t>k\), overlap in the time interval \([1,\ldots,k]\). We leverage this overlap to derive an aggregate \(A_{k+1,t}=(\hat{\mu}_{k+1,t},ssd_{k+1,t})\) representing the time interval \([k+1,\ldots,t]\). Equations (19) and (20) are based on Chan's method for combining variance estimates of non-overlapping samples (Chan et al, 1982). \[\hat{\mu}_{k+1,t}=\frac{1}{t-k}(t\hat{\mu}_{1,t}-k\hat{\mu}_{1,k}) \tag{19}\] \[ssd_{k+1,t}=ssd_{1,t}-ssd_{1,k}-\frac{k(t-k)}{t}\left(\hat{\mu}_{1,k}-\hat{\mu}_{k+1,t}\right)^{2} \tag{20}\] From \(ssd_{1,k}\) and \(ssd_{k+1,t}\) we can compute the sample variances as follows: \[\sigma_{1,k}^{2}=\frac{ssd_{1,k}}{k-1},\quad\sigma_{k+1,t}^{2}=\frac{ssd_{k+1,t}}{t-k-1} \tag{21}\] **Derivation.** Consider two non-overlapping samples \(A=\{x_{1},\ldots,x_{m}\}\) and \(B=\{y_{1},\ldots,y_{n}\}\) of a real random variable. Let \(T_{A}=\sum_{i=1}^{m}x_{i}\) and \(T_{B}=\sum_{i=1}^{n}y_{i}\) be the sums of the samples and \(ssd_{A}=\sum_{i=1}^{m}(x_{i}-m^{-1}T_{A})^{2}\) and \(ssd_{B}=\sum_{i=1}^{n}(y_{i}-n^{-1}T_{B})^{2}\) be the sums of squared distances from the mean. For the union of both sets \(AB=A\cup B\) we have \(T_{AB}=T_{A}+T_{B}\), which is equivalent to \((m+n)\hat{\mu}_{AB}=m\hat{\mu}_{A}+n\hat{\mu}_{B}\). Solving for \(\hat{\mu}_{B}\) gives \[\hat{\mu}_{B}=\frac{m+n}{n}\hat{\mu}_{AB}-\frac{m}{n}\hat{\mu}_{A}. \tag{22}\] Substituting \(n=t-k\), \(m=k\), \(\hat{\mu}_{A}=\hat{\mu}_{1,k}\), \(\hat{\mu}_{B}=\hat{\mu}_{k+1,t}\), and \(\hat{\mu}_{1,t}=\hat{\mu}_{AB}\) gives Equation (19); next we derive Equation (20). Chan et al (1982) state: \[ssd_{AB}=ssd_{A}+ssd_{B}+\frac{m}{n(m+n)}\left(\frac{n}{m}T_{A}-T_{B}\right)^{2}, \tag{23}\] which is equivalent to \[ssd_{AB}=ssd_{A}+ssd_{B}+\frac{m}{n(m+n)}\left(n\left(\frac{1}{m}T_{A}-\frac{1}{n}T_{B}\right)\right)^{2}=ssd_{A}+ssd_{B}+\frac{m}{n(m+n)}\left(n\left(\hat{\mu}_{A}-\hat{\mu}_{B}\right)\right)^{2}. \tag{24}\] Solving for \(ssd_{B}\), applying the former substitutions, and setting \(ssd_{A}=ssd_{1,k}\), \(ssd_{B}=ssd_{k+1,t}\), and \(ssd_{1,t}=ssd_{AB}\) results in Equation (20). ### Implementation #### Algorithm One can implement ABCD as a recursive algorithm, shown in Algorithm 2, which restarts every time a change occurs. We keep a data structure \(W\) that contains the aggregates, instances, and reconstructions. \(W\) can either be empty, or, in the case of a recursive execution, already contain data from the previous execution. Table 2 summarizes ABCD's external parameters.
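Since \(W\) stores exactly these aggregates, a compact sketch of the bookkeeping in Equations (17)-(21) may be helpful before the pseudocode; class and function names are illustrative.

```python
class Aggregate:
    """Welford-style aggregate (mu, ssd) over the loss values L_1..L_n."""
    def __init__(self, n=0, mu=0.0, ssd=0.0):
        self.n, self.mu, self.ssd = n, mu, ssd

    def update(self, loss):
        """Extend A_{1,t-1} to A_{1,t} with a new loss value (Eqs. 17-18)."""
        self.n += 1
        delta = loss - self.mu
        self.mu += delta / self.n
        self.ssd += delta * (loss - self.mu)

def suffix_stats(agg_1k, agg_1t):
    """Derive mean and variance on [k+1, t] from two prefix aggregates
    via Chan's method (Eqs. 19-21)."""
    k, t = agg_1k.n, agg_1t.n
    mu = (t * agg_1t.mu - k * agg_1k.mu) / (t - k)
    ssd = agg_1t.ssd - agg_1k.ssd - k * (t - k) / t * (agg_1k.mu - mu) ** 2
    var = ssd / (t - k - 1)
    return mu, var
```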
Prior to execution, our algorithm must first obtain a model of the current data from an initial sample of size \(n_{min}\). If necessary, ABCD allows enough instances to arrive (lines 5-7). Larger choices of \(n_{min}\) allow for better approximations of the current distribution but delay change detection. Hence our recommendation is to set \(n_{min}\) as small as possible to still learn the current distribution; a default of \(n_{min}=100\) has worked well for us. Afterwards, the algorithm trains the model using the instances in \(W\) (lines 8-9). ABCD can in principle work with various encoder-decoder models; thus we deal with tuning the model only on a high level. Nonetheless, we give recommendations in our sensitivity study later on. After model training, ABCD detects changes. It reconstructs each new observation \(x_{t+1}\) (line 11), creates a new aggregate \(A_{1,t+1}\) (line 12), and adds \(w_{t+1}\coloneqq(A_{1,t+1},\hat{x}_{t+1},x_{t+1})\) to \(W\) (lines 13-14). Our approach then computes change score \(p\) and change point \(t^{*}\) (lines 15-16). If \(p<\delta\), it detects a change. Once ABCD detects a change, it identifies the corresponding subspace and evaluates its severity (lines 21-22). Then it adapts \(W\) by dropping the outdated part of the window (line 23), including all information obtained with the outdated model. At last, we restart ABCD with the adapted window (line 24). Discussion In the worst case our approach consumes linear time and memory because \(W\) grows linearly with \(t\). However, we can simply restrict the size of \(W\) to \(n_{max}\) items for constant memory or evaluate only \(k_{max}\) window splits for constant runtime. In the latter case we split \(W\) at every \(t/k_{max}\)th time point. Regarding \(n_{max}\), it is beneficial that the remaining aggregates still contain information about all observations in \((1,\ldots,t)\). Hence, ABCD considers the _entire_ past since the last change even though one restricts the size of \(W\). ABCD can work with any encoder-decoder model, such as deep neural networks. However, handling a high influx of new observations faster than the model's processing capability can be challenging. Assuming that \(\psi\circ\phi\in\mathcal{O}(g(d))\) for some function \(g\) of dimensionality \(d\), the processing time of a single instance during serial execution is in \(\mathcal{O}\left(g(d)+k_{max}\right)\). Nevertheless, both the deep architecture components and the computation of the change score (cf. Equation 8) can be executed in parallel using specialized hardware. Dimensionality reduction techniques are often already present in data stream mining pipelines, for example as a preprocessing step to improve the accuracy of a classifier (Yan et al, 2006). Reusing an existing dimensionality reduction model makes it is easy to integrate ABCD into an existing pipeline. \begin{table} \begin{tabular}{l l} \hline \hline Parameter & Description \\ \hline \(\eta\in(\nicefrac{{1}}{{d}},1)\) & The relative dimensionality of the encoding \\ \(\delta\in(0,1)\) & The allowed probability of false alarms \\ \(\tau\in(0,4]\) & The threshold for detecting subspaces \\ \hline \hline \end{tabular} \end{table} Table 2: External parameters. 
``` 1:The model \(\psi\circ\phi\), threshold \(\delta\), threshold \(\tau\) 2:procedureABCD(\(W\)) 3:\(\psi\circ\phi\gets Null\); \(t\leftarrow|W|\)\(\triangleright\) Model not yet trained 4:while new instance \(x_{t+1}\)do 5:if\(t<n_{min}\)then\(\triangleright\) Warm up 6:\(w_{t+1}\leftarrow(-,-,x_{t+1})\) 7:\(W\gets W\)\(||\)\(w_{t+1}\) 8:elseif\(\psi\circ\phi=Null\)then 9:\(\psi\circ\phi\leftarrow\) TrainModel(W) 10:else 11:\(\hat{x}_{t+1}\leftarrow\psi(\phi(x_{t+1}))\)\(\triangleright\) Reconstruct 12:\(A_{t+1}\leftarrow\) update aggregate \(A_{t}\) with \(L_{t+1}\) 13:\(W\gets W\)\(||\)\((A_{t+1},\hat{x}_{t+1},x_{t+1})\) 14:\(p\leftarrow\) Equation (8)\(\triangleright\) Bernstein score 15:\(t^{*}\leftarrow\) argmin\({}_{k}\) of Equation (8) 16:if\(p<\delta\)then\(\triangleright\) A change occurred 17:\(D^{*}\leftarrow\) Subspace(\(W,t^{*},\tau\)) 18:\(\Delta\leftarrow\) Severity(\(W,t^{*},D^{*}\)) 19:\(W\leftarrow\{(-,-,X_{i})\)\(\forall w_{i}\in W:i>t^{*}\}\)\(\triangleright\) Restart 20:ABCD(\(W\)) ``` **Algorithm 2** Adaptive Bernstein Change Detector (ABCD) ## 5 Experiments This section describes our experiments and results. We first describe the experimental setting (Section 5.1). Then we analyze ABCD's change detection performance (Section 5.3), its ability to find change subspaces and quantify change severity (Section 5.4), and its parameter sensitivity (Section 5.5). ### Algorithms We evaluate ABCD with different encoder-decoder models: (1) Principal Component Analysis (PCA) (\(d^{\prime}=\eta d\)), (2) Kernel-PCA (\(d^{\prime}=\eta d\), RBF-kernel), and (3) a standard fully-connected autoencoder model with one hidden ReLU layer (\(d^{\prime}=\eta d\)) and an output layer with sigmoid activation. For (1) and (2), we rely on the default scikit-learn implementations. We implement the autoencoder (3) in pytorch and train it through gradient descent using \(E\) epochs and an Adam optimizer with default parameters according to Kingma and Ba (2015). We compare ABCD with AdwinK, IKS, IBDD, WATCH, and D3 (c.f. Section 2). We evaluate for each approach a large grid of parameters, shown in Table 3. Whenever possible, the evaluated grids of hyperparameters for competitors base on recommendations in respective papers. Otherwise, we choose them based on preliminary experiments. For ABCD, we evaluate larger and smaller values for \(\delta\), \(\eta\) and \(E\) to observe our approach's sensitivity to those parameters. The choice of \(\tau=2.5\) is our recommended default based on our sensitivity study in Section 5.5. Last, we set \(n_{min}=100\) and \(k_{max}=20\), minimum values that have worked well in preliminary experiments. ### Datasets There are not many public benchmark data streams for change detection. Thus we generate our own from seven real-world (rw) and synthetic (syn) classification datasets, similar to (Faber et al, 2021; Faithfull et al, 2019). We simulate changing data streams2 by sorting the data by label, unless stated otherwise. If the label changes, a change has occurred. In real-world data streams, the number of observations between changes depends on each dataset, reported below. In the synthetic streams, we introduce changes every 2000 observations, which is a relatively large interval, to assess whether some approaches generate many false alarms. 
The generators base on the following datasets: Footnote 2: Available here [https://github.com/heymarco/AdaptiveBernsteinChangeDetector](https://github.com/heymarco/AdaptiveBernsteinChangeDetector) * **HAR (rw):** The dataset _Human Activity Recognition with Smartphones_(Anguita et al, 2013) (\(d=561\)) bases on smartphone accelerometer \begin{table} \begin{tabular}{l l l} \hline Algorithm & Parameter & Values \\ \hline ABCD & model & PCA, Kernel PCA, Autoencoders \\ & \(\delta\) & \(0.2,0.05,0.01\) \\ & \(\eta\) & \(0,3,0.5,0.7\) \\ & \(E^{\dagger}\) & \(20,50,100\) \\ & \(n_{min}\); \(k_{max}\); \(\tau\) & 100; 20; 2.5 \\ \hline AdwinK & \(k\) & \(0.01^{*}_{0}\)\(0.05^{*}_{0}\)\(0.1^{*}_{0}\)\(0.2^{*}_{0}\)\(0.3^{*}_{0}\)\(0.4^{*}_{0}\)\(0.5^{*}\) \\ & \(\delta\) & 0.05 \\ \hline D3 & \(\omega\) & \(100^{*}_{2}\)\(250^{*}_{0}\)\(500^{*}\) \\ & \(\rho\) & \(0.1^{*}_{0}\)\(0.2^{*}_{0}\)\(0.3^{*}_{0}\)\(0.4^{*}_{0}\)\(0.5^{*}\) \\ & \(\tau\) & \(0.6^{*}_{0}\)\(0.7^{*}_{0}\)\(0.8^{*}_{0}\)\(0.9^{*}\) \\ & model & Logistic Regression\({}^{*}_{0}\) Decision Tree \\ \hline IBDD & \(\omega\) & \(100,200,300\) \\ & \(m\) & \(10,20,50,100\) \\ \hline IKS & \(W\) & \(100^{*}_{0}\)\(200,500^{*}\) \\ & \(\delta\) & \(0.05\) \\ \hline WATCH\({}^{\ddagger}\) & \(\omega\) & \(500,1000\) \\ & \(\kappa\) & \(100\) \\ & \(\epsilon\) & \(2,3\) \\ & \(\mu\) & \(1000,2000\) \\ \hline \multicolumn{3}{l}{\({}^{*}\) used or recommended in the respective papers} \\ \multicolumn{3}{l}{\({}^{\dagger}\) only relevant for autoencoders} \\ \multicolumn{3}{l}{\({}^{\ddagger}\) authors did not recommend parameters for their approach} \\ \end{tabular} \end{table} Table 3: Evaluated approaches and their parameters. and gyroscope readings for different actions a person performs. A change occurs on average every 1768 observations. * **GAS (rw):** This data set (Vergara et al, 2011) (\(d=128\)) contains data from 16 sensors exposed to 6 gases at various concentrations. A change occurs on average every 2265 observations. * **LED (syn):** The LED generator samples instances representing a digit on a seven segment display. It contains 17 additional random dimensions. We add changes by varying the probability of bit-flipping in the relevant dimensions. * **RBF (syn):** The RBF generator (Bifet et al, 2010) starts by drawing a fixed number of centroids. For each new instance, the generator chooses a centroid at random and adds Gaussian noise. To create changes, we increment the seed of the generator resulting in different centroids. We then use samples from the new generator in a subspace of random size. * **MNIST, FMNIST, and CIFAR (syn):** Those data generators sample from the image recognition datasets MNIST (LeCun et al, 1998), Fashion MNIST (FMNIST) (Xiao et al, 2017) (\(d=784\)), and CIFAR (Krizhevsky et al, 2009) (\(d=1024\), grayscale). Changes can occur rapidly ("abrupt" or "sudden") or in time intervals ("gradual" or "incremental"). The shorter the interval, the more sudden the change. We vary the interval size between 1 and 300 unless stated otherwise. Real-world and image data do not have a ground truth for change subspaces and severity. Thus we generate three additional data streams: * **HSphere (syn):** This generator draws from a \(d^{*}\)-dimensional hypersphere bound to \([0,1]\) and adds \(d-d^{*}\) random dimensions. We vary the radius and center of the hypersphere to introduce changes. The change subspace contains those dimensions that define the hypersphere. 
* **Normal-M/V (syn):** These generators sample from a \(d^{*}\)-dimensional normal distribution and add \(d-d^{*}\) random dimensions. For type **M**, changes affect the distribution's mean, for **V** we change the distribution's variance. ### Change point detection We use precision, recall, and F1-score to evaluate the performance of the approaches at detecting changes. We define true positives (TP), false positives (FP) and false negatives (FN) as follows: * **TP:** A change was detected before the next change. * **FN:** A change was not detected before the next change. * **FP:** A change was detected although no change occurred. Also, we report the mean time until detection (MTD), indicating the average number of instances until a change is detected. Figure 2 shows F1-score, precision, recall, and MTD for all datasets and algorithms, as well as a column "Average" that summarizes across datasets. Each box contains the results for the grid of hyperparameters shown in Table 3.

Figure 2: Change Point Detection: Results for different algorithms and data-sets; each box contains the results for the evaluated grid of parameters.

We see that our approach outperforms its competitors w.r.t. F1-score and precision. It is also competitive in terms of recall, though it loses against IKS, IBDD, and WATCH. These approaches seem overly sensitive. Further, we observe that the interquartile range for ABCD is smaller than for its competitors. This indicates that ABCD is robust to the choice of parameters. One reason is that ABCD uses adaptive windows, thereby eliminating the effect of a window size parameter (demonstrated in Section 5.6). Another reason is that ABCD detects changes in reconstruction loss irrespective of the actual quality of the reconstructions. For instance, Kernel PCA and PCA produce reconstructions of different accuracy in our experiments. However, for both models, the average accuracy changes when the stream changes, which is what our algorithm detects. Note that our reported results do not yield information about the actual accuracy of the underlying encoder-decoder models. ABCD has a higher MTD than D3, IBDD, and IKS, i.e., it requires more data to detect changes. However, those competitors are much less conservative and detect many more changes than exist in the data. Hence they have low precision but high recall -- this leads to a lower MTD. Table 4 reports the results of all approaches with their best hyperparameters. WATCH achieves relatively high F1-score and precision. In fact, WATCH is our strongest competitor, although we still outperform it by at least 8%. Also, WATCH has an MTD of \(623.3\), which is higher than ABCD's. Further, ABCD has much higher precision than its competitors. We assume this is because ABCD (1) leverages the relationships between dimensions, in comparison to AdwinK, IKS, or IBDD, and (2) learns those relationships more effectively than, say, D3 or WATCH. For example, we observed in our experiments that WATCH was frequently unable to accurately approximate the Wasserstein distance in high-dimensional data. ABCD has lower recall than most competitors, partly due to their over-sensitivity. In this regard, our approach might benefit from application-specific encoder-decoder models that encode the data more effectively. ### Change subspace and severity We now evaluate change subspace identification and change severity estimation.
We set \(d=\{24,100,500\}\) and vary the change subspace size \(d^{*}\) randomly in \([1,d]\) (except for LED, where the subspace always contains dimensions 1-7). We set the ground truth for the severity to the absolute difference between the parameters that define the concepts, e.g., the hypersphere radius in HSphere before and after the change. We report an approach's subspace detection accuracy (SAcc.) and Spearman's correlation between the detected severity and the ground truth. We also report the F1-score for detecting change points. Figure 3 shows our results. As before, each box summarizes the results for the grid of evaluated hyperparameters. ABCD outperforms its competitors w.r.t. F1. Comparing the two approaches, AdwinK and IKS, which monitor each dimension separately, we see that the former can only detect changes that affect the mean of the marginal distributions (i.e., on Norm-M, LED). At the same time, the latter can also detect other changes (e.g., changes in variance). This is expected since AdwinK compares the mean in two windows while IKS compares the empirical distributions.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Approach & F1 & Prec. & Rec. & MTD \\ \hline ABCD (ae) & 0.87 & 0.96 & 0.82 & 273 \\ ABCD (kpca) & 0.85 & 0.97 & 0.78 & 265 \\ ABCD (pca) & 0.77 & 0.97 & 0.67 & 399 \\ \hline AdwinK & 0.45 & 0.53 & 0.47 & 406 \\ D3 & 0.51 & 0.42 & 0.86 & 251 \\ IBDD & 0.45 & 0.30 & 0.97 & 394 \\ IKS & 0.19 & 0.05 & 0.99 & 80 \\ WATCH & 0.69 & 0.54 & 0.97 & 623 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of approaches with their best hyperparameter configuration w.r.t. F1 score averaged over all data sets.

Figure 3: Results for evaluating change subspace and severity.

Regarding subspace detection, our approach achieves an accuracy of 0.74 for PCA, 0.79 for autoencoders, and 0.81 for Kernel PCA. AdwinK performs similarly well when changes affect the mean of the marginal distributions. Except on LED, IKS performs worse than ABCD and AdwinK, presumably because IKS issues an alarm as soon as a single dimension changed. The estimates of our approach correlate more strongly with the ground truth than those of competitors, with an average of 0.44. We expect more specialized models to perform even better. On LED, all methods tested appear to struggle to separate patterns from noise, resulting in poor noise-level estimates and low correlation scores. ### Parameter sensitivity of ABCD #### Sensitivity to \(\eta\) Figure 3(a) plots F1 for different datasets over \(\eta\). We observe that the size of the bottleneck does not significantly impact ABCD's change detection performance. One observes a similar picture in Figure 3(b), which shows the subspace detection accuracy and Spearman's \(\rho\). The influence of \(\eta\) on both metrics is low. As mentioned earlier, we assume that the presence of _change_ in reconstruction loss, rather than the quality of reconstruction itself, is crucial for ABCD. An exception is the LED dataset, for which Spearman's \(\rho\) increases with \(\eta\). Here, we hypothesize that a larger bottleneck allows the encoder-decoder model to filter out the input noise in LED more effectively, leading to a more accurate estimation of severity. #### Sensitivity to \(E\) Figure 3(c) plots our approach's performance on different datasets for different choices of \(E\). Overall, our approach seems to be robust to the choice of \(E\).
On LED, however, larger choices of \(E\) lead to substantial improvements in F1-score. The reason may be that the autoencoder does not converge to a proper representation of the data for small \(E\). To avoid this, we recommend choosing \(E\geq 50\) and to increase the value if one observes that the model has not yet converged sufficiently. #### Sensitivity to \(\tau\) Figure 3(d) investigates how the choice of \(\tau\) affects the performance of ABCD at detecting subspaces. Since the change score in Equation (5) provides an upper bound on the probability that a change occurred, the function can return values greater than 1, i,e,. in the range \((0,4]\). Hence we vary \(\tau\) in that range and record the obtained subspace detection accuracy. For all approaches we achieve optimal accuracy at \(\tau\approx 2.5\). This is probably because some dimensions could change more severely than others, resulting in variations of the change scores observed in the different dimensions of the change subspace. Based on our findings we recommend \(\tau=2.5\) as default. ### Ablation study on window types Next, we investigate the effect of different window types on change detection performance. We evaluate those commonly found in change detection literature (and in our competitors) and couple them with encoder-decoder models and the probability bound in Equation (5). In particular, we compare: (1) Adaptive Figure 4: Sensitivity of our approach to its hyperparameters. windows (AW), as in ADWIN, AdwinK, and our approach, (2) fixed reference windows (RW), as in IKS, (3) sliding windows (SW), as in WATCH, and (4) jumping windows (JW), as in D3. The latter "jump" every \(\rho|W|\) instances. We evaluate the hyperparameters mentioned in Table 3. For example, because D3 uses jumping windows, we include the evaluated hyperparameters for D3 in our evaluation of jumping windows. In addition, we extend the grid with other reasonable choices since we already preselected those in Table 3 for our competitors in a preliminary study. For ABCD we use \(\eta=0.5\) and \(E=50\). Table 5 reports the average over all hyperparameter combinations. We see that adaptive windowing yields higher F1-score and recall than other techniques, while precision remains high (\(\geq 0.95\)). SWs have a lower MTD than AEs and hence seem to require a fewer instances until they detect a change. This is expected: in contrast to sliding windows, adaptive windows allow the detection of even slight changes after a longer period of time, resulting in both higher MTD and recall. ### Runtime analysis #### Comparison with competitors Figure 4(a) shows the mean time per observation (MTPO) of ABCD and its competitors for \(d\in\{10,100,1000,10,000\}\) running single-threaded. The results are averaged over all evaluated parameters (Table 3). ABCD (id) replaces the encoder-decoder model with the identity which does not cause overhead. This allows measuring how much the encoder-decoder model influences ABCD's runtime. The results confirm that the runtime of ABCD alone, i.e, without the encoding-decoding-process, remains unaffected by a stream's dimensionality. We observe that our approach is able to process around 10,000 observations per second for \(d\leq 100\). This is more than IKS, WATCH and AdwinK (except at \(d=10\)) but slower than D3 and IBDD. The reason is that our approach \begin{table} \begin{tabular}{l l c c c c} \hline \hline Model & Window & F1 & Prec. & Rec. 
& MTD \\ \hline AE & AW & **0.83** & 0.95 & **0.78** & 455.6 \\ & RW & 0.53 & **1.00** & 0.21 & 403.6 \\ & SW & 0.62 & **1.00** & 0.40 & **207.2** \\ & JW & 0.52 & 0.79 & 0.46 & 239.1 \\ \hline KPCA & AW & **0.83** & 0.99 & **0.75** & 309.0 \\ & RW & 0.56 & **1.00** & 0.23 & 456.3 \\ & SW & 0.68 & **1.00** & 0.49 & **202.8** \\ & JW & 0.50 & 0.77 & 0.33 & 266.2 \\ \hline PCA & AW & **0.72** & 0.98 & **0.55** & 355.3 \\ & RW & 0.36 & **1.00** & 0.09 & 400.0 \\ & SW & 0.53 & **1.00** & 0.33 & **206.7** \\ & JW & 0.46 & 0.75 & 0.20 & 239.9 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation: Using encoder-decoder models with different window types. evaluates \(k_{max}\) possible change points in each time step. In high-dimensional data, our competitors' MTPO grows faster than ABCD with PCA or KPCA; in fact, ABCD (pca) is second fastest after D3 for \(d\geq 1000\). An exception is WATCH at \(d=10000\). This is due to an iteration cap for approximating the Wasserstein distance restricting the approach's MTPO. #### 5.7.2 Runtime depending on window size Next, we investigate ABCD's runtime for different choices of \(k_{max}\) and \(\eta\). We run this experiment on a single CPU thread. For all three evaluated models, the encoding-decoding of an observation has a time complexity of \(\mathcal{O}(\eta d^{2})\); hence, ABCD's processing time of one instance is in \(\mathcal{O}(\eta d^{2}+k_{max})\). We therefore expect a quadratic increase in execution time with dimensionality and a linear increase with \(\eta\) and \(k_{max}\) when running on a single core. Figure 5: Runtime analysis of ABCD. The results in Figure 5b show the influence of \(k_{max}\) on the execution time: \(k_{max}\) effectively restricts the MTPO as soon as \(|W|=k_{max}\). Afterwards, MTPO remains unaffected by \(|W|\). This also confirms that one can evaluate different possible change points in constant time using the proposed aggregates. We show the runtime for different choices of bottleneck-size \(\eta\) in Figure 5c. \(\eta\) has little influence on the runtime of ABCD with PCA and Kernel-PCA. However, coupled with an autoencoder (implemented in pytorch) we observe the expected linear increase in execution time from \(0.1\,\mathrm{ms}\) for \(\eta=0.3\) to \(0.3\,\mathrm{ms}\) for \(\eta=0.7\). Considering that change detection performance has shown to remain stable even for smaller choices of \(\eta\), we recommend \(\eta\leq 0.5\) as default. ## 6 Conclusion We presented a change detector for high-dimensional data streams, called ABCD, that monitors the reconstruction loss of an encoder-decoder-model in an adaptive window with a change score based on Bernstein's inequality. Our approach identifies changes and change subspaces, and provides a severity measure that correlates with the ground truth. Since encoder-decoder models are already used in many domains (Rani et al, 2022), our approach is widely applicable. In the future, it would thus be interesting to test ABCD with application or data specific encoder-decoder models. For example, one might observe even better performance on streams of image data when applying convolutional autoencoders. ABCD could also benefit from a theoretical analysis of the relationship between changes in data distribution and the loss of different encoder-decoder models. Last, we want to extend ABCD so that it can distinguish changes that overlap in time, a situation that becomes more likely with growing dimensionality.
2307.03588
Exploring the chemodynamics of metal-poor stellar populations
Metal-poor stars are key for studying the formation and evolution of the Galaxy. Evidence of the early mergers that built up the Galaxy remains in the distributions of abundances, kinematics, and orbital parameters of its stars. Several substructures resulting from these mergers have been tentatively identified in the literature. We conduct a global analysis of the chemodynamic properties of metal-poor stars. Our aim is to identify signs of accreted and in situ stars in different regions of the parameter space and to investigate their differences and similarities. We selected a sample of about 6600 metal-poor stars with [Fe/H] $\leq$ -0.8 from DR3 of the GALAH survey. We used unsupervised machine learning to separate stars in a parameter space made of two normalised orbital actions, plus [Fe/H] and [Mg/Fe], without additional a priori cuts on stellar properties. We divided the halo stars in four main groups. All groups exhibit a significant fraction of in situ contamination (ISC). Accreted stars of these groups have very similar chemical properties, except for those of the group of stars with very retrograde orbits. This points to at most two main sources of accreted stars in the current sample, the major one related to Gaia-Enceladus (GE) and the other possibly related to Thamnos and/or Sequoia. Stars of GE are r-process enriched at low metallicities, but a contribution of the s-process appears with increasing metallicity. A flat trend of [Eu/Mg] as a function of [Fe/H] suggests that only core collapse supernovae contributed to r-process elements in GE. To better characterise accreted stars in the low metallicity regime, high precision abundances and guidance from chemical evolution models are needed. It is possible that ISC in samples of accreted stars has been underestimated. This can have important consequences for attempts to estimate the properties of the original systems.
André Rodrigo da Silva, Rodolfo Smiljanic
2023-07-07T13:33:12Z
http://arxiv.org/abs/2307.03588v1
# Exploring the chemodynamics of metal-poor stellar populations ###### Abstract Context:Metal-poor stars are key for studying the formation and evolution of the Galaxy. Evidence of the early mergers that built up the Galaxy remains in the distributions of abundances, kinematics, and orbital parameters of its stars. Several substructures resulting from these mergers have been tentatively identified in the literature. Aims:We conduct a global analysis of the chemodynamic properties of metal-poor stars. Our aim is to identify signs of accreted and in situ stars in different regions of the parameter space and to investigate their differences and similarities. Methods:We selected a sample of about 6600 metal-poor stars with [Fe/H] \(\leq-0.8\) from DR3 of the GALAH survey. We used unsupervised machine learning to separate stars in a parameter space made of two normalised orbital actions, plus [Fe/H] and [Mg/Fe], without additional a priori cuts on stellar properties. Results:We divided the halo stars in four main groups. All groups exhibit a significant fraction of in situ contamination. Accreted stars of these groups have very similar chemical properties, except for those of the group of stars with very retrograde orbits. This points to at most two main sources of accreted stars in the current sample, the major one related to _Gaia_-Enceladus and the other possibly related to Thamons and/or Sequoa. Stars of _Gaia_-Enceladus are r-process enriched at low metallicities, but a contribution of the s-process appears with increasing metallicity. A flat trend of [Eu/Mg] as a function of [Fe/H] suggests that only core collapse supernovae contributed to r-process elements in _Gaia_-Enceladus. Conclusions:To better characterise accreted stars in the low metallicity regime, high precision abundances and guidance from chemical evolution models are needed. It is possible that in situ contamination in samples of accreted stars has been underestimated. This can have important consequences for attempts to estimate the properties of the original systems. ## 1 Introduction The quest to understand how galaxies form is one of the main endeavours of modern-day astrophysics. The Milky Way, our home galaxy, plays a central role in this quest, as stellar populations can be resolved in finer details than is possible for other galaxies. In the modern \(\Lambda\)CDM paradigm (see Springel et al., 2006; Spergel et al., 2007), large galaxies grow hierarchically through the accretion and merging of smaller systems (e.g., Searle & Zinn, 1978; White & Frenk, 1991; Bullock & Johnston, 2005; Bland-Hawthorn & Freeman, 2014). The sequence of these events leaves its imprint on the distributions of stellar properties of the main host galaxy (Helmi, 2008, 2020). Kinematic and orbital stellar parameters carry signs of this accretion history because the exchange of energy and momenta takes longer than the time that has passed since the formation of the Galaxy (Eggen et al., 1962). This process imposes the velocity space with very cold structures, detectable even after many billions of years (see, e.g., Helmi, 2008; Martin et al., 2022; Balbinot et al., 2023, and references therein). In addition, long-lived low-mass stars retain in their stellar atmospheres the chemical abundances from the time and place of their formation. Because of this, the chemical properties of old stars also offer fundamental information needed to reconstruct the history of the Milky Way (Freeman & Bland-Hawthorn, 2002). 
The discovery of the ongoing merger with the Sagittarius dwarf galaxy (Ibata et al., 1994) demonstrated that the hierarchical assembly of the Milky Way continues to this date. Not long after, Helmi et al. (1999) found evidence of halo star streams in the solar neighbourhood, now referred to as Helmi streams, which are relics of the Milky Way formation process (Limberg et al., 2021; Ruiz-Lara et al., 2022). Further substructures (streams and overdensities) in the halo, including debris left by the Sagittarius dwarf, were later uncovered by large photometric surveys (Newberg et al., 2002; Majewski et al., 2003; Belokurov et al., 2006; Juric et al., 2008; Perottoni et al., 2019) and more recently with _Gaia_ data (e.g. Necib et al., 2020; Ibata et al., 2021; Viswanathan et al., 2023). Indeed, thanks to the _Gaia_ mission (Gaia Collaboration et al., 2016), the substructure of the halo and the merger history of the Milky Way are now being revealed in greater detail. _Gaia_ is bringing about a revolution in the field of Galactic archaeology by providing parallaxes and proper motions for more than \(10^{9}\) stars. Using data from _Gaia_, Helmi et al. (2018) and Belokurov et al. (2018) identified an elliptical-like structure in the Toomre and Lindblad diagrams, which they associated with a previous major merger suffered by the Milky Way (with a satellite galaxy now referred to as _Gaia_-Sausage or _Gaia_-Enceladus; hereafter GE). This merger happened about 9.5 Ga1 ago (Gallart et al., 2019; Bonaca et al., 2020; Montalban et al., 2021; Borre et al., 2022; Giribaldi & Smiljanic, 2023). The GE progenitor galaxy is estimated to have had a stellar mass in the range of M\({}_{\bullet}\)\(\sim 10^{8}-\) \(5\times 10^{9}\) M\({}_{\odot}\)(Vincenzo et al., 2019; Feuillet et al., 2020; Mackereth and Bovy, 2020; Limberg et al., 2022; Lane et al., 2023), although see Rey et al. (2023) for evidence that it is not straightforward to infer the mass ratio of past mergers. Observational evidence of additional past mergers has since been discovered. The Sequoia merger is a highly retrograde substructure with weakly bound stars discovered by Barba et al. (2019) and Myoung et al. (2019). A non-exhaustive list of proposed substructures can also include Thamos (Koppelman et al., 2019), which could be the low-energy tail of Sequoia (see e.g. Kordopatis et al., 2020), the Koala (Forbes, 2020) or Kraken (Kruijssen et al., 2019, 2020), among others (see e.g. Donlon et al., 2020; Naidu et al., 2020; Necib et al., 2020; Yuan et al., 2020; Horta et al., 2021; Malhan et al., 2022, and references therein). However, careful work is still necessary to establish the reality and properties of each of these events (e.g. Buder et al., 2022; Donlon et al., 2022; Horta et al., 2023) and to demonstrate whether or not the identified structures actually correspond to distinct events (Jean-Baptiste et al., 2017; Koppelman et al., 2020; Amarante et al., 2022; Pagnini et al., 2022; Rey et al., 2023). To understand the chemodynamic properties of the progenitors of accreted systems, disentangling the stars that originated in the Milky Way from those that were accreted is crucial. There have been several approaches to this in the literature (e.g. Carollo and Chiba, 2021; Naidu et al., 2020; Feuillet et al., 2021; Belokurov and Kravtsov, 2022), but doing so is, of course, not trivial. Even more so given that we work with noisy data. 
As a way to select groups of stars likely dominated by one population or another, several authors have resorted to defining boxes or straight-line cuts on the kinematic, dynamic, and/or chemical parameter space(s). Another approach is to use supervised or unsupervised machine learning methods to separate different groups, overdensities, or clusterings of stars identified as peaks over distributions of properties in multidimensional spaces made up of different chemodynamic quantities (e.g. Buder et al., 2022; Lovdal et al., 2022; Myeong et al., 2022; Shank et al., 2022; Dodds et al., 2023; Giribaldi and Smiljanic, 2023; Ou et al., 2023; Zepeda et al., 2023). However, in many cases, the subsequent discussions usually focus on the properties of the few highlighted stellar groups. The general population of stars that remains as "background" tends to be ignored. Thus, such investigations provide only an incomplete assessment of the characteristics of the Milky Way metal-poor stellar populations. Focussing only on the peaks of the distributions can have a particularly impact on the identification of the trends of chemical evolution of each population part of the observed mixture (Giribaldi and Smiljanic, 2023). Here, we present an investigation that aims for a wider discussion of the properties of old metal-poor stars to try and improve our understanding of accreted and in situ stellar populations. Our analysis tries to limit to a minimum any a priori cut (in kinematics or chemistry) used to define groups of stars of similar properties. We first used unsupervised machine learning methods to separate groups with different properties in a restricted chemodynamic space. We then discuss the more global orbital and chemical properties of these groups to understand if they can be explained by in situ or accreted stars. We are particularly interested in identifying variations of these global properties that can be a sign of different fractions between accreted and in situ stars in each of the groups that are identified. Possible changes in the chemical patterns can also reveal whether one or several merger events are needed to explain the data. The article is divided as follows. In Section 2, the observational data are presented. Section 3 describes the machine learning methods applied to the data. In Section 4, we present the main results and discuss the characteristics of the stellar groups identified in our sample. Finally, Section 5 presents a summary of our findings. ## 2 Observational data ### Chemical abundances The stellar sample we analysed here was taken from Data Release (DR) 3 of the GALactic Archaeology with HERMES survey (GALAH, De Silva et al., 2015; Buder et al., 2021). GALAH is a spectroscopic survey that provides chemical abundances for up to 30 elements. In particular, GALAH is one of the two current surveys that provides abundances for the heavy neutron-capture elements Ba and Eu (the other one being the _Gaia_-ESO Survey, Gilmore et al., 2022; Randich et al., 2022). These elements can be used to trace the contributions of nucleosynthesis through the s- and r-processes. In its DR3, GALAH provides astrophysical parameters (e.g. chemical abundances and radial velocities) for 588 571 stars. For our analysis, we chose to select only metal-poor stars with [Fe/H] \(\leq-0.8\). 
Our goal with this selection is to eliminate most of the population that is dominated by disc stars and to concentrate the analysis on the old metal-poor stellar populations dominated by the halo, where most accreted stellar populations will be found (see Fig. 1). We expect that this simple metallicity cut will preselect a sample of heterogeneous origin, made up both of accreted and in-situ formed stars. The accreted stars can likely be traced to several mergers, but the in situ population is probably no less complex. The in situ part will include halo stars, such as what was called Erebus by Giribaldi and Smiljanic (2023), a fraction of old disc stars (the thick disc and its metal-weak component; see Norris et al., 1985; Morrison et al., 1990; Beers et al., 2014), but also those stars that have been heated to halo orbits (Haywood et al., 2018; Di Matteo et al., 2019; Gallart et al., 2019), such as what has been called Splash by Belokurov et al. (2020) and Aurora by Belokurov and Kravtsov (2022). Although the GE merger was proposed as the agent that heated the orbits of Splash stars (Belokurov et al., 2020), there is the possibility that local interactions between disc stars and gas might also generate such a population (Amarante et al., 2020). Most of what has been identified as the Splash by Belokurov et al. (2020) is more metal rich than our selection criteria; nevertheless, we note that other authors have suggested that this population can extend to lower metallicities, at least down to [Fe/H] \(\sim-1.00\) (Horta et al., 2021; Donlon et al., 2022). Stars formed in a possible starburst related to the GE merger might also be present (Grand et al., 2020; Rey et al., 2023). Furthermore, we stress that we are aware that our metallicity selection also removes part of the metal-rich low-\(\alpha\) tail of the accreted populations belonging to the halo. However, we consider these low-\(\alpha\) stars to have already been extensively discussed in the literature (e.g. Mackereth et al., 2019; Buder et al., 2022; Myeong et al., 2022). The aim of our work is to explore the chemodynamic structure mainly toward the metal-poor region of the parameter space.

Figure 1: Diagram of [Mg/Fe] as a function of [Fe/H] for the whole GALAH sample. Dashed lines indicate values of [Fe/H] \(=-0.8\) dex, \(-0.7\) dex and \(-0.6\) dex.

Applying the metallicity selection mentioned above to the GALAH DR3 catalogue selected a sample of 24 817 stars. We decided to use the GALAH Value-Added Catalogue (VAC), which provides a cross-match of GALAH DR3 and _Gaia_ early DR3 (EDR3). This VAC contains the positions and proper motions provided by _Gaia_ (_Gaia_ EDR3; Gaia Collaboration et al., 2021) and also the Bayesian distances provided by Bailer-Jones et al. (2021). To further clean the sample, we followed the recommendations of the GALAH consortium and kept only the stars that have the flags: flag_sp = 0, snr_c3_iraf \(>\) 30, flag_fe_h = 0 and flag_Mg_fe = 0 (see Buder et al., 2022). As we discuss below, the values of [Fe/H] and [Mg/Fe] are used in the unsupervised machine learning analysis. This selection of high-quality results strongly reduces the final sample to 6 618 stars. We already anticipate here that when abundances of other elements are discussed, the relevant flag for that element is used to restrict the sample to those stars with reliable abundance determination.
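For concreteness, the selection just described could be expressed in a few lines of Python. This is only a sketch: the file name is hypothetical, and the exact column names (beyond the flags quoted above) should be checked against the GALAH DR3 value-added catalogue documentation.

```python
from astropy.table import Table

# Read the GALAH DR3 value-added catalogue cross-matched with Gaia EDR3
# (the file name below is a placeholder).
cat = Table.read("GALAH_DR3_VAC_GaiaEDR3.fits")

# Metallicity cut used to remove most of the disc-dominated population.
metal_poor = cat[cat["fe_h"] <= -0.8]

# Quality cuts recommended by the GALAH consortium (flags quoted in the text;
# column-name capitalisation may differ in the released catalogue).
good = (
    (metal_poor["flag_sp"] == 0)
    & (metal_poor["snr_c3_iraf"] > 30)
    & (metal_poor["flag_fe_h"] == 0)
    & (metal_poor["flag_mg_fe"] == 0)
)
clean = metal_poor[good]

print(len(metal_poor), len(clean))  # of the order of 24 817 and 6 618 stars
```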
We do not apply further cuts on abundance errors, but let us mention here that for [Fe/H] and [Mg/Fe] the selected sample has a mean error of \(\pm 0.09\) dex, in both cases, with a dispersion of 0.03 dex for [Fe/H] and a dispersion of 0.04 dex for [Mg/Fe].

### Orbital parameters

We computed stellar orbits using galpy\(^{2}\), a galaxy dynamics tool written in Python that provides a range of gravitational potentials for use in calculations of orbital and kinematic parameters (see Bovy, 2015). One of the accepted input formats consists of an astropy (Astropy Collaboration et al., 2018) object made of the right ascension (\(\alpha\)), declination (\(\delta\)), proper motions (\(\mu_{\alpha}^{\ast}\) and \(\mu_{\delta}\)), radial velocity (\(v_{\rm rad}\)), and distance (\(d\)) of each star in any epoch. The vast majority of the sample selected as described above has a fractional error in the parallax value of 20% or less. We decided not to remove these few stars with larger errors, as they are not present in numbers that can potentially bias the results.

Footnote 2: Available at https://docs.galpy.org/en/

The orbits were integrated for 13 Ga assuming the Galactic potential determined by McMillan (2017). To estimate the uncertainties, a Monte Carlo simulation of 100 random samples was performed on all parameters, assuming that the errors have a Gaussian distribution. For our analysis, we extracted the following parameters: E\({}_{\rm n}\), the total binding energy of the orbit; J\({}_{\rm r}\), the radial action, which is associated with the orbital eccentricity; J\({}_{\rm\phi}\), the azimuthal action, which is related to the rotation around the Galactic Centre; J\({}_{\rm z}\), the vertical action, which is related to how far the star moves from the Galactic plane; and \(e\), the eccentricity of the orbit. We adopted a frame of reference in which L\({}_{\rm z}\) = J\({}_{\rm\phi}\). Figure 2 shows diagrams that use these parameters and are commonly used to classify stars into the different Galactic stellar populations. The top panel displays the Lindblad diagram (L\({}_{\rm z}\times\) E\({}_{\rm n}\)). The bottom panel displays the Toomre diagram with the stellar space velocities in Cartesian coordinates (\(V_{Y}\) is the component in the direction of Galactic rotation, \(V_{X}\) the component in the radial direction, and \(V_{Z}\) the component in the direction of the Galactic north pole). From these plots, we can see that the sample still contains a significant number of stars with disc-like parameters in prograde orbits. The plume-like structure around zero net rotation, which has been associated with GE (Helmi et al., 2018), is also visible.

## 3 Analysis methods

For our analysis, we sought a way to divide and explore the different stellar populations and/or substructures without resorting to boxes or straight-line cuts in the parameter space. Our aim is to obtain a global overview of the observed distribution functions of the relevant stellar properties. We note that it is not possible to obtain information on the absolute distributions of these properties, since we do not attempt to correct for the selection functions of the GALAH and _Gaia_ surveys, which are the sources of the data we use. We also note that the constraints we applied (like the flags in chemical abundances) introduce additional biases that would need to be taken into account if one wants to recover the absolute distributions.
In this first effort, we simply aim to investigate whether there are broad characteristics that can help to differentiate the stellar groups. To some extent, we also want to avoid the possibility that our prior knowledge introduces biases in our investigation of the sample. Thus, we have looked for unsupervised machine learning techniques that would help the sample itself tell how it should be best divided.

Figure 2: _Top_ - Lindblad diagram with our sample stars in gray. The Sun is included to serve as reference. _Bottom_ - Toomre diagram with our sample stars in gray. The yellow circle delineates the region of stars with total velocity below 233 km s\({}^{-1}\) (McMillan, 2017). The Sun is also shown for reference.

We decided to build a chemodynamic space with the following quantities: two chemical parameters, [Fe/H] and [Mg/Fe], and two dynamic dimensions, J\({}_{\mathrm{x}}\) and J\({}_{\mathrm{y}}\), where J\({}_{\mathrm{x}}\) and J\({}_{\mathrm{y}}\) are the axes of the action map similar to that presented by Vasiliev (2019)\(^{3}\).

Footnote 3: Where J\({}_{\mathrm{x}}\) = J\({}_{\phi}\)/J\({}_{\mathrm{tot}}\), J\({}_{\mathrm{y}}\) = (J\({}_{\mathrm{r}}\) - J\({}_{\mathrm{z}}\))/J\({}_{\mathrm{tot}}\), and J\({}_{\mathrm{tot}}\) = \(|J_{\mathrm{r}}|+|J_{\phi}|+|J_{\mathrm{z}}|\).

The diagram [Mg/Fe] versus [Fe/H] has been extensively used to separate accreted from in situ stars (e.g. Buder et al., 2022; Horta et al., 2023, and references therein). It has been shown that dwarf galaxies have [Mg/Fe] versus [Fe/H] sequences where the so-called "knee", i.e. the place in terms of [Fe/H] values where the [Mg/Fe] ratio starts to decrease, occurs at lower metallicity than what is observed in Milky Way stars (Venn et al., 2004; Hasselquist et al., 2021). This is caused by a lower star formation efficiency, as the interstellar medium (ISM) cannot achieve higher values of [Fe/H] before the extra Fe contribution from Type Ia supernovae becomes important (Kirby et al., 2011; Matteucci, 2021). Although our metallicity selection does remove a good chunk of low-\(\alpha\) stars, the knee itself for accreted populations in the halo has been found between [Fe/H] = \(-1.5\) and \(-1.0\), depending on the author (e.g. Mackereth et al., 2019; Feuillet et al., 2021; Buder et al., 2022). Therefore, a low-\(\alpha\) abundance remains useful to define accreted populations. Chemical parameters were normalised as needed, so they have means and variances of the same order of magnitude. Normalised actions have been used mainly to study globular clusters and their association with halo structures (Vasiliev, 2019; Myeong et al., 2019). A map built with these quantities is useful to separate objects by the type of orbit (prograde versus retrograde; polar versus radial; in-plane versus out-of-plane). Accreted populations in radial orbits should be easily recognised in such quantities. Nevertheless, as discussed by Lane et al. (2022), because the actions are normalised, there can be a certain degree of confusion between halo stars and hotter thick-disc stars when their vertical and radial actions become of magnitude comparable to the angular momentum. Even so, thick-disc stars should still have high-\(\alpha\) abundances up to higher values of metallicity (e.g. Bensby et al., 2005; Fuhrmann, 2008; Recio-Blanco et al., 2014). From this follows our idea of combining normalised actions and the chemical parameters [Mg/Fe] and [Fe/H] in a multidimensional space.
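To make the construction of this chemodynamic space concrete, the sketch below shows one way the four quantities could be assembled into a feature matrix. It is only an illustration: the array names and the rescaling chosen for the chemical axes are assumptions, and the actions are taken to have been computed beforehand (e.g. with galpy, as described above).

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def build_feature_space(J_r, J_phi, J_z, fe_h, mg_fe):
    """Assemble the 4-D space: normalised action-map axes plus scaled chemistry."""
    # Normalised action-map coordinates (see the footnote above)
    J_tot = np.abs(J_r) + np.abs(J_phi) + np.abs(J_z)
    J_x = J_phi / J_tot            # prograde (positive) vs. retrograde (negative)
    J_y = (J_r - J_z) / J_tot      # radial- vs. vertical-action dominated orbits
    # Rescale [Fe/H] and [Mg/Fe] so all axes have comparable means and variances
    chem = StandardScaler().fit_transform(np.column_stack([fe_h, mg_fe]))
    return np.column_stack([J_x, J_y, chem[:, 0], chem[:, 1]])

# Toy call with random numbers standing in for the real actions and abundances
rng = np.random.default_rng(0)
n = 6618
X = build_feature_space(rng.normal(500, 300, n), rng.normal(0, 800, n),
                        rng.normal(100, 80, n), rng.uniform(-2.7, -0.8, n),
                        rng.normal(0.2, 0.15, n))
print(X.shape)  # (6618, 4)
```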
Our initial expectation is that this combination of quantities can break down, at least in part, the degeneracy between populations. In the first step of the analysis, we used a dimensionality reduction technique called t-distributed stochastic neighbour embedding (t-SNE). In general terms, when t-SNE is applied to data distributed in a multidimensional space, it returns a projected two- (2D) or three-dimensional (3D) map where the neighbourhood of the points is preserved. In our case, we resorted to a projection in 2D. In the second step, we used agglomerative hierarchical clustering with Ward's method (Ward Jr., 1963) on top of the t-SNE projection, to identify the different groups of neighbouring points. The methodology is described in detail below.

### t-SNE

The t-distributed stochastic neighbour embedding is a manifold learning method that uses the affinity between the data points as a probability. Like many other dimensionality reduction tools, such as the better-known principal component analysis (PCA), t-SNE is especially useful for exploring structures from an N-dimensional problem in a 2D map. In astronomy, t-SNE has been used for a number of applications: spectral reduction and classification (see Traven et al., 2017, 2020), selection or derivation of the parameters of metal-poor stars (see Matijevič et al., 2017; Hawkins et al., 2021; Hughes et al., 2022), identification of star clusters (see Kos et al., 2018; Chun et al., 2020), and exploration of properties of stellar populations (Anders et al., 2018; Queiroz et al., 2023). The algorithm works as follows. First, t-SNE computes the similarity between two N-dimensional vectors that represent the properties of two points in the parameter space, \(x_{i}\) and \(x_{j}\): \[p_{j|i}=\frac{\exp{(-\|x_{i}-x_{j}\|^{2}/2\sigma_{i}^{2})}}{\sum_{k\neq i}\exp{(-\|x_{i}-x_{k}\|^{2}/2\sigma_{i}^{2})}},\] where \(\sigma_{i}\) is a generalised uncertainty that can be manually set or estimated by the algorithm. We chose the latter option. The algorithm searches for a value that makes the entropy of the distribution over neighbours equal to \(\log~{}k\), where \(k\) is the perplexity parameter. The perplexity is related to the local number of neighbours at that point. Afterward, the similarity is made symmetric, mainly to reduce the effect of outliers: \[p_{ij}=\frac{p_{j|i}+p_{i|j}}{2N}.\] The algorithm then attempts to create a map of smaller dimension, where the two original points are now characterised by the new vectors \(y_{i}\) and \(y_{j}\): \[q_{ij}=\frac{(1+\|y_{i}-y_{j}\|^{2})^{-1}}{\sum_{k\neq m}(1+\|y_{k}-y_{m}\|^{2})^{-1}}.\] The aim is to make these two neighbourhood distributions, \(p_{ij}\) and \(q_{ij}\), match as well as possible. The location of the points on the t-SNE map is finally given by minimising the Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951): \[KL(P\|Q)=\sum_{i\neq j}p_{ij}\log\frac{p_{ij}}{q_{ij}}.\] The minimisation of the KL divergence is performed by means of a gradient descent. Note that the method emphasises local distances. It attempts to make objects that were nearby in the multidimensional space also end up close by in the final map, while at the same time trying to separate objects that were far apart.
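The projection step itself is available in scikit-learn. A minimal sketch could look as follows; the feature matrix and random seeds are illustrative stand-ins, while the perplexity value and the number of repeated realisations anticipate the choices discussed below.

```python
import numpy as np
from sklearn.manifold import TSNE

# X stands for the (n_stars, 4) chemodynamic feature matrix built earlier;
# here a random stand-in is used so the snippet runs on its own.
rng = np.random.default_rng(1)
X = rng.normal(size=(6618, 4))

# Repeat the embedding with different seeds: each realisation gives a
# different 2D map, but local neighbourhoods should be preserved.
maps = []
for seed in range(50):
    tsne = TSNE(n_components=2, perplexity=35, init="pca", random_state=seed)
    maps.append(tsne.fit_transform(X))
```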
Because t-SNE emphasises local neighbourhoods in this way, it is important to keep in mind that the distances in the final t-SNE map are not physically connected to the original values in the multidimensional space (i.e., the distances between the points in the final map are not linear combinations of the initial distances). The goal is simply to maintain objects with similar properties close to each other while, at the same time, separating those with very different properties. An advantage of t-SNE is that its efficiency does not depend on the density of the points, since it automatically adjusts how clumpy the projection will be. This is summarised in the perplexity parameter, which can be thought of as the average number of neighbours that one data point has. For optimal use of t-SNE, Wattenberg et al. (2016) recommend exploring the perplexity parameter and how it affects the reduction in dimensionality. After performing these tests, we set the perplexity at 35. An example of a 2D map obtained for our sample of metal-poor stars is presented in Fig. 3. The Sun has been included in the analysis to help with the visualisation. On this map, one can immediately notice structures at different levels. Stars that are clumped closely together around the Sun can be expected to show properties of disc stars (i.e., prograde circular orbits close to the Galactic plane). Metal-poor halo stars will be positioned in another region of the map. Indeed, the two main islands that can be visually identified in this figure seem to be related to a division between disc and halo stars (despite our metallicity cut, some important contamination by disc stars remained). It is important to note here that the position of the points and the density of the regions on the map change each time the algorithm is run. t-SNE preserves neither the global structure nor the density structure. The individual position of each point or the shape of each clump is not important. In the t-SNE results, the important factor is not how dense an area is on the final map, but the local structures that remain in the final map. As we stressed above, the goal is to keep similar data points together and to drive dissimilar data points away from each other. To put it another way: as t-SNE tries to agglomerate similar points together, distance loses its usual meaning. Two close points are similar, but the distance is not a direct measure of similarity or dissimilarity. The density and sizes of the clumps are adjusted mainly as a function of the perplexity parameter. To take some of the variation introduced by t-SNE into account, we performed 50 different realisations of the 2D map of our sample. We use these realisations to derive some statistics of how the stellar populations can be divided. For these reasons, for the next step of the analysis, we decided not to use clustering methods based on density (such as DBSCAN and its variations like HDBSCAN, see chapter 4 of Wattenberg et al. 2016) as they could in principle produce misleading results. Another problem with DBSCAN and its variations is shown in Ou et al. (2023). As these authors discuss, HDBSCAN results are unstable when the points are resampled by their uncertainties. We actually performed a few tests using HDBSCAN but finally decided to apply a different approach to separate the structures seen on the t-SNE map. That is what we will describe next.

### Hierarchical Clustering

To separate the groups (or clusters) in the t-SNE map, we used hierarchical clustering.
Hierarchical Agglomerative Clustering (HAC) tries to cluster objects by a distance, starting from the closest points and progressively including the farther ones, until all points have been grouped together. During this process, the minimum distance needed to identify a group is recorded. Even though, as we highlighted above, distances have no physical meaning on a t-SNE map, in our tests we found this to be the method that could best separate local structures for further exploration. Several different metrics can be used to determine the type of distance to be used in HAC, from Euclidean to Mahalanobis\(^{4}\), for example. The method proposed by Ward Jr. (1963), which is the one we use here, consists of passing the Euclidean distances between the points through the Lance-Williams formula. This is done to compute the distances between each pair of clusters using an unstructured linkage criterion; these distances are then used to decide which clusters to merge next (i.e., to determine which are the closest clusters). In our case, a structured Ward hierarchical clustering with a k-Nearest Neighbour linkage was used\(^{5}\). We decided to go for this approach, as it better preserves the structure between the data points. The method was implemented using the Python module scikit-learn (Pedregosa et al. 2011).

Footnote 4: A distance, in multi-dimensional space, measured with respect to a centroid (such as the mean) of a distribution of points.

Footnote 5: An example of structured and unstructured Ward’s hierarchical clustering can be seen in the scikit-learn documentation.

To evaluate the number of clusters in which to divide the data, we used a dendrogram generated from the HAC (Fig. 4). The vertical axis of this diagram shows the Euclidean distances between the clusters. The horizontal axis separates the number of groups that were generated at each distance. Here, we show the groups up to the third level of division. The first-level division appears to recover a separation between what seems to be halo- and disc-dominated populations. We note here that deciding which is the optimal number of groups in which the sample should be divided is subjective, since the method itself does not provide a metric that can be used to judge the goodness of a subdivision with respect to another.

Figure 4: Dendrogram of the Ward hierarchical clustering for one of the 50 t-SNE realisations. In orange are the clusters that can be associated with the halo and in green the clusters with stars belonging to the disc.

Figure 3: Reduced 2D t-SNE map for our sample of metal-poor stars. The projected dimensions themselves are not connected to the physical quantities used in the analysis. This is one example from the 50 realisations of the 2D maps that were created in our analysis.

We decided to explore a division that selects 16 clusters, as it seems to divide the t-SNE map with enough granularity for an exploratory study. An example of the distribution of the 16 groups on the t-SNE map is shown in Fig. 5. The different colours in this plot are used to show how the clustering by HAC separates the clumps of stars. The Lindblad diagram showing the same divisions is shown in Fig. 6. At first glance, it seems that the method divided the stars following some pattern in the E\({}_{\rm n}\) and L\({}_{\rm z}\) diagram. Nevertheless, it is possible to see that there are stars of several clusters occupying the same E\({}_{\rm n}\) by L\({}_{\rm z}\) region (particularly in the disc region).
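The clustering step described in this section could be implemented along the following lines with scikit-learn. This is a sketch: the number of neighbours used to build the connectivity graph is an assumption, since only the use of a k-nearest-neighbour linkage (and the choice of 16 clusters) is specified above.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import AgglomerativeClustering

# Y stands for one of the 2D t-SNE realisations (random stand-in here).
rng = np.random.default_rng(2)
Y = rng.normal(size=(6618, 2))

# Connectivity graph that turns Ward's method into its "structured" variant;
# the choice of 10 neighbours is illustrative.
connectivity = kneighbors_graph(Y, n_neighbors=10, include_self=False)

ward = AgglomerativeClustering(n_clusters=16, linkage="ward",
                               connectivity=connectivity)
labels = ward.fit_predict(Y)
print(np.bincount(labels))  # number of stars assigned to each of the 16 clusters
```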
From these diagrams, it is apparent that the separation we obtained is not simply a matter of cutting the sample at certain values of L\({}_{\rm z}\). For the purposes of this paper, we will focus on the discussion of the clusters identified in the halo part of the 2D t-SNE map (the "island" to the right of Figs. 3 and 5). Clusters belonging to this island were separated and selected in each of the 50 t-SNE projections. This exercise was useful in giving an idea of the probabilities that each star is associated with a certain structure on the t-SNE map. However, it is not the case that in all 50 t-SNE maps, the halo island is divided into six groups (as is the case for the map shown in Fig. 5). In our analysis, the number of clusters found on the halo island ranged from four to seven.

## 4 Discussion

The accreted stellar population that is now generally recognised as GE was established as the result of a major merger in the works of Helmi et al. (2018) and Belokurov et al. (2018), thanks to _Gaia_ DR2 data (Gaia Collaboration et al., 2018). In fact, data from _Gaia_ and spectroscopic surveys had stimulated several other efforts aimed at better characterising Galactic halo stars that were in progress in parallel (Deason et al., 2018; Fernandez-Alvar et al., 2018; Hayes et al., 2018; Haywood et al., 2018). However, one should note that even before that, hints of this population had been found due to the low abundance of \(\alpha\) elements, retrograde motions, peculiar kinematics, or eccentric orbits of its stars (e.g. Carney et al., 1996; Majewski et al., 1996; Chiba and Beers, 2000; Gratton et al., 2003). That such stars could have an origin in the accretion of a dwarf galaxy had already been suggested (e.g. Gilmore et al., 2002; Brook et al., 2003). Several authors have investigated different ways to select stars with a high probability of belonging to the GE merger (e.g. Massari et al., 2019; Naidu et al., 2020; Feuillet et al., 2021; Buder et al., 2022; Carrillo et al., 2023). It is worth mentioning that the Massari et al. (2019) selection was developed for studying globular clusters, not for selecting field stars as is the case for the other references. We start to explore our sample by trying to define which of the halo groups most likely corresponds to what has been defined in the literature as the GE merger. For that, we chose among the groups identified in the t-SNE map the one in which the distribution of stars shows close to zero angular momentum and predominantly radial motion, characteristics generally attributed to this population. That is the main decision in our exploratory study where some prior knowledge was used. This selection was repeated in each of the 50 projections performed with t-SNE. Even if the number of clusters on the halo island changed (from a minimum of four to a maximum of seven), we could always define the probable GE-dominated group as the cluster with stars that have an average angular momentum close to zero and predominantly radial motions. The number of stars included in this cluster varied from projection to projection. Therefore, we computed statistics from these selections. What we finally identify as our GE-dominated population is made up of 317 stars that appeared in this group on at least 80% of the t-SNE maps. Before discussing the properties of the GE-dominated group, as an exercise, we compared our selection of GE stars with those selected using the criteria adopted by the different authors.
We selected stars from our sample of 6 618 metal-poor stars using the criteria of Massari et al. (2019), Naidu et al. (2020), Feuillet et al. (2021), Limberg et al. (2022), and Horta et al. (2023). These multiple selections are compared in an example t-SNE map in Fig. 7. The colours are over-plotted on top of each other, but the samples overlap, like a Matryoshka doll. Interestingly, some of the GE selections even enter the island which we generically associate with the stellar populations of the disc. Our own selection, adopting an 80% (or even a 60%) probability cut, appears to be the most restrictive, returning a sample concentrated in a corner of the map. Most importantly, the figure shows that at least a part of our GE sample would also be selected using many of the other criteria. However, it seems important to note here that for all criteria in the literature, there are stars on the t-SNE map that neighbour probable GE members, but that were not selected by them as part of GE. This, of course, reflects the fact that in the various works mentioned above, properties different from those we used to build the t-SNE map were used to select GE stars. The question of how to select the purest possible sample of GE stars remains open. We refer to Buder et al. (2022) for an extensive discussion on this topic using observational data and to Lane et al. (2022) for a discussion using simulated data (see also Carrillo et al. 2023). These last two references indeed find that using actions can return a sample of accreted stars with high purity, at the expense of completeness. It would be interesting to check, using the same simulations, the impact on purity and completeness of using actions and chemistry together for a selection of accreted stars. In the following sections, we explore in detail the stellar content of the groups defined in our analysis.

Figure 5: The same example projection of Fig. 3 with the clusters produced by agglomerative clustering.

Figure 6: Lindblad diagram for the projection of Fig. 3 with the clusters produced by agglomerative clustering, with the same colours as Fig. 5. The boxes correspond to the selection criteria for stars belonging to Sequoia and _Gaia_-Enceladus defined in Massari et al. (2019).

### General properties of the GE-dominated group

The stars selected as part of the GE-dominated cluster are depicted in dynamical diagrams in Fig. 8. In the Lindblad diagram (left panel of the figure), the selected stars form a plume around L\({}_{\rm z}\) = 0 that grows wider the less bound the stars become. A certain preference for stars with slightly prograde motions is apparent. This is confirmed on the action map (right panel). The stars occupy the corner of radial orbits with an asymmetry toward prograde motions. This in itself already suggests that we might be looking at a mixture of accreted and (heated) thick disc stars (see e.g. the discussion on the action map in Lane et al. 2022). A Kiel diagram of the stars is shown in the left panel of Fig. 9. For illustration, the plot includes isochrones from the MESA Isochrones and Stellar Tracks (MIST) database (Dotter 2016; Choi et al. 2016). The age-metallicity combinations of the isochrones were chosen to reproduce the GE age-metallicity relationship derived in Giribaldi & Smiljanic (2023). There is a general agreement between the parameters and the isochrones, at least at the qualitative level expected for a population dominated by GE.
The exception is the region of the turn-off, where the stars have cooler temperatures than expected. This problem with the \(T_{\rm eff}\) values for turn-off stars was already shown in Giribaldi & Smiljanic (2023), where the GALAH values were found to be too cool by 200-300 K, on average, compared with the \(T_{\rm eff}\) values obtained using the infrared flux method (Casagrande et al. 2021), which in turn agree with the accurate \(T_{\rm eff}\) scale obtained in Giribaldi et al. (2021). A histogram with the metallicity distribution for these stars is presented in the right panel of Fig. 9. The mean metallicity value of our sample is \(\rm[Fe/H]=-1.29\) dex (median of [Fe/H] = \(-1.26\) dex). This value is very similar to the mean metallicity of GE stars derived in other works, which was found to mostly be between \(-1.15\) and \(-1.3\) (Hayes et al. 2018; Mackereth et al. 2019; Matsuno et al. 2019; Das et al. 2020; Feuillet et al. 2021). Our GE-dominated sample contains a good number of stars with metallicity as low as \(\rm[Fe/H]=-2.0\), with a few objects reaching \(\rm[Fe/H]=-2.7\). Other works have shown that the metal-poor tail of GE extends to metallicities below \(\rm[Fe/H]=-3.0\) (e.g. Das et al. 2020; Naidu et al. 2020; Monty et al. 2020; Bonifacio et al. 2021; Cordoni et al. 2021; Kielty et al. 2021; Giribaldi & Smiljanic 2023). However, several other works concentrate the discussion mostly on low-\(\alpha\) stars that can be easily identified when \(\rm[Fe/H]\geq-1.5\) (e.g. Montalban et al. 2021; Myeong et al. 2022) and miss the metal-poor tail that we include here.

### Accreted and in situ stars in the GE-dominated group

Figure 10 depicts the GE-dominated group in two chemical diagrams now commonly used to separate accreted stars from in situ formed stars, \(\rm[Mg/Fe]\times[Fe/H]\) and \(\rm[Mg/Mn]\times[Al/Fe]\) (see Hawkins et al. 2015; Das et al. 2020). We note here that the number of stars with good Al and Mn abundances in this group (94) is much smaller than the number of stars with good Mg abundances (327). Lower sequences of the [\(\alpha\)/Fe] and [Al/Fe] ratios as a function of [Fe/H] are generally seen in stars of dwarf galaxies when compared to Milky Way stars (Tolstoy et al. 2003; Venn et al. 2004; Kirby et al. 2010; Vargas et al. 2013; Hasselquist et al. 2021). Because of that, such characteristics are also expected to be seen in accreted stars. The GE-dominated group includes only stars of high [Mg/Mn] (i.e., rich in \(\alpha\) elements, which are supernovae (SNe) type II products, and poor in Mn, which is mainly a SNe type Ia product). However, they span a range in [Al/Fe] ratios, from \(-0.5\) to \(>+0.5\), indicating a possible mixture of in situ and accreted stars. Aluminium is produced mainly in non-explosive hydrogen burning (Arnould et al. 1999) and the evolution of [Al/Fe] with metallicity can be explained by Al being mainly returned to the ISM by SNe II, with little contribution from SNe Ia (Kobayashi et al. 2020). As stars in dwarf galaxies do not show high values of [Al/Fe] (Hasselquist et al. 2021), such stars can be interpreted here as probably belonging to a heated population originating from the Galactic (proto) thick disc. In Fig. 10, we separate the possible in situ and accreted stars using the definitions proposed by Das et al. (2020). Of the 94 stars shown in the right panel of Fig. 10, 43 (45.7%) seem to have an in situ origin. In fact, when looking at \(\rm[Mg/Fe]\times[Fe/H]\) (left panel of Fig.
10), we see that the stars with high [Al/Fe] mostly have a high [Mg/Fe] ratio, which is closer to expectations of in situ populations. The presence of this in-situ contamination is consistent with what is seen on the action map (Fig. 8), as discussed above. Our selection of probable accreted GE stars displays a sequence with lower [Mg/Fe] values relative to the rest of the metal-poor sample, even for metallicities down to \(\rm[Fe/H]=-2.0\) (despite the visible large scatter in both groups of stars). We see a difference of about 0.1 dex between the GE stars and the remaining sample, with the accreted GE sample having a mean [Mg/Fe] of 0.14(12) dex. The [Mg/Fe] knee is not fully obvious to the eye (because of the metallicity cut in our sample), but there is a hint of its presence between [Fe/H] = \(-\)1.2 and \(-\)1.0. This agrees with, for example, Mackereth et al. (2019) and Horta et al. (2023), who found the knee around [Fe/H] \(\sim\)\(-\)1.2 dex using APOGEE data (but see also Monty et al. 2020, who found the GE knee at lower metallicity, [Fe/H] \(\sim\)\(-\)1.6, using a different sample of stars). In Fig. 10 we can see that some of the stars labelled as in situ contaminants in the GE-dominated cluster also display low [Mg/Fe] ratios. This dispersion can be attributed to the errors in the abundances, creating a large scatter in the points. In any case, this clearly shows that in this sample a simple cut in the diagram [Mg/Fe]\(\times\)[Fe/H] is not enough to isolate stars of accreted origin. Moreover, we are left to wonder whether the in situ contamination abruptly ends at the straight cut apparent in the right panel of Fig. 10. In fact, the observation that stars in dwarf galaxies display low values of [Al/Fe] says nothing about the behaviour of this ratio in metal-poor stars of the Milky Way. Chemical evolution models seem to show that low [Al/Fe] values are a general feature of metal-poor stars and could also be present in stars formed in the Milky Way (Kobayashi et al. 2020). Therefore, it is not clear what the true level of in situ contamination is, even in the stars shown in green in the right panel of Fig. 10, which we assume to be most likely of accreted origin. To advance in this discussion and understand if there are chemical differences between the in situ and accreted stars with [Fe/H] \(\lesssim\)\(-\)1.2, we clearly need: i) stellar abundances of higher precision and ii) guidance from chemical evolution models. At the moment, we are forced to conclude that even a combination of chemical and dynamical parameters is not able to fully separate a pure sample of accreted origin, at least in the corner of the parameter space where we selected our GE-dominated cluster. Before discussing the abundances of other chemical elements, we first turn our attention to other groups of halo stars separated in the t-SNE maps.

Figure 7: t-SNE map comparing the selection of _Gaia_-Enceladus stars in the literature with ours. Our selection of stars with a probability of at least 80% of belonging to GE is shown in blue. The stars that would be selected if we relax the criterion to 60% are shown in yellow. The stars in green, purple, pink, salmon, and brown were selected according to the works of Massari et al. (2019), Naidu et al. (2020), Horta et al. (2023), Feuillet et al. (2021), and Limberg et al. (2022), respectively.

Figure 8: Diagrams showing how frequently each star of our sample was selected in the group we associate with the _Gaia_-Enceladus merger.
_Left_ - Lindblad diagram (L\({}_{\rm z}\) by E\({}_{\rm n}\)). _Right_ - the action map. Labels indicate the type of orbit that can be found in each region of the map.

Figure 9: _Left_ - Kiel diagram of the GE-dominated group with MIST isochrones for [Fe/H] = \(-\)0.8, \(-\)1.0 and \(-\)1.5 dex with an age of 11.0 Ga and [Fe/H] = \(-\)2.0 dex with an age of 12.6 Ga. _Right_ - Normalised metallicity histogram comparing the GE-dominated group and the entire GALAH sample with [Fe/H] \(\leq\)\(-\)0.8 dex.

### Other halo groups

To continue exploring our sample of metal-poor stars, we defined three additional clusters from the t-SNE maps. What we shall henceforth call the prograde cluster is defined as the one that has a mean positive L\({}_{\rm z}\) value just higher than the GE cluster. The (intermediate) retrograde group is the one that has a mean negative value of L\({}_{\rm z}\) that is just lower than the one of the GE cluster. The most retrograde group is the one with the most extreme negative mean value of L\({}_{\rm z}\). The stars assigned to these groups are those that appear in the same cluster in at least 80% of the 50 t-SNE maps. The GE-dominated cluster is the largest with 327 stars. The most retrograde, prograde, and retrograde clusters have 253, 198 and 176 stars, respectively. In Fig. 11 the kinematic and dynamic properties of all four groups of stars are compared in several different diagrams. The average properties of each group are summarised in Table 1. In the Lindblad and Toomre diagrams (top row of Fig. 11), the most retrograde group is in the same region as the Thamnos substructure (Koppelman et al., 2019), although it also shows a tail toward very retrograde, loosely bound stars (top left in the figure) entering the region identified as the Sequoia merger (see Myeong et al., 2018). We do not attempt to separate both substructures here, as the probable Sequoia tail is poorly populated. In both the Lindblad and Toomre diagrams, the other three clusters (at least partially) fall into regions that other authors might use to define GE (Helmi et al., 2018; Massari et al., 2019). The prograde cluster, however, has most of its stars in a region of the Toomre diagram that would be used to define a kinematic thick disc. These stars would end up excluded from most discussions of accreted stars. As the numbers in Table 1 show, there is a decrease in the mean orbital energy when moving from the prograde to the most retrograde group (although the dispersion is always high). The average eccentricity of our GE-dominated cluster is remarkably high, with a mean value of 0.86 (left panel in the middle row of Fig. 11). Both the prograde and retrograde clusters also have high average eccentricity values, 0.80 and 0.76, respectively. High eccentricity is one of the characteristics often used to separate samples of GE stars (e.g. Naidu et al., 2020). Typical values for this separation are 0.7 or 0.8. Therefore, such a cut could end up including many of the stars in the prograde and retrograde clusters as part of GE. However, we remark that eccentricity is often not used by itself, and only after some sample cleaning has been applied to remove disc stars and possibly other structures. It is interesting that the prograde cluster tends to have a somewhat higher mean eccentricity than the retrograde one. This could indicate a higher contamination from accreted stars or be caused by the heating of the disc by mergers.
In the velocity plot (right panel, middle row), it is possible to see that the GE-dominated cluster is indeed the one that dominates in the regions of extreme radial motions, \(|V_{R}|\geq 200\) km s\({}^{-1}\) (and is so by construction). The prograde and retrograde clusters share a distribution around \(V_{\phi}\sim 0\) km s\({}^{-1}\), but the tails at extreme \(V_{R}\) values are less prominent (in particular for the prograde cluster). In terms of eccentricity and velocities, the most retrograde cluster is very different from the others. Eccentricity values vary from about 0.2 to 0.6 (with an average of 0.42). Myeong et al. (2019) found Sequoia stars to have typical eccentricity values around 0.6. This again suggests that the number of Sequoia stars in our very retrograde cluster is small. Thamnos, on the other hand, was found to have lower eccentricity values than GE (Koppelman et al., 2020), with values below \(\sim\) 0.65 (Kordopatis et al., 2020; Horta et al., 2023). It seems likely that this cluster is dominated by Thamnos stars. However, our most retrograde cluster has a smaller secondary peak in its eccentricity distribution around 0.2, with a few stars with smaller values. A very retrograde group of stars with thick disc chemistry was reported by Koppelman et al. (2019), but was not discussed in detail. Those stars could be the same as the ones in this peak with small eccentricity values. They are not necessarily part of the disc, but could be part of the inner halo with motions confined to the disc region. The bottom row of Fig. 11 shows plots of \(Z_{\rm max}\) as a function of eccentricity (left) and of the square root of the radial action as a function of angular momentum (right).

Figure 10: In gray, the stars selected with [Fe/H] \(\leq-0.8\) are shown. The stars of the GE-dominated cluster are shown in other colours (347 stars in the left panel; 94 stars in the right panel). Black crosses are (43) stars with high [Al/Fe] ratios and thus are likely of in situ origin. Green star symbols are objects with low [Al/Fe] ratios or without Mn and/or Al abundances of good quality. The latter group is likely dominated by stars of accreted origin. _Left_ - The [Mg/Fe] by [Fe/H] diagram. _Right_ - The [Mg/Mn] by [Al/Fe] diagram.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Parameter & GE & Prograde & Retrograde & Most Retrograde \\
\hline
L\({}_{\rm z}\) (km s\({}^{-1}\) kpc) & 78(274) & -341(255) & -260(185) & -821(428) \\
E\({}_{\rm n}\) (10\({}^{5}\) km\({}^{2}\) s\({}^{-2}\)) & -1.7065(2358) & -1.5473(2518) & -1.7290(1963) & -1.7602(1925) \\
J\({}_{\rm r}\) (km s\({}^{-1}\) kpc) & 1080(566) & 661(484) & 523(275) & 145(148) \\
J\({}_{\rm z}\) (km s\({}^{-1}\) kpc) & 57(485) & 59(56) & 174(169) & 175(242) \\
Eccentricity & 0.86(5) & 0.80(5) & 0.76(5) & 0.42(12) \\
{[Fe/H]} & -1.23(35) & -1.20(34) & -1.12(31) & -1.57(41) \\
{[Mg/Fe]} & 0.16(14) & 0.18(12) & 0.15(14) & 0.20(13) \\
Number of stars & 317 & 198 & 176 & 253 \\
\hline
\multicolumn{5}{c}{\dots} \\
{[Eu/Fe]} & 0.49(19) & 0.48(27) & 0.46(15) & 0.37(23) \\
Number of stars & 137 & 74 & 98 & 62 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Chemo-dynamical parameters for the halo groups.

In the latter plot, the clusters are all well separated, which simply reflects the way they were defined in the first place. Our GE-dominated cluster matches the regions that Feuillet et al. (2021) and Matsuno et al. (2021) use to find GE stars. The most retrograde cluster also matches the position of Sequoia stars in Feuillet et al.
(2021) but, as we discussed above, this cluster is mostly dominated by stars from Thamnos. In the former plot, we can see that most of the sample, in the four groups, tends to be inside the inner halo, with orbits that take the stars out to at most 10 kpc from the Galactic plane. Li et al. (2018) report a scale height of the thick disc component of the Galaxy of around 2.76 kpc. Interestingly, the stars of the most retrograde group are the ones that are concentrated mainly on typical disc heights, while the others show a tendency of higher dispersion. Stars from the GE-dominated cluster have a clearer tail to higher Z\({}_{\rm max}\) distances than the stars of the prograde and retrograde clusters. There are very few stars with Z\({}_{\rm max}\gtrsim 20\) kpc, which are most likely part of the outer halo (Carollo et al., 2007, 2010). The number of these outer halo stars tends to increase for higher values of eccentricity.

Figure 11: In all panels, the GE is shown in light green, the prograde halo group in blue, the retrograde group in light brown, and the most retrograde group in violet. _Top Left_ - The Lindblad diagram (L\({}_{\rm z}\)\(\times\)E\({}_{\rm n}\)). _Top Right_ - The Toomre diagram. _Middle Left_ - Eccentricity as a function of the galactocentric distance (R\({}_{\rm gal}\)). _Middle Right_ - Cylindrical velocity components, V\({}_{\phi}\) as a function of the radial component V\({}_{R}\). _Bottom Left_ - Maximum distance from the Galactic plane (Z\({}_{\rm max}\)) as a function of eccentricity. The dashed line at Z\({}_{\rm max}\) = 2 kpc indicates the scale height of the thick disc (Li et al., 2018). _Bottom Right_ - The square root of the radial action (\(\sqrt{J_{\rm r}}\)) as a function of the orbital angular momentum L\({}_{\rm z}\).

The metallicity distributions of the three additional clusters are shown in Fig. 12. The similarity between the GE, retrograde, and prograde clusters is also visible here (see the top row of Fig. 12 compared to the left panel of Fig. 9). The one cluster that seems to have distinct properties is the very retrograde one (bottom panel in Fig. 12), which is clearly more populated in metal-poor stars (as also discussed elsewhere, e.g., Kordopatis et al., 2020; Horta et al., 2023). Let us now turn our attention back to the chemical abundances to investigate whether there are more similarities (or differences) among these clusters.

### Accreted and in situ stars in the other halo groups

Figures 13, 14, and 15 show the diagrams of [Mg/Fe] versus [Fe/H] and [Mg/Mn] versus [Al/Fe] for the prograde, intermediate retrograde, and most retrograde groups, respectively. Again, we note that requiring good abundances of Mn and Al decreases the size of the samples. For the prograde group, we found that 47 of the 78 stars (\(\sim\) 60%) with Al and Mn abundances are probably of in situ origin due to their high values of [Al/Fe] (Das et al., 2020). For the intermediate retrograde group, 39 of 84 stars (\(\sim\)46%) are possibly in situ. For the most retrograde group, 46 of 66 stars (\(\sim\)70%) seem likely to be of in situ origin. Although the number of in situ stars should probably decrease for groups that are increasingly more retrograde, we find that the most retrograde group is the one with the largest fractional in situ contamination. Nevertheless, it is hard to judge the significance of this finding, as it is also affected by the data quality and by the capability of detecting and analysing the Al lines in these stars.
In any case, we do clearly find that the in situ contamination is important across the whole dynamical parameter space, accounting for about 50% or more of the stars with measured Al and Mn abundances. This in situ contamination is probably what was identified in Giribaldi & Smiljanic (2023) and named Erebus, an old in situ component affected by the chaotic conditions in the early Galaxy. As discussed in Giribaldi & Smiljanic (2023), there is probably some relation between Erebus and the Aurora component identified by Belokurov & Kravtsov (2022). In all cases, the in situ contamination concentrates in the region of higher metallicity values ([Fe/H] \(\gtrsim-1.4\)). This is probably related to the limit of detectability of the Al and/or Mn lines. The in situ stars also tend to have values of [Mg/Fe] that are higher than for the others, although the scatter is always high. The most retrograde group is the one with stars that on average have higher values of [Mg/Fe], a feature that is probably related to its lower metallicity and the difficulty in separating accreted and in situ stars using Mg abundances in this regime. Chemical differences at the lower metallicity end are very unclear; Al and Mn abundances are mostly missing, and Mg values seem to overlap. In the sections below, we investigate whether other abundances give additional information on possible chemical differences between the groups.

Figure 12: Histograms with the metallicity distribution of the three other clusters defined in the analysis: the prograde cluster in the top panel, the retrograde cluster in the middle, and the most retrograde cluster in the bottom.

Figure 13: Black crosses are (47) stars with high values of [Al/Fe] of probable in situ origin. The blue star symbols are objects with a low [Al/Fe] ratio or without values for the Al and/or Mn abundances. _Left_ - The [Mg/Fe] by [Fe/H] diagram highlighting 198 stars belonging to the prograde group. _Right_ - The [Mg/Mn] by [Al/Fe] diagram highlighting 78 stars belonging to the prograde group.

Figure 14: Black crosses are (39) stars with high values of [Al/Fe] of probable in situ origin. The light brown star symbols are objects with a low [Al/Fe] ratio or without values for the Al and/or Mn abundances. _Left_ - The [Mg/Fe] by [Fe/H] diagram highlighting 176 stars belonging to the retrograde group. _Right_ - The [Mg/Mn] by [Al/Fe] diagram highlighting 84 stars belonging to the retrograde group.

### Europium and barium abundances

Aguado et al. (2021) analysed nine stars from GE and Sequoia, with [Fe/H] \(<-1.4\), and found that they are enhanced in r-process elements, with [Ba/Eu] \(\sim-0.6\) dex and [Eu/Fe] = 0.6-0.7. Matsuno et al. (2021), using DR3 of GALAH, also found enrichment in r-process elements in GE stars, although not at the same level as Aguado et al. (2021), with [Ba/Eu] \(\sim\) 0.0 and [Eu/Fe] \(\sim\) 0.5. New observations reported by Naidu et al. (2022) also show an Eu enhancement in GE stars, with [Ba/Eu] \(\sim-0.45\) (but the [Eu/Fe] ratios are not given). A globular cluster enriched with r-process elements (NGC 1261; [Fe/H] = \(-1.26\)) has also been associated with _Gaia_-Enceladus (Koch-Hansen et al., 2021), further supporting the idea that this system contained an environment enriched by the r-process. In the panels of Fig. 16, we present the neutron-capture ratios for our GE-dominated group, compared to the abundance distribution of the entire sample. For this discussion, we exclude from the group those in situ stars with high values of [Al/Fe]. The mean values of [Ba/Fe] and [Eu/Fe] of our GE are 0.26(42) dex and 0.52(18) dex, respectively, which implies that the GE-dominated group is enriched in elements of the r-process. This qualitatively agrees with the conclusions of previous works. However, we do not find that GE is as heavily enriched as found in Aguado et al. (2021), and find it more enriched than reported by Matsuno et al. (2021). The overall [Ba/Eu] ratio is \(-0.26\), but increases to \(-0.15\) if we consider only stars that have both Ba and Eu abundances (top left panel of Fig. 16). We can also restrict the investigation to only those stars that have [Al/Fe] \(<0\) (the numbers above include stars where Al and Mn were not detected). In this case, [Eu/Fe] remains similar, 0.48(15) dex, but [Ba/Fe] increases to 0.45(37). With this, we recover the same result as Matsuno et al. (2021), [Ba/Eu] \(\sim 0.0\). As these stars are at the high-metallicity end of the sample, what we detect is an increase in Ba produced by the s-process as the metallicity increases. Only when we look at lower metallicities do the GE stars show evidence of a purer r-process enrichment. This agrees with the findings of Aguado et al. (2021), whose sample was of lower-metallicity stars. Similar behaviour is seen in the other halo groups, except for the intermediate retrograde one (see Figs. 17, 18, and 19). If we simply remove from the groups the stars that we think are in situ because of their high [Al/Fe] ratios, we obtain [Eu/Fe] \(=0.55(29)\) and [Ba/Fe] \(=0.33(44)\), [Eu/Fe] \(=0.47(14)\) and [Ba/Fe] \(=0.39(35)\), and [Eu/Fe] \(=0.50(25)\) and [Ba/Fe] \(=0.20(41)\) for the prograde, retrograde, and most retrograde groups, respectively. The numbers change to [Eu/Fe] \(=0.49(18)\) and [Ba/Fe] \(=0.54(46)\), [Eu/Fe] \(=0.45(15)\) and [Ba/Fe] \(=0.39(21)\), and [Eu/Fe] \(=0.42(18)\) and [Ba/Fe] \(=0.49(31)\) for the prograde, retrograde, and most retrograde groups, respectively, if we focus on stars with low [Al/Fe] ratios only. Therefore, signs of an increased s-process contribution are also present in the prograde and most retrograde samples. For the intermediate retrograde sample, the different behaviour does not mean that an s-process contribution is not present. Instead, it reflects that this contribution of the s-process is already important at lower metallicities (see, e.g., the models of Kobayashi et al. 2020). This is supported by the increasing trend of [Ba/Fe] as a function of [Fe/H] seen in all groups (top right panels in Figs. 16, 17, 18 and 19). An interesting indicator to look at is the [Eu/Mg] ratio. This ratio reflects the correlation between the production sites of the two elements. A flat trend of [Eu/Mg] versus [Fe/H] shows that Eu and Mg are produced at the same rate. A positive slope indicates that there is an additional source of Eu that does not contribute much Mg. Skúladóttir and Salvadori (2020), using this indicator, concluded that Eu abundances in ultra-faint dwarf (UFD) galaxies likely come from two sources. One results in quick Eu enrichment and produces Mg at the same time (possibly SNe II). The other is a delayed contributor that produces more Eu than Mg, enriching the medium in r-process elements only (possibly neutron star mergers). The [Eu/Mg] ratios are shown as a function of [Fe/H] in the bottom right panels of Figs. 16, 17, 18, and 19.
In the GE, the intermediate retrograde, and the slightly prograde groups, we detect a flat trend of [Eu/Mg] \(\sim\)+0.4 dex, indicating that these elements come continuously from the same source. Sometimes, particularly in the most metal-poor regime, there is one star or another that is very Eu rich, driving some variation in the running mean. Nevertheless, for most of the sample, the mean [Eu/Mg] seems constant. Although the mean seems flat, the most retrograde group appears to separate into two populations with distinct [Eu/Mg] values. A metal-poor part has [Eu/Mg] \(\sim\) +0.4 dex, similar to what is seen in the other groups, while the other has [Eu/Mg] \(\sim\) 0 dex and [Fe/H] \(\geq-1.5\) dex. This is also seen in the [Eu/Fe] versus [Fe/H] plot (bottom left of Fig. 19). There is one group of stars that seems to remain Eu rich in the whole metallicity range, but another shows a decreasing [Eu/Fe] ratio with increasing metallicity. This latter population, with [Eu/Mg] \(\sim\) 0 dex and [Fe/H] \(\geq-1.5\) dex, seems to match the behaviour of the in situ population better. That we see a flat [Eu/Mg] ratio among stars that are likely accreted, in all of our groups, means that in these stars Eu and Mg have similar rates of production. This implies that there is no clear sign of a source that only produces r-process elements, such as neutron star mergers (NSMs). This conclusion runs contrary to the findings of other works, where such a contribution was needed (Naidu et al., 2022). We do see that the [Eu/Mg] ratio of GE stars becomes different from that of in situ stars at metallicities around [Fe/H] \(\sim-1.3\), but without a change in its [Eu/Mg] value. Assuming the age-metallicity relationship of GE derived by Giribaldi & Smiljanic (2023), this metallicity corresponds to ages around 11.0-10.5 Ga. This means that even after \(\sim\)2.0-2.5 Ga of star formation history, the nucleosynthetic contribution from NSMs did not become significant, implying perhaps longer delays for the contribution of this type of source.

Figure 15: Black crosses are (46) stars with high values of [Al/Fe] of probable in situ origin. The violet star symbols are objects with a low [Al/Fe] ratio or without values for the Al and/or Mn abundances. _Left_ - The [Mg/Fe] by [Fe/H] diagram highlighting 253 stars belonging to the most retrograde group. _Right_ - The [Mg/Mn] by [Al/Fe] diagram highlighting 66 stars belonging to the most retrograde group.

### Other chemical abundances in the halo groups

Finally, we tested for chemical differences between the halo groups using all other elements available in the GALAH DR3 catalogue. Distributions of abundances were compared using the Kolmogorov-Smirnov (KS) test. We adopted the traditional value of \(p=0.05\), Bonferroni-corrected for the number of comparisons we make, thus \(p=0.002\), for considering the distributions as different (rejecting the hypothesis that they are the same). As in the discussion above, this comparison was done for two cases: first, removing only the confirmed in situ contamination that has [Al/Fe] \(>0.0\) (and leaving inside the sample the stars without good Al abundances), and second, restricting the sample to only those objects with good Al abundances and [Al/Fe] \(<0.0\). Part of the results for the first case are shown in Fig. 20 in the form of correlation matrices. Essentially, we find that only the most retrograde group appears to be chemically distinct from the other groups.
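The abundance comparisons described above could be run along the following lines (a sketch with toy data; the element list and group labels are illustrative, while the Bonferroni-corrected threshold of p = 0.002 is the one quoted in the text).

```python
from itertools import combinations
import numpy as np
from scipy.stats import ks_2samp

# abundances[group][element] -> array of [X/Fe] values with good flags for that group
# (random stand-in data; in practice these come from the GALAH DR3 columns).
rng = np.random.default_rng(3)
groups = ["GE", "prograde", "retrograde", "most_retrograde"]
elements = ["O", "Si", "Mn", "Y", "Ba"]
abundances = {g: {el: rng.normal(0.2, 0.15, 60) for el in elements} for g in groups}

alpha = 0.002  # p = 0.05, Bonferroni-corrected for the number of comparisons
for el in elements:
    for g1, g2 in combinations(groups, 2):
        stat, p = ks_2samp(abundances[g1][el], abundances[g2][el])
        if p < alpha:
            print(f"[{el}/Fe]: {g1} vs {g2} differ (p = {p:.1e})")
```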
In particular, the most retrograde group differs from all other groups in the distributions of [O/Fe]. In addition, it differs from the prograde group in [Y/Fe]; from the GE-dominated group in [Si/Fe] and [Mn/Fe]; and from the intermediate retrograde group in [Si/Fe], [Y/Fe], and [Ba/Fe]. Otherwise, we also detect a difference in [Si/Fe] between the prograde and intermediate retrograde groups and in [Ba/Fe] between the GE-dominated and intermediate retrograde groups. For the other elements available in GALAH DR3, we found no difference between the groups.

Figure 16: Diagrams involving abundances of Ba and Eu for the GE-dominated group. In grey, the whole sample of metal-poor stars is shown. Black crosses are stars of the GE-dominated group that were found to be of in situ origin with high values of [Al/Fe]. The green symbols are the remaining stars of the group. _Top Left_ - [Ba/Eu] as a function of [Fe/H]. Dashed lines indicate the solar ratio, and values for pure contributions of the s- and r-processes as calculated by Bisterzo et al. (2014). The horizontal lines on the KDE plot mark the peak values. _Top Right_ - [Eu/Ba] as a function of [Eu/Fe]. The limits in [Eu/Fe] for defining the r-I and r-II stars proposed by Beers & Christlieb (2005) are indicated. _Bottom Left_ - [Eu/Fe] as a function of [Fe/H]. _Bottom Right_ - [Eu/Mg] as a function of [Fe/H]. The green line and the green stripe show the running mean and the scatter, respectively.

Figure 17: Same as Fig. 16 but for the prograde group. The stars shown in blue have low [Al/Fe] or lack Al and/or Mn abundances.

If we concentrate instead on those stars with good Al abundances and [Al/Fe] \(<\) 0.0, all differences disappear except for the [Mn/Fe] ratio of the most retrograde group, which becomes different from all other groups. However, one should keep in mind that sometimes the number of stars that are compared can be small, depending on the element being tested. In summary, from comparisons of both subsamples, we find that the most retrograde group is the only one with clear signs of being chemically different from the others. The abundances of the prograde, GE-dominated, and intermediate retrograde groups are very similar. In addition, we separated from each group two samples, one containing only dwarfs and the other only giants. We repeated the abundance comparisons with KS tests only among samples of the same stellar type. As discussed in more detail in Appendix A (Fig. 10), abundance differences appear only in the sample of giants. Also in this case, the differences appear only for the most retrograde group. For dwarfs, all abundances are similar. Although this seems to indicate the existence of systematic effects between dwarfs and giants in the GALAH results, the differences seen in the giants support the discussion above. We interpret these results, together with the information gathered in the previous sections, as signs that the accreted component present in the prograde, GE-dominated, and retrograde groups is the same. Therefore, our exploration seems to show no significant signs of accretion events other than GE in the region of the parameter space occupied by the three groups, at least in the sample we selected for analysis here. This conclusion could be different if, for example, the sample was selected to extend to higher metallicity values (see e.g. Myeong et al., 2022). However, the most retrograde group shows signs of being chemically distinct from the others.
We interpret this result as showing that the accreted stars present in this group possibly have a different origin. This could be attributed to a different accretion event, for example Thamnos and/or Sequoia (Myeong et al., 2018; Koppelman et al., 2019), or to a separate region of the GE progenitor, where the chemical enrichment history was distinct (Koppelman et al., 2020; Amarante et al., 2022). In that sense, we call attention to the discussion in Amarante et al. (2022), where it is argued on the basis of simulations that Sequoia could be explained as the metal-poor part of the same merger that gave origin to GE. Amarante et al. (2022) show that accreted stars that end up with very retrograde, high energy orbits tend to be more metal-poor, with a multi-peak metallicity distribution, more \(\alpha\)-rich, to have a larger spread in abundances, and to have less eccentric orbits. Although the stars in our most retrograde group are perhaps not in the same high-energy orbits, the remaining characteristics seen in the Amarante et al. (2022) simulations agree with what we detect. Finally, it is also important to note that uncertainties in the abundances might still be affecting our capability of detecting chemical differences among the groups. Therefore, additional studies that derive high-quality abundances are needed to confirm or refute our findings.

## 5 Conclusions

We performed a comprehensive analysis of the chemodynamic properties of metal-poor stars ([Fe/H] \(<-0.8\)) present in the DR3 of the GALAH survey (Buder et al., 2021). We used unsupervised machine learning methods, t-SNE and HAC, on a parameter space made of two dynamic (J\({}_{\phi}\) and J\({}_{R}\)) and two chemical ([Fe/H] and [Mg/Fe]) quantities to separate the stars into groups of similar properties.

Figure 18: Same as Fig. 16 but for the retrograde group. The stars shown in light brown have low [Al/Fe] or lack Al and/or Mn abundances.

We do not use cuts in the parameter space, but attempt to let the sample itself tell how it should be best divided. This analysis was repeated 50 times to estimate the probability that the stars belong to the groups that were identified. On the basis of this analysis, we defined four groups of halo stars. The first group was selected as the one in which the stars show an angular momentum and radial action distribution similar to what has been defined as the _Gaia_-Enceladus merger. We refer to this as the GE-dominated group. Two other groups were defined to be slightly more prograde and slightly more retrograde than the GE one. The final group was selected to be composed of stars that show the most extreme retrograde motions in the sample. We then performed a detailed exploratory study of the dynamical and chemical properties of the groups, trying to be as independent of predefined concepts as possible. The following conclusions can be drawn from this exploration.

* Our GE-dominated group has a mean metallicity of [Fe/H] = \(-\)1.29 and extends to [Fe/H] \(\sim-\)2.7. The metallicity distribution of the prograde and retrograde groups is very similar.
On the other hand, the most retrograde group has a larger fraction of more metal-poor stars;
* The GE-dominated group contains stars with the most eccentric orbits in the sample (median eccentricity equal to 0.86);
* The prograde and retrograde groups also contain stars of very eccentric orbits, but with medians of lower values, 0.80 and 0.76, respectively;
* The most retrograde group has a completely different eccentricity distribution, between 0.2 and 0.6 with a mean of 0.42. Structures identified in the same regime of dynamic properties (Thamnos and Sequoia) have already been reported to have eccentricities of about 0.6 or lower;
* Using abundances of Mg, Mn, and Al, we find an important in situ contamination in all four groups. We identify this contamination as the Erebus component defined in Giribaldi & Smiljanic (2023), which could also be related to the Aurora defined by Belokurov & Kravtsov (2022). At the high-metallicity end ([Fe/H] \(>-1.4\)), chemistry can tentatively be used to separate accreted stars from in situ ones. However, there is important scatter that sometimes blurs the division;
* At lower metallicities, it is less clear that chemistry (Mg, Al, and Mn) can separate in situ and accreted stars. However, abundances of Al and Mn are not available for a large number of stars in the catalogue. We note that chemical evolution models suggest that low-metallicity in situ stars can also have low Al abundances (Kobayashi et al., 2020), causing additional difficulties in separating stars of different origins.
* Because the chemodynamic separation between accreted and in situ stars becomes difficult, it is possible that unrecognised in situ contamination affects samples selected as being of possible low-metallicity accreted stars. This might have important consequences for works that attempt to infer the properties of the original systems that merged with the Galaxy.
* Regarding neutron capture elements, we find accreted stars from GE to be r-process rich at the low metallicity end of our sample. However, the abundance of Ba increases with metallicity, indicating that an increasing contribution of the s-process is present. The same is seen in accreted stars of all groups.
* The most retrograde group seems to contain stars from two populations with different trends in [Eu/Fe] as a function of [Fe/H]. One population of stars remains Eu rich over the whole metallicity range, just as for the accreted stars in the other three halo groups. However, the other shows a decreasing [Eu/Fe] ratio with increasing metallicity. We interpret the latter as part of an in situ contamination.
* The [Eu/Fe] ratios and in particular the flat trend of [Eu/Mg] with [Fe/H] indicate that the accreted stars were enriched by a source that produces Mg and r-process elements simultaneously (e.g. CCSNe). We do not see signs of a major contribution from a delayed source, such as NSM, that produces only r-process elements.
* We tested for differences in the abundances of all other elements available in the catalogue. We find that the accreted stars in the prograde, GE-dominated, and retrograde groups, at this regime of metallicity, do not differ among themselves. Essentially, only the stars of the most retrograde group show signs of different chemical abundances in some elements.

Figure 19: Same as Fig. 16 but for the most retrograde group. The stars shown in violet have low [Al/Fe] or lack Al and/or Mn abundances.
* On the basis of the observations above, we conclude that the current sample shows signs of at most two distinct accretion events. One that we can associate to GE, with stars distributed in the region of the parameter space occupied by three of our groups. The second, which can be associated to Thamnos and/or Sequoia, is present only in the most retrograde group of stars. Nevertheless, we call attention to the discussion in Amarante et al. (2022), which suggested that one single merger could explain the accreted stars in all four of our groups.

However, we note here that the sample we analysed does not cover regions of the Galaxy where other substructures are found, as for example, Heracles (Horta et al., 2021) in the inner Galaxy, the Cetus Polar Stream (at about 34 kpc from the Sun, Newberg et al., 2009), or the Sagittarius stream (beyond 20 kpc of the Sun, Majewski et al., 2003). Finally, we point out that to advance in the characterisation and separation of stars belonging to different merger events, chemical abundances of high quality are needed. Moreover, we believe that it is important to take into account the guidance offered by chemical evolution models when this kind of separation is attempted.

Figure 20: Results of the KS test applied to the chemical abundances of all four halo groups defined in this work. These results concern the case where the in situ contamination that has [Al/Fe] > 0.0 was removed from all groups. Brown-coloured regions indicate that the null hypothesis was not rejected (i.e., there is no difference between the groups). The green regions indicate that the null hypothesis was rejected (there is a difference between the groups). Only elements where some difference was detected are shown. For all other elements, the KS test indicates no difference between the groups.

###### Acknowledgements.

The authors thank the anonymous referee for the quick and constructive feedback. ARS and RS acknowledge the support of the Polish National Science Centre (NCN) through the project 2018/31/B/ST9/01469. ARS would like to thank RS for the kind mentorship through these difficult times. ARS thanks Riano Giribaldi, Maria Luiza Linhares Dantas, and Helio Perotton for the discussions. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA Astrophysics Data System. This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement (MLA). The Gaia mission website is [https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia). The Gaia archive website is [https://archives.esac.esa.int/gaia](https://archives.esac.esa.int/gaia). This work also used GALAH survey data; their website is [http://galah-survey.org/](http://galah-survey.org/). This research made use of Astropy ([https://www.astropy.org](https://www.astropy.org)), a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018). This research used Mesa Isochrones and Stellar Tracks (MIST) (Dotter, 2016; Choi et al., 2016).
2305.15593
Latest vertex and tracking detector developments for the future Electron-Ion Collider
The high-luminosity high-energy Electron-Ion Collider (EIC) to be built at Brookhaven National Laboratory (BNL) will provide a clean environment to study several fundamental questions in the high energy and nuclear physics fields. A high granularity and low material budget vertex and tracking detector is required to provide precise measurements of primary and displaced vertex and track reconstruction with good momentum and spatial resolutions. The reference design of the EIC project detector at the first Interaction Point (IP), which was selected from three submitted EIC detector proposals, enables a series of high precision heavy flavor measurements. Based on the EIC project detector reference design, the newly formed ePIC collaboration is developing the technical design for the EIC project detector towards its construction and operation. The reference design of the EIC vertex and tracking detector consists of the Monolithic Active Pixel Sensor (MAPS) based silicon vertex and tracking subsystem, the Micro-Pattern Gas Detector (MPGD) based gas tracking subsystem and the AC-Coupled Low Gain Avalanche Diode (AC-LGAD) based silicon outer tracker, and it has the track reconstruction capability in the pseudorapidity region of -3.5 to 3.5 with full azimuthal coverage. Further detector geometry optimization with a new magnet and technology down selection are underway by the ePIC collaboration. The latest ePIC vertex and tracking detector geometry and its performance evaluated in simulation will be presented. Details of the EIC vertex and tracking detector R$\&$D, which include the proposed detector technologies, prototype sensor characterization and detector mechanical design will be discussed as well.
Xuan Li
2023-05-24T22:01:07Z
http://arxiv.org/abs/2305.15593v1
# Latest vertex and tracking detector developments for the future Electron-Ion Collider

###### Abstract

The high-luminosity high-energy Electron-Ion Collider (EIC) to be built at Brookhaven National Laboratory (BNL) will provide a clean environment to study several fundamental questions in the high energy and nuclear physics fields. A high granularity and low material budget vertex and tracking detector is required to provide precise measurements of primary and displaced vertex and track reconstruction with good momentum and spatial resolutions. The reference design of the EIC project detector at the first Interaction Point (IP), which was selected from three submitted EIC detector proposals, enables a series of high precision heavy flavor measurements. Based on the EIC project detector reference design, the newly formed ePIC collaboration is developing the technical design for the EIC project detector towards its construction and operation. The reference design of the EIC vertex and tracking detector consists of the Monolithic Active Pixel Sensor (MAPS) based silicon vertex and tracking subsystem, the Micro-Pattern Gas Detector (MPGD) based gas tracking subsystem and the AC-Coupled Low Gain Avalanche Diode (AC-LGAD) based silicon outer tracker, and it has the track reconstruction capability in the pseudorapidity region of -3.5 to 3.5 with full azimuthal coverage. Further detector geometry optimization with a new magnet and technology down selection are underway by the ePIC collaboration. The latest ePIC vertex and tracking detector geometry and its performance evaluated in simulation will be presented. Details of the EIC vertex and tracking detector R&D, which include the proposed detector technologies, prototype sensor characterization and detector mechanical design, will be discussed as well.

Electron-Ion Collider, silicon, MAPS, AC-LGAD, vertex, track, radiation

## 1 Introduction

The Electron-Ion Collider (EIC) will utilize high-luminosity high-energy electron+proton (\(e+p\)) and electron+nucleus (\(e+A\)) collisions to precisely study the nucleon/nucleus 3D structure, help address the proton spin puzzle, search for gluon saturation and explore how quarks and gluons form visible matter inside vacuum and a nuclear medium [1]. The EIC project received CD1 stage approval from the US Department of Energy (DOE) in 2021, which officially marks the start of the EIC project. At the EIC, the proton and nucleus (mass number A = 2-238) beam energies range from 41 GeV and 100 GeV up to 275 GeV, and the electron beam energies vary from 2.5 GeV to 18 GeV. The instantaneous luminosity at the EIC is around \(10^{33-34}cm^{-2}s^{-1}\), which means the annual EIC operation can achieve around 10-100 \(fb^{-1}\) integrated luminosities. The EIC bunch crossing interval is around 10 ns. Two detectors are planned to be built at the EIC: the first one is the EIC project detector, located at the 6 o'clock position of the accelerator, and the second one, currently named detector II, will be located at the 8 o'clock position of the accelerator. Recent developments of the EIC project detector, which include the reference detector design, vertex and tracking subsystem performance, detector geometry optimization and related R&D progress, will be discussed.
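As a quick sanity check of the quoted numbers, the integrated luminosity follows from the instantaneous luminosity multiplied by the running time; the sketch below assumes roughly \(10^{7}\) s of operation per year, which is an illustrative assumption rather than a number taken from this text.

```python
inst_lumi = 1e34       # cm^-2 s^-1, upper end of the quoted range
run_time = 1e7         # s of operation per year (assumed, for illustration only)
fb_inv = 1e39          # 1 fb^-1 corresponds to 1e39 cm^-2
annual_int_lumi = inst_lumi * run_time / fb_inv
print(annual_int_lumi, "fb^-1 per year")   # -> 100 fb^-1, consistent with the quoted 10-100 fb^-1
```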
## 2 EIC project detector reference design and performance

Since the EIC project achieved CD0 phase approval in 2020, three proto-collaborations, ECCE [2], ATHENA [3] and CORE [4], have performed dedicated detector and physics studies for the EIC project detector design. The recommendation of the EIC detector review committee is that the ECCE detector design [2] be selected as the EIC project detector reference design, considering multiple factors such as detector performance, cost, risk and schedule.

### EIC project detector reference design

The selected EIC project detector reference design consists of a Monolithic Active Pixel Sensor (MAPS) [5] based silicon vertex and tracking detector, a Micro Pattern Gas Detector (MPGD) [6] based tracking detector, an AC coupled Low Gain Avalanche Diode (AC-LGAD) based Time of Flight (ToF) detector, a dual Ring-imaging Cherenkov detector (dRICH), a mirror Ring-imaging Cherenkov detector (mRICH), a Detector of Internally Reflected Cherenkov light (DIRC) PID detector, ElectroMagnetic Calorimeters (EMCal) and Hadronic Calorimeters (HCAL). This proposed detector reference design utilizes the existing Babar magnet with a maximum magnetic field of 1.4 T. It can provide precise primary and displaced vertex determination, track reconstruction, particle identification and energy measurements in the pseudorapidity region of \(-3.5<\eta<3.5\). The layout of the EIC project detector reference design is shown in the left panel of Fig. 1.

To realize the proposed physics measurements, the EIC vertex and tracking detector is required to provide fine spatial resolution for primary and displaced vertex determination; low material budgets to mitigate multiple particle scattering and improve the track momentum resolution; a sufficient number of hits for track pattern recognition; and fast timing to suppress backgrounds from the beam and neighboring collisions. A hybrid vertex and tracking detector option has been adopted in the EIC detector reference design, which consists of the next generation MAPS [8], \(\mu\)RWELL [9] and AC-LGAD based layers/disks. The high granularity thin MAPS vertex and tracking detector will provide better than 10 \(\mu\)m spatial resolution per hit, which is critical for precise vertex and track reconstruction. The \(\mu\)RWELL and AC-LGAD detectors will provide the large lever arm constraints and additional hits to improve the track finding quality. Moreover, fast timing capabilities provided by the AC-LGAD and \(\mu\)RWELL detectors will help pin down the combinatorial backgrounds for track reconstruction at the EIC. The geometry of the EIC vertex and tracking detector is shown in the right panel of Fig. 1.

Figure 1: Geometry of the EIC project detector reference design implemented in GEANT4 [10] simulation (left) and the geometry of the vertex and tracking detector of the EIC project detector reference design (right). The left part of the detector is located in the electron beam going direction and the right part is in the proton/nucleus going direction. Detailed geometry parameters are listed in Table 1, Table 2, and Table 3.

According to the detector kinematic coverage, the EIC project detector vertex and tracking detector can be further divided into three subsystems: the barrel, hadron endcap and electron endcap vertex and tracking detectors. Detailed geometry parameters of the EIC project detector vertex and tracking subsystem are shown in Table 1, Table 2 and Table 3.
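As a rough illustration of the per-hit spatial resolution requirement mentioned above, a binary pixel readout has a single-hit resolution of about pitch/\(\sqrt{12}\); the 10 \(\mu\)m pitch used below is only an assumed example value, not a parameter taken from the tables.

```python
import math

def binary_pixel_resolution(pitch_um):
    """Single-hit resolution of a binary (hit/no-hit) pixel readout: pitch / sqrt(12)."""
    return pitch_um / math.sqrt(12.0)

print(binary_pixel_resolution(10.0))   # ~2.9 um, comfortably below a 10 um per-hit target
```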
Fig. 2 and Fig. 3 show the track momentum dependent momentum resolution and the transverse Distance of Closest Approach (\(DCA_{2D}\)) resolution in the pseudorapidity regions of \(0\leq|\eta|<1\), \(1\leq|\eta|<2.5\) and \(2.5\leq|\eta|<3.5\), using the EIC project detector reference design with single charged particle simulation. Precise electron tagging is desired for Deeply Inelastic Scattering (DIS) measurements at the EIC, therefore the EIC yellow report has a more stringent detector requirement on tracking in the electron beam going direction (the negative pseudorapidity region) than in the hadron nucleon/nucleus beam going direction (the positive pseudorapidity region). Compared to the EIC yellow report detector requirements [1], the tracking performance of the EIC project detector reference design is not far away from the desired performance in the pseudorapidity region of \(0\leq|\eta|<2.5\), especially in the high momentum or high transverse momentum region. However, the tracking performance in the pseudorapidity region of \(2.5\leq|\eta|<3.5\) needs improvement to align with the EIC yellow report requirements. Similar features appear in the negative pseudorapidity regions as well [2].

Figure 2: Track momentum dependent momentum resolution of the EIC project detector reference design in the pseudorapidity regions of \(0\leq|\eta|<1\), \(1\leq|\eta|<2.5\) and \(2.5\leq|\eta|<3.5\). The tracking performance is evaluated with the 1.4 T Babar magnet. The EIC yellow report tracking requirements in the respective pseudorapidity regions are highlighted in brown dashed lines.

Figure 3: Track transverse momentum dependent transverse Distance of Closest Approach (\(DCA_{2D}\)) resolution of the EIC project detector reference design in the pseudorapidity regions of \(0\leq|\eta|<1\), \(1\leq|\eta|<2.5\) and \(2.5\leq|\eta|<3.5\). The tracking performance is evaluated with the 1.4 T Babar magnet. The EIC yellow report tracking requirements in the respective pseudorapidity regions are highlighted in brown dashed lines.

Material budgets of the EIC project detector reference design have been scanned in simulation. Fig. 4 shows the pseudorapidity dependent material budget for different detector subsystems of the EIC project detector reference design. Although the total material budgets of the MAPS based silicon vertex and tracking detector and the MPGD based tracking detector are below 5% radiation length per the EIC yellow report requirements, the material budgets of the associated service cone are around 10-15% radiation length in the pseudorapidity region of \(1<|\eta|<1.6\). These non-negligible material budgets of the service cone will impact the tracking performance in the corresponding kinematic regions, and further studies, which include detector geometry optimization and service cone routing optimization, are needed to improve the associated tracking performance.

Figure 4: Pseudorapidity dependent material budget scan of the EIC project detector reference design in simulation [2]. In addition to the active detector volume, material budgets of a service cone, which consists of the support structure, cooling components and service parts for the inner vertex and tracking detector, have been included as well.

### Heavy flavor physics with the EIC project detector reference design

A series of heavy flavor physics simulation studies [11, 12, 13] have been carried out with the detector performance of the EIC project detector reference design. The physics simulation framework consists of event generation in PYTHIA6/8 [14, 15], a smearing package for the detector response extracted from the GEANT4 simulation, and particle finding and reconstruction algorithms.
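A minimal sketch of the detector-response smearing step in such a fast-simulation chain is given below, assuming a simple Gaussian momentum smearing with placeholder resolution constants; the actual parametrization is extracted from the GEANT4 studies and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def smear_momentum(p_true, a=0.0005, b=0.005):
    """Gaussian fast-simulation smearing with sigma_p/p = (a*p) (+) b added in quadrature.
    The constants a and b are placeholders, not the EIC tracking parametrization."""
    p_true = np.asarray(p_true, dtype=float)
    sigma_rel = np.hypot(a * p_true, b)
    return p_true * (1.0 + sigma_rel * rng.standard_normal(p_true.shape))

p_reco = smear_momentum([1.0, 5.0, 20.0])   # generator-level momenta in GeV/c
```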
The electron and proton/nucleus beams have a crossing angle of 25 mrad at the EIC project detector Interaction Point (IP) to mitigate the related EIC background. This beam crossing angle, introduced in the latest EIC accelerator design, has been included in the simulation configuration. Fig. 5 shows the mass distributions of reconstructed \(D^{\pm}\), \(D^{0}\) (\(\bar{D^{0}}\)), \(B^{\pm}\), \(B^{0}\) (\(\bar{B^{0}}\)), and \(B^{0}_{s}\) (\(\bar{B^{0}_{s}}\)) with the detector performance of the EIC project detector reference design in 10 GeV electron and 100 GeV proton collisions [12]. Due to the great precision of the displaced vertex and tracking momentum resolution of the EIC reference design (see Fig. 2 and Fig. 3), clear and pronounced signals can be obtained for these reconstructed D and B mesons with only one year of EIC operation. The extracted signal over background ratios are significantly better than those values measured in heavy ion collisions, which is due to the better detector performance and lower beam/collision backgrounds at the EIC. In addition to open heavy flavor hadron reconstruction, heavy quarkonia such as \(J/\psi\) [13] and heavy flavor jets [11, 12], which have larger masses and a larger number of decay particles or constituents, have been successfully reconstructed or tagged with the EIC project detector reference design in electron+proton and electron+nucleus collisions. The unique kinematic coverage and high precision to be obtained by the future EIC heavy flavor hadron and jet measurements will provide significantly better sensitivities and new insights to explore the inner structure of the nucleon and nucleus within the poorly constrained high and low Bjorken-x regions and shed light on the mass/matter formation puzzle such as the hadronization mechanism. The precision of the proposed EIC heavy flavor measurements mainly relies on the high-granularity and low-material-budget EIC inner vertex and tracking detector based on the MAPS technology to provide precise primary and displaced vertex reconstruction and track momentum determination. The AC-LGAD based ToF detector plays an important role in providing the PID information in the low momentum region for heavy flavor particle reconstruction at the EIC.

Figure 6: Side view of the ePIC detector current design. The ePIC detector current design includes the tracking, PID, electromagnetic calorimeter and hadron calorimeter subsystems. The ePIC detector is 8.2 m long along the z axis and 5.34 m high along the y axis. It uses a new 1.7 T magnet with similar dimensions as the Babar magnet.

Figure 7: Left: geometry of the current design of the ePIC MAPS vertex and tracking detector (golden) and its service cone (purple) implemented in simulation. Right: the side view of the ePIC barrel detector, which includes the ePIC barrel MAPS vertex and tracking geometry parameters.

## 3 EIC silicon vertex and tracking subsystem technical design development

After the EIC project detector reference design selection, a new detector collaboration, ePIC, was formed in July 2022 to lead the EIC project detector technical design based on the EIC reference design. The EIC project detector is expected to start operation in 2032, and the existing Babar magnet may not be fully functional during the EIC data collection period.
Therefore, a new magnet with the same dimensions as the Babar magnet but a slightly higher magnetic field of 1.7 T has been adopted in the ePIC detector design. Several detector geometry optimizations have been performed for the vertex and tracking subsystem, the PID subsystem and the calorimeter subsystem of the current ePIC detector design. Fig. 6 shows the side view of the updated ePIC detector geometry with the new magnet, which has similar dimensions as the Babar magnet and a maximum magnetic field of 1.7 T. The ePIC detector is asymmetric along the z axis. The length in the hadron beam going direction from the primary vertex is 5.0 m and the length in the electron beam going direction is 3.2 m. The height of the current ePIC detector design is 5.34 m.

Following the EIC project detector reference design, the ePIC vertex and tracking detector still includes the MAPS inner vertex and tracking subsystem, the MPGD tracking subsystem and the AC-LGAD outer tracker. Fig. 7 shows the geometry of the ePIC MAPS vertex and tracking detector and its service cone, which are implemented in simulation. The optimized geometry parameters of the ePIC MAPS vertex and tracking detector are shown in Table 4, Table 5 and Table 6. Following the EIC project detector reference design, the ePIC MAPS barrel vertex and tracking detector still has 5 layers; however, the updated design covers a relatively larger active detector area and consists of a larger number of channels. The inner two vertex layers are moved to radii of 3.6 cm and 4.8 cm, considering the EIC beam bake out requirement (5 mm clearance from the EIC beam pipe) and the new ITS3-like sensor size limitation [8, 18, 19]. The third vertex layer radius is changed to 12 cm to provide a good lever arm for track reconstruction. As listed in Table 4, the 4th and 5th MAPS barrel layers are moved further away to improve the track momentum and spatial resolutions. The ePIC MAPS hadron endcap detector (geometry parameters listed in Table 5) is composed of 5 disks with expanded z coverage from 25 cm to 135 cm. The inner radius of the hadron endcap disks varies from 3.67 cm to 7.01 cm and the outer radius varies from 23 cm to 43 cm. Compared to the EIC project detector reference design, the ePIC MAPS electron endcap detector (geometry parameters listed in Table 6) includes one more disk and has updated longitudinal z and radius r parameters to make full usage of the allocated space. The ePIC silicon vertex and tracking detector design aims to utilize the next generation 65 nm MAPS technology, and the ePIC MAPS layers/disks assume a sensor pixel pitch of 10 \(\mu m\). The ePIC collaboration plans to use the ITS-3 type bent sensor for the inner three barrel layers to obtain low material budgets and high spatial resolution. The MAPS outer two barrel layers and endcap disks will use new flat MAPS sensors with similar features to the ITS-3 type sensor. Ongoing EIC detector R&D focuses on developing the EIC MAPS sensor design, the silicon detector readout architecture, the detector mechanical structure and service part material budget reduction.

Figure 8: Tracking performance of the current ePIC detector design in the pseudorapidity regions of \(-3.5<\eta<-2.5\), \(-1.0<\eta<1.0\) and \(2.5<\eta<3.5\). Track momentum dependent momentum resolution (top row) and transverse momentum dependent transverse Distance of Closest Approach (\(DCA_{2D}\)) resolution (bottom row) are evaluated with the new 1.7 T ePIC magnet. The EIC yellow report tracking requirements in the respective pseudorapidity regions are highlighted in blue dashed lines.
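To relate the quoted disk dimensions to pseudorapidity coverage, one can use \(\eta=-\ln\tan(\theta/2)\) with \(\theta=\arctan(r/z)\); the sketch below evaluates this for the outermost hadron endcap disk quoted above and is purely illustrative.

```python
import math

def eta_from_rz(r_cm, z_cm):
    """Pseudorapidity of a point at radius r and longitudinal position z."""
    theta = math.atan2(r_cm, z_cm)
    return -math.log(math.tan(theta / 2.0))

# Outermost hadron-endcap disk quoted in the text: z = 135 cm, r_in = 3.67 cm, r_out = 43 cm
print(eta_from_rz(43.0, 135.0), eta_from_rz(3.67, 135.0))   # geometric reach of roughly 1.9 to 4.3
```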
In addition to the MAPS vertex and tracking detector, other subsystems of the ePIC detector design and the estimated service parts have been implemented in the GEANT4 simulation. Fig. 8 shows the tracking performance of the current ePIC detector design in the GEANT4 simulation. With the optimized ePIC detector design, the track momentum dependent momentum resolution in the pseudorapidity regions of \(-1<\eta<1\) and \(2.5<\eta<3.5\) meets the EIC yellow report detector requirement, as shown in the top row of Fig. 8. Although the tracking momentum resolution of the current ePIC detector design in the pseudorapidity region of \(-3.5<\eta<-2.5\) is worse than the EIC yellow report detector requirement, further studies to reduce the detector material budgets in the electron endcap region and to apply a particle flow method with joint tracking and calorimeter information are underway. The transverse momentum dependent transverse Distance of Closest Approach (\(DCA_{2D}\)) resolution in most pseudorapidity regions meets the EIC yellow report requirements. As the EIC backgrounds, such as the beam gas and synchrotron radiation backgrounds, are still under study in simulation, the ePIC tracking detector geometry may be updated to provide a sufficient number of hits for tracking pattern recognition. The ePIC tracking performance will be further updated with new simulation samples implemented with EIC backgrounds.

## 4 EIC silicon detector R&D status

The EIC project R&D program, which focuses on the needs of detector components that are baselined in the project detector, ePIC, started in 2022 and is supported through the Brookhaven National Laboratory EIC center by the US Department of Energy. For the ePIC silicon vertex and tracking detector, the 65 nm MAPS technology has been recommended. The AC-LGAD technology has been proposed to be applied for the ePIC barrel and hadron endcap outer tracker. Alternative silicon detector technologies, such as the Depleted MAPS (DMAPS) technology [16], have been considered for the EIC detector. Existing advanced silicon detector R&D [17] can provide a good reference for the EIC project detector R&D. Progress of the 65 nm MAPS and AC-LGAD R&D, which are supported by the EIC project detector R&D programs, will be discussed in the following sections.

### EIC MAPS detector R&D

The MAPS prototype sensor for the ePIC silicon vertex and tracking detector is still under design. The related technical knowledge and aspects have been learnt from the parallel R&D of the 65 nm MAPS by the ALICE ITS3 project [8]. Three R&D projects (eRD104, eRD111, eRD113) have been supported by the EIC project detector R&D program for the 65 nm MAPS related detector design and R&D.
The near-term goals of these three MAPS R&D projects include (1) providing technical solutions to adapt the ITS3 based mechanical and electrical characteristics to the nominal EIC vertex layer radii based on the available reticle sizes, and optimization of the bending and interconnection techniques for the resulting configuration; (2) delivering the sensor stave configuration for the 4th and 5th middle layers and disk configurations for the endcap regions; (3) optimizing the number of stitched sensor units into a module composed of an aluminum flex PCB and an optimized number of sensors; and (4) performing a survey of the requirements for the large-area sensor design in a multiple-sensor to single-readout-chain scheme. The eRD104 project is investigating methods to significantly reduce the services load of the readout and power system for the EIC MAPS vertex and tracking detector. The eRD111 project focuses on developing the full detector mechanical design composed of the next generation 65 nm MAPS sensors. Fig. 9 shows the first version of the mechanical design of the MAPS vertex and tracking detector according to the EIC project detector reference design. The innermost three MAPS vertex layers are composed of curved MAPS sensors with a design similar to the proposed ITS3 sensor [18, 19]. The 4th and 5th MAPS barrel layers are made of MAPS sensor staves. As shown in Fig. 9, the current ePIC MAPS detector mechanical design implements stitched MAPS sensors to construct its hadron and electron endcap disks. Besides the support structure of the detector active volume, the routing of the readout and power cables and the cooling system are included in the ePIC MAPS detector mechanical design. As the ePIC detector design is evolving, the silicon vertex and tracking detector mechanical design will be updated accordingly. A new EIC project detector R&D project, eRD113, was recently set up to work on the EIC MAPS sensor design and associated sensor characterization.

Figure 9: The mechanical design of the MAPS vertex and tracking subsystem by the EIC eRD111 project according to the EIC project detector reference design. The hadron endcap disks are located on the right side and the electron endcap disks are on the left side. The current disk mechanical design utilizes the stitched MAPS sensors and the layout of the 3rd disk in the hadron endcap region is shown inside the red box.

### EIC AC-LGAD detector R&D

The ePIC AC-LGAD barrel and hadron endcap detector design has been implemented with the support structure and cooling system as shown in Fig. 10. The ePIC barrel AC-LGAD tracker consists of around 2.4 million channels to cover around 10.9 \(m^{2}\) area. The pixel size of the barrel AC-LGAD detector is 0.5 mm by 1.0 mm. The ePIC hadron endcap AC-LGAD tracker contains around 8.8 million pixels to cover around 2.22 \(m^{2}\) area. The pixel size of the hadron endcap AC-LGAD detector is 0.5 mm by 0.5 mm. The AC-LGAD sensors are fabricated on thin high-resistivity silicon p-type substrates (around 50 \(\mu\)m thickness) and use a large and shallow n+ implant to cover the deep p+ layer for p-n junctions. Multiplication of the electrons traversing the device can reach from 5 to 100, and the current pulses at the terminals are created by the holes drifting through the substrate [7]. Moreover, the electrodes of the AC-LGAD sensors that connect to the readout electronics are metal pads separated from the n+ layer by a thin dielectric insulator. This feature allows adjacent pixels of the AC-LGAD sensor to share charges, which can improve the hit spatial resolution. The BNL group has fabricated both the 0.5 mm by 0.5 mm AC-LGAD pixel sensors and AC-LGAD strip sensors with
different strip widths and pitches. With the 0.5 mm by 1.0 mm strip sensor design, the ePIC barrel AC-LGAD tracker can achieve around 30 \(\mu m\) spatial resolution in the \(r\varphi\) plane and around 30 ps timing resolution. The ePIC hadron endcap AC-LGAD tracker can obtain around 30 \(\mu m\) spatial resolution in the \(xy\) plane and around 25 ps timing resolution with the 0.5 mm by 0.5 mm pixel sensor design. A new EIC project detector R&D project, eRD112, was formed in 2022 to lead the ePIC AC-LGAD detector related R&D. The project goals include (1) producing medium/large-area AC-LGAD sensors with different doping concentrations, pitches and gap sizes between electrodes; and (2) providing a prototype ASIC design to read out AC-LGAD prototype sensors with around 30 ps timing resolution and low power consumption. Fig. 11 shows the progress of the EIC AC-LGAD detector R&D. New AC-LGAD prototype sensors with different pixel sizes have been produced at BNL and HPK. Beam tests of these new AC-LGAD sensors have been set up at the Fermilab Test Beam Facility. From the beam tests, the AC-LGAD prototype sensors can achieve around 30 \(\mu m\) spatial resolution and better than 30 ps timing resolution [20]. Although the EIC backgrounds are under evaluation, the radiation tolerance of newly produced EIC silicon technologies should be characterized. Irradiation tests of EIC AC-LGAD prototype sensors have been performed at the LANL LANSCE facility with a 500 MeV proton beam. The irradiation doses delivered to these samples are \(10^{13}-10^{16}\)\(n_{eq}cm^{-2}\) and the radiation impacts on the performance of the AC-LGAD sensors will be studied.

## 5 Summary and Outlook

As the EIC project enters the new phase towards its realization, good progress has been achieved in the EIC project detector design and related R&D. The EIC project detector reference design includes the MAPS and AC-LGAD silicon vertex and tracking subsystems, and its tracking performance can meet the EIC yellow report detector requirements in most kinematic regions. The newly formed ePIC collaboration is leading the EIC project detector technical design. The optimized ePIC detector design can achieve better tracking performance than the EIC project detector reference design. New EIC project detector R&D projects have been established to perform critical detector R&D for the EIC project detector. The EIC MAPS and AC-LGAD prototype sensor design, detector R&D and related detector mechanical design have made good progress. New collaborators are encouraged to join the ePIC collaboration to deliver a low material budget and high precision silicon vertex and tracking detector for the EIC project.

Figure 10: The current design of the ePIC AC-LGAD barrel detector (left) and hadron endcap detector (right). In addition to the detector active volume layout, the associated readout module, support structure and cooling system have been implemented.
2310.08012
AutoFHE: Automated Adaption of CNNs for Efficient Evaluation over FHE
Secure inference of deep convolutional neural networks (CNNs) under RNS-CKKS involves polynomial approximation of unsupported non-linear activation functions. However, existing approaches have three main limitations: 1) Inflexibility: The polynomial approximation and associated homomorphic evaluation architecture are customized manually for each CNN architecture and do not generalize to other networks. 2) Suboptimal Approximation: Each activation function is approximated instead of the function represented by the CNN. 3) Restricted Design: Either high-degree or low-degree polynomial approximations are used. The former retains high accuracy but slows down inference due to bootstrapping operations, while the latter accelerates ciphertext inference but compromises accuracy. To address these limitations, we present AutoFHE, which automatically adapts standard CNNs for secure inference under RNS-CKKS. The key idea is to adopt layerwise mixed-degree polynomial activation functions, which are optimized jointly with the homomorphic evaluation architecture in terms of the placement of bootstrapping operations. The problem is modeled within a multi-objective optimization framework to maximize accuracy and minimize the number of bootstrapping operations. AutoFHE can be applied flexibly on any CNN architecture, and it provides diverse solutions that span the trade-off between accuracy and latency. Experimental evaluation over RNS-CKKS encrypted CIFAR datasets shows that AutoFHE accelerates secure inference by $1.32\times$ to $1.8\times$ compared to methods employing high-degree polynomials. It also improves accuracy by up to 2.56% compared to methods using low-degree polynomials. Lastly, AutoFHE accelerates inference and improves accuracy by $103\times$ and 3.46%, respectively, compared to CNNs under TFHE.
Wei Ao, Vishnu Naresh Boddeti
2023-10-12T03:28:14Z
http://arxiv.org/abs/2310.08012v1
# AutoFHE: Automated Adaption of CNNs for Efficient Evaluation over FHE

###### Abstract

Secure inference of deep convolutional neural networks (CNNs) under RNS-CKKS involves polynomial approximation of unsupported non-linear activation functions. However, existing approaches have three main limitations: 1) _Inflexibility:_ The polynomial approximation and associated homomorphic evaluation architecture are customized manually for each CNN architecture and do not generalize to other networks. 2) _Suboptimal Approximation:_ Each activation function is approximated instead of the function represented by the CNN. 3) _Restricted Design:_ Either high-degree or low-degree polynomial approximations are used. The former retains high accuracy but slows down inference due to bootstrapping operations, while the latter accelerates ciphertext inference but compromises accuracy. To address these limitations, we present AutoFHE, which automatically adapts standard CNNs for secure inference under RNS-CKKS. The key idea is to adopt layerwise mixed-degree polynomial activation functions, which are optimized jointly with the homomorphic evaluation architecture in terms of the placement of bootstrapping operations. The problem is modeled within a multi-objective optimization framework to maximize accuracy and minimize the number of bootstrapping operations. AutoFHE can be applied flexibly on any CNN architecture, and it provides diverse solutions that span the trade-off between accuracy and latency. Experimental evaluation over RNS-CKKS encrypted CIFAR datasets shows that AutoFHE accelerates secure inference by 1.32\(\times\) to 1.8\(\times\) compared to methods employing high-degree polynomials. It also improves accuracy by up to 2.56% compared to methods using low-degree polynomials. Lastly, AutoFHE accelerates inference and improves accuracy by 103\(\times\) and 3.46%, respectively, compared to CNNs under TFHE.

## 1 Introduction

**MLaaS**, machine learning as a service, is a rapidly growing market with many commercial offerings like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Its growth has been driven by the widespread success of deep learning on many tasks like vision [15, 24], language [14, 56], games [49, 51], science [17, 28], and many more. Figure 1 shows a typical MLaaS scenario. The Cloud (Alice) holds deep learning models, while the customer (Bob) has private data and requests service from Alice. Bob wants to protect his private data and does not want Alice to learn sensitive information. On the other hand, deep learning models, including _neural architectures_ and _trained weights_, are properties of Alice. Alice spends considerable effort to design neural architectures, like ResNets [24], ViT [15], and MLP-Mixer [55], and consumes huge computational resources to search for novel neural architectures [39, 60] or train network weights [22, 57].

**Homomorphic Encryption (HE)**: Secure inference of deep learning models under leveled homomorphic encryption (LHE) [6, 20, 41] or fully homomorphic encryption (FHE) [33, 34, 18] is a promising approach for resolving security concerns between Bob and Alice in the context of MLaaS. FHE enables us to evaluate a circuit with arbitrary depth, including modern deep CNNs [33].

Figure 1: AutoFHE can automatically adapt the standard CNN with non-linear activations into a set of polynomial CNNs that span the trade-off between accuracy and latency for ciphertext inference. AutoFHE solutions can be deployed on the Cloud server to satisfy a range of customer requirements.

Figure 1 shows secure
inference of CNNs under FHE. First, Bob generates a public key to encrypt his private data and sends Alice the ciphertext. Second, Alice applies neural networks to process the ciphertext input, yielding an encrypted result. Finally, Bob uses the secret key to decrypt the encrypted result. Under FHE, Bob cannot learn Alice's neural architectures and weights, while Alice is also not exposed to Bob's data or the outcome.

**Polynomial CNNs:** Non-arithmetic activation functions, such as \(\text{ReLU}(x)=\max(x,0)\), are a core component of modern CNNs, aiding in learning non-linear decision boundaries between classes. For example, residual networks (ResNets) [24] are composed of Conv-BN-ReLU triplets [24]. Since FHE only supports _multiplications_ and _additions_, ReLU must be replaced by polynomial approximations to evaluate CNNs under FHE. Existing methods to generate polynomial CNNs fall into two categories: _manual design of low-degree and high-degree polynomial approximations_.

**(1)** A number of approaches adopt low-degree (typically \(\leq 3\)) polynomials [6, 12, 46, 45, 41, 20, 41] to substitute non-arithmetic activation functions and _train_ the resultant polynomial neural networks _from scratch_. For instance, CryptoNets [20], LoLa [6] and Delphi [45] employ a simple quadratic activation function \(x^{2}\). Faster CryptoNets [12] exploit the more accurate low-degree approximation \(2^{-3}x^{2}+2^{-1}x+2^{-2}\). SAFENet [42] adopts \(a_{1}x^{3}+a_{2}x^{2}+a_{3}x+a_{4}\) or \(b_{1}x^{2}+b_{2}x+b_{3}\), and HEMET [41] uses \(ax^{2}+bx+c\). After low-degree polynomials are plugged into networks like ResNets, both _network weights_ and _polynomial coefficients_ are trained from scratch using stochastic gradient descent (SGD). However, polynomial layers often lead to unstable training since they may dramatically amplify activations during forward propagation and gradients during backward propagation. For example, gradient explosion was observed in prior works [45, 42]. As such, low-degree approaches suffer from a _dilemma_. On the one hand, since low-degree polynomials cannot precisely approximate ReLU, polynomial networks have to be trained from scratch and suffer from poor prediction accuracy. On the other hand, using a higher degree polynomial approximation of ReLU leads to training instability due to exploding gradients. In either case, low-degree approaches achieve lower accuracy than ReLU networks, _e.g_., on CIFAR-10, HEMET [41] and SAFENet [42] report 83.7% and 88.9% Top-1 accuracy, respectively. To mitigate gradient explosion, AESPA [46] normalized the outputs of each polynomial basis separately, leading to improved predictive accuracy.

**(2)** A few approaches use high-degree polynomials to approximate ReLU precisely. So, high-degree approaches do not need to train polynomial networks from scratch and can inherit weights from pretrained ReLU networks. One representative polynomial approximation of ReLU is the Minimax composite polynomial [36, 32]. By expressing ReLU as \(\text{ReLU}(x)=x\cdot(0.5+0.5\cdot\text{sgn}(x))\), a composite polynomial is used to approximate \(\text{sgn}(x)\). The approximation of ReLU is defined as \(\text{AppReLU}(x)=x\cdot(0.5+0.5\cdot p_{\alpha}(x)),x\in[-1,1]\). \(p_{\alpha}(x)\) is the composite Minimax polynomial, and \(\alpha\) quantifies the approximation precision, _i.e_., \(|p_{\alpha}(x)-\text{sgn}(x)|\leq 2^{-\alpha}\).
Given \(x\in[-B,B]\), the scaled AppReLU is defined as \(B\cdot\text{AppReLU}(x/B)\), with a precision of \(B\cdot 2^{-\alpha}\). However, high-degree polynomials consume many multiplicative levels and require numerous bootstrapping operations, leading to a high computational burden. MPCNN [33], the state-of-the-art approach for secure inference of CNNs under RNS-CKKS, adopts Minimax composite polynomials with a precision of \(\alpha=13\). By choosing to approximate each ReLU function with high precision, MPCNN results in prediction accuracy that is comparable to ReLU networks. However, the same high-degree AppReLU replaces all ReLUs and consumes \(\sim\!\mathbf{50}\%\) of the levels. The ciphertext quickly exhausts levels and uses bootstrapping to refresh the zero-level ciphertext before every AppReLU layer (refer to Figure 2). As such, the homomorphic evaluation architecture needs to be customized for each CNN architecture. Moreover, bootstrapping operations consume \(>\!\mathbf{70}\%\) of inference time (refer to Table 5) and result in prohibitively high latency.

**Motivation:** We propose AutoFHE to address the above-mentioned limitations of existing methods for secure inference of CNNs. It is based on layerwise mixed-degree polynomials and a hybrid of approximation and training methods. The goal is to \(\underline{\text{Automatically generate polynomial CNNs}}\) and associated homomorphic evaluation architecture spanning a trade-off front of accuracy and latency under \(\underline{\text{FHE}}\).

**(1)** Layerwise Mixed-Degree Polynomials: A plausible solution to decrease the computational burden of secure inference is to assign layerwise mixed-degree polynomials to different ReLUs across a network, based on the observation that different layers in a network have varying degrees of sensitivity to approximation errors. SAFENet [42] demonstrated that mixed-degree polynomials can exploit layerwise sensitivity. However, current mixed-degree search frameworks are limited in two aspects. i) _small search space_: _e.g_., SAFENet only provides two polynomials, degree 2 and degree 3. The small search space cannot include all possible solutions from low-degree to high-degree polynomials. ii) _scalarization of multiple objectives_: a weighted sum is used to balance multiple objectives [45, 42]. However, this approach requires a pre-defined preference to weigh the different objectives. So, they cannot generate diverse solutions to meet different requirements in a single optimization run.

**(2)** Hybrid of Approximation and Training: Precisely approximating ReLU allows MPCNN to achieve the state-of-the-art ciphertext inference accuracy. Low-degree approaches train polynomial CNNs from scratch to compensate for the loss in accuracy. We posit that by taking advantage of approximation and training, we can inherit weights from ReLU networks and fine-tune polynomial networks to adapt learnable weights to layerwise mixed-degree polynomials. In principle, such a design can allow for high ciphertext inference accuracy while reducing latency.

However, realizing the above goals presents multiple challenges. The design space, which includes layerwise mixed-degree polynomials and the associated homomorphic evaluation architectures in terms of placement of bootstrapping operations, is prohibitively large for effective manual design. Therefore, in this paper, we advocate for automated optimization of the joint polynomial and homomorphic architecture.
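For reference, a minimal sketch of the scaled high-degree construction summarized above: \(\text{sgn}(x)\) is replaced by a composite odd polynomial on \([-1,1]\), and inputs in \([-B,B]\) are scaled in and back out. The simple polynomial used here is only a stand-in; it is not MPCNN's Minimax composite \(p_{\alpha}\), and its precision is far below \(\alpha=13\).

```python
import numpy as np

def f(x):
    return 1.5 * x - 0.5 * x**3          # odd "sharpening" polynomial, maps [-1, 1] into [-1, 1]

def p_sign(x, k=4):
    for _ in range(k):                   # repeated composition pushes values toward sgn(x)
        x = f(x)
    return x

def app_relu(x, B=10.0, k=4):
    xs = np.asarray(x, dtype=float) / B          # scale into [-1, 1]; assumes |x| <= B
    return B * xs * (0.5 + 0.5 * p_sign(xs, k))  # B * AppReLU(x / B)

print(app_relu(np.array([-3.0, -0.1, 0.1, 3.0])))  # ~[-0.08, -0.05, 0.05, 2.9]: a crude ReLU
```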
**Contributions:** We outline our contributions from two perspectives, _design_ and _system_. From a _design_ perspective, our contributions are,

1. Flexibility: We jointly search for polynomial approximation and a compatible homomorphic evaluation architecture (placement of bootstrapping operations) so we can automatically adapt _any_ convolutional neural networks for secure evaluation over FHE.
2. Optimal Approximation: We approximate the end-to-end function represented by the CNN instead of a standalone non-linear activation function. This allows us to exploit the varying sensitivity of different layers to approximation error, and obtain polynomial approximations with high accuracy and efficient evaluation over FHE.
3. Seamless Design: We allow for layerwise mixed-degree polynomials, which enables us to find a range of models that span the accuracy and latency trade-off.

From a _system_ perspective, our contributions are,

1. Search Space: We design search space to include all possible low-degree and high-degree polynomials to enable us to discover better solutions.
2. Search Objective: We formulate the search problem as a multi-objective optimization. We automatically generate diverse polynomial networks spanning the trade-off front between accuracy and latency in a single optimization run.
3. Search Algorithms: We propose combining search and training algorithms to search over large search spaces efficiently, optimize coefficients of arbitrary polynomials, and fine-tune layerwise mixed-degree polynomial networks. Specifically, we propose:
   * _MOS_, a multi-objective search algorithm to search for solutions within a large search space.
   * _R-CCDE_, a gradient-free search algorithm to optimize coefficients of composite polynomials.
   * _PAT_, a fine-tuning algorithm that adapts network weights to polynomial activations.

**Experimental Results** on encrypted CIFAR datasets show that AutoFHE has a better trade-off of accuracy and latency compared to high-degree and low-degree approaches under RNS-CKKS. Compared to high-degree MPCNN [33], AutoFHE accelerates inference by \(\mathbf{1.32\times\sim 1.8\times}\) while improving accuracy by \(\mathbf{+0.08\%\sim 0.3\%}\) on CIFAR10. AutoFHE speeds up inference by \(\mathbf{1.1\times\sim 1.4\times}\) while increasing accuracy by \(\mathbf{+0.36\%\sim 0.75\%}\) on CIFAR100. Compared to low-degree AESPA [46], AutoFHE improves ciphertext accuracy by \(\mathbf{+2.56\%}\) and \(\mathbf{+2.44\%}\) based on ResNet32 and ResNet44 backbones on CIFAR10 with similar latency. Furthermore, we compare the accuracy-latency trade-off of networks across two FHE schemes, namely RNS-CKKS and TFHE. Specifically, we compare AutoFHE networks designed for RNS-CKKS with REDsec [18], which is designed for TFHE. We observe that AutoFHE improves accuracy by \(\mathbf{+10.06\%}\) and \(\mathbf{+3.46\%}\) compared to REDsec BNets and BNet, respectively, while simultaneously reducing the corresponding latency by \(\mathbf{24\times}\) and \(\mathbf{103\times}\), respectively.

**Code**: [https://github.com/human-analysis/AutoFHE](https://github.com/human-analysis/AutoFHE).

## 2 Preliminaries

**RNS-CKKS:** The full residue number system (RNS) variant of Cheon-Kim-Kim-Song (RNS-CKKS) [8, 10] is a leveled homomorphic encryption (LHE) scheme for approximate arithmetic.
Under RNS-CKKS, a ciphertext \(\mathbf{c}\in\mathcal{R}^{2}_{Q_{\ell}}\) satisfies the decryption circuit \([\langle\mathbf{c},\mathbf{s}\rangle]_{Q_{\ell}}=m+e\), where \(\langle\cdot,\cdot\rangle\) is the dot product and \([\cdot]_{Q}\) is the modular reduction function. \(\mathcal{R}_{Q_{\ell}}=\mathbb{Z}_{Q_{\ell}}[X]/(X^{N}+1)\) is the residue cyclotomic polynomial ring. The modulus is \(Q_{\ell}=\prod_{i=0}^{\ell}q_{i}\), where \(0\leq\ell\leq L\). \(\ell\) is a non-negative integer, referred to as _level_, which denotes the capacity of homomorphic multiplications. \(\mathbf{s}\) is the secret key with Hamming weight \(h\). \(m\) is the original plaintext message, and \(e\) is a small error that provides security. A ciphertext has \(N/2\) slots to accommodate \(N/2\) complex or real numbers. RNS-CKKS supports homomorphic addition and multiplication:

\[\begin{split}\mathrm{Decrypt}(\mathbf{c}\oplus\mathbf{c}^{\prime})&=\mathrm{Decrypt}(\mathbf{c})+\mathrm{Decrypt}(\mathbf{c}^{\prime})\approx m+m^{\prime}\\ \mathrm{Decrypt}(\mathbf{c}\otimes\mathbf{c}^{\prime})&=\mathrm{Decrypt}(\mathbf{c})\times\mathrm{Decrypt}(\mathbf{c}^{\prime})\approx m\times m^{\prime}\end{split}\tag{1}\]

**Bootstrapping:** LHE only allows a finite number of homomorphic multiplications, with each multiplication consuming one level due to rescaling. Once a ciphertext's level reaches zero, a bootstrapping operation is required to refresh it to a higher level and allow more multiplications. The number of levels needed to evaluate a circuit is known as its _depth_. RNS-CKKS with bootstrapping [7] is an FHE scheme that can evaluate circuits of arbitrary depth. It enables us to homomorphically evaluate deep CNNs on encrypted data. Conceptually, bootstrapping homomorphically evaluates the decryption circuit and raises the modulus from \(Q_{0}\) to \(Q_{L}\) by using the isomorphism \(\mathcal{R}_{Q_{L}}\cong\mathcal{R}_{q_{0}}\times\mathcal{R}_{q_{1}}\times\cdots\times\mathcal{R}_{q_{L}}\) [4]. Practically, bootstrapping [7] homomorphically evaluates the modular reduction \([\cdot]_{Q}\) by first approximating it by a scaled sine function, which is further approximated through polynomials [7, 35]. Bootstrapping [4] has four stages: ModRaise, CoeffToSlot, EvalMod, and SlotToCoeff. Bootstrapping incurs a lot of key switching operations (KSO), which are the most time-consuming operation in the RNS-CKKS scheme [33]. The refreshed ciphertext has level \(\ell=L-K\), where \(K\) levels are consumed by bootstrapping [4] for the polynomial approximation of modular reduction.

**Threat Model:** In this paper, we assume the same threat model as prior works under HE, like HEMET [41], MPCNN [33], and REDsec [18]. As discussed in the MLaaS scenario, a customer uploads encrypted data to a Cloud server and requests ML services. The Cloud uses neural networks to process the ciphertext without decryption and sends back an encrypted result. Only the customer holds the secret key and can decrypt the encrypted result. The Cloud cannot learn sensitive information from the customer's data and the result. The customer is also not exposed to the Cloud's neural networks, including architectures and weights.

## 3 AutoFHE

Given a neural network \(g(\mathbf{a},\mathbf{\omega}_{0}(\mathbf{a}))\) with architecture \(\mathbf{a}\), _pretrained_ weights \(\mathbf{\omega}_{0}(\mathbf{a})\), and \(M\) ReLU layers, AutoFHE generates polynomial networks on a trade-off front by maximizing accuracy and minimizing latency.
For a given architecture \(\mathbf{a}\) during the search, every solution is represented by a triplet of variables \(\mathbf{S}(\mathbf{a})=(\mathbf{D}(\mathbf{a}),\mathbf{\Lambda}(\mathbf{a}),\mathbf{\omega}(\mathbf{a}))\). We will drop the architecture \(\mathbf{a}\) from hereon for ease of notation. \(\mathbf{D}\) is the degree vector of all EvoReLU layers, \(\mathbf{\Lambda}\) is the coefficients of all EvoReLU layers, and \(\mathbf{\omega}\) are the trainable weights of the neural network which are initialized with \(\mathbf{\omega}_{0}\) and fine-tuned for adaptation to the layerwise mix-degree polynomials. We will assign each solution with the minimization objective \(\mathbf{o}(\mathbf{S})=\{1-\operatorname{Acc}(\mathbf{S}),\operatorname{Boot}(\mathbf{S})\}\). \(1-\operatorname{Acc}(\mathbf{S})\) is the validation error, and \(\operatorname{Boot}(\mathbf{S})\) is the number of bootstrapping operations. Note that the accuracy and the number of bootstrapping operations depend on all the variables \(\mathbf{S}\). For instance, appropriately adapting the network weights \(\mathbf{\omega}\) could allow us to use lower-degree polynomial approximations of the activation functions, which reduces the required number of bootstrapping calls. Footnote 1: In our paper, _pretrained_ neural networks especially refer to neural networks with ReLU activation. AutoFHE is a search-based approach which comprises of three main components: _search space_ (Section 3.1), _search objective_ (Section 3.2) and _search algorithm_ (Section 3.3). We describe each of these in detail next. ### Search Space **EvoReLU** is a genetic polynomial function used to replace the non-arithmetic ReLU function. We model it as follows, \[y=\operatorname{EvoReLU}(x)=\begin{cases}x,&d=1\\ \alpha_{2}x^{2}+\alpha_{1}x+\alpha_{0},&d=2\\ x\cdot\left(\mathcal{F}(x)+0.5\right),&d>2\end{cases} \tag{2}\] where \(\mathcal{F}(x)\) is a composite polynomial with \(K\) sub-polynomials \(f_{k}^{\alpha_{k}}\) with degree \(d_{k}\) and seeks to approximate \(0.5\cdot\operatorname{sgn}(x)\) for \(d>2\). \[\mathcal{F}(x)=(f_{K}^{d_{k}}\circ\cdots\circ f_{k}^{d_{k}}\circ\cdots\circ f _{1}^{d_{1}})(x),1\leq k\leq K \tag{3}\] The total degree of \(\mathcal{F}(x)\) is \(\prod_{k=1}^{K}d_{k}\), and consequently the degree of EvoReLU is \(d=\prod_{k=1}^{K}d_{k}+1\). For \(d>2\) the **multiplicative depth** of EvoReLU is \(1+\sum_{k=1}^{K}\left\lceil\log_{2}(d_{k}+1)\right\rceil\) when using the Baby-Step Giant-Step (BSGS) algorithm [35, 4] to evaluate composite polynomial \(\mathcal{F}(x)\). For \(d=1\) and \(d=2\), the multiplicative depth is \(0\) and \(2\), respectively. EvoReLU is modeled to allow for _automatic_ discovery of common solutions in the literature for improving the latency of inference on ciphertexts. For instance, for \(d=1\), EvoReLU\((x)=x\) is equivalent to removing the corresponding ReLU layer through pruning [45, 27, 42]. Similarly, for \(d=2\), EvoReLU\((x)=\alpha_{2}x^{2}+\alpha_{1}x+\alpha_{0}\) is equivalent to using low-degree approximations of ReLU through quadratic functions [42, 45, 42, 12, 10, 6, 12]. Finally, for \(d>2\), EvoReLU\((x)=x\cdot\left(\mathcal{F}(x)+0.5\right)\) is equivalent to high-degree polynomial approximations [33, 32] of ReLU. In this case, EvoReLU bears similarity to the Minimax composite polynomial in Lee _et al._[33, 36]. However, the objective for optimizing the coefficients differs significantly. 
While Lee _et al._[33, 36] seek to approximate a single ReLU function precisely, our goal is to jointly optimize all EvoReLU functions in a neural network \(\mathbf{a}\) to approximate its corresponding function \(g(\mathbf{a},\mathbf{\omega})\). For the quadratic version of EvoReLU, we allow the coefficients \(\alpha_{0}\), \(\alpha_{1}\), and \(\alpha_{2}\) to differ _channel-wise_ when fine-tuning polynomial CNNs. Such a design improves performance and makes optimizing model weights more stable. For this case, we also introduce a BatchNorm layer after the quadratic EvoReLU for recentering and reshaping the distribution of output activations. We can integrate the BatchNorm parameters into the polynomial coefficients. So, there is no extra consumption of multiplicative levels. We **represent** the composite polynomial \(\mathcal{F}(x)\) by its degree vector \(\mathbf{d}=\{d_{k}\}_{k=1}^{K},d_{k}\in\mathbb{N}\). Each sub-polynomial \(f_{k}^{d_{k}}(x)\) as a linear combination of Chebyshev polynomials of degree \(d_{k}\), \[f_{k}^{d_{k}}(x)=\frac{1}{\beta_{k}}\sum_{i=1}^{d_{k}}\alpha_{i}\mathrm{T}_{i} (x) \tag{4}\] where \(\alpha_{i}\in\mathbb{R}\) and \(\beta_{k}\in\mathbb{R}\). \(\mathrm{T}_{i}(x)\) is the Chebyshev basis of the first kind, \(\alpha_{i}\) are the coefficients for linear combination, and scaling parameter \(\beta_{k}\) is a parameter to scale the output. The coefficients \(\mathbf{\alpha}_{k}=\{\alpha_{i}\}_{i=1}^{d_{k}}\) control the polynomial's shape, while \(\beta_{k}\) controls its amplitude. A composite polynomial with the degree vector \(\mathbf{d}\) has learnable parameters: \[\mathbf{\lambda}=\{\mathbf{\alpha}_{1},\beta_{1},\cdots,\mathbf{\alpha}_{k},\beta_{k}, \cdots,\mathbf{\alpha}_{K},\beta_{K}\} \tag{5}\] A neural network with \(M\) ReLU activations needs \(M\) EvoReLU polynomial activations. \(\mathbf{D}=\{\mathbf{d}_{1},\mathbf{d}_{2},\cdots,\mathbf{d}_{M}\}\) is the degree vector of all EvoReLUs and the corresponding coefficient parameters are \(\mathbf{\Lambda}=\{\mathbf{\lambda}_{1},\mathbf{\lambda}_{2},\cdots,\mathbf{\lambda}_{M}\}\). **Homomorphic Evaluation Architecture** refers to the placement of ConvBN, polynomial, and bootstrapping. In MPCNN, AppReLU depth is 14, ConvBN depth is 2, and the remaining levels after bootstrapping are \(L-K=16\). So, bootstrapping is placed after every AppReLU and ConvBN (Figure 2). AESPA [46] uses degree 2 Hermite polynomial (HerPN) with depth 2 to replace ReLU and Batchnorm. Therefore, every 4 Conv-HerPN should be followed by one bootstrapping operation (Figure 2). In AutoFHE, EvoReLU is layerwise and mixed-degree. The multiplicative depth of EvoReLU varies layer by layer. AutoFHE introduces a flexible evaluation architecture where bootstrapping can be called after ConvBN or EvoReLU (Figure 2). The flexible evaluation architecture of AutoFHE can better fit layerwise mix-degree EvoReLU. **Linear Scaling:** Since the domain of Chebyshev polynomials and bootstrapping is \([-1,1]\), we need to scale the ciphertext to \([-1,1]\) before Chebyshev polynomials and bootstrapping and reverse it after Chebyshev polynomials and bootstrapping. To avoid consuming levels, scaling is integrated into other operations. In MPCNN, the polynomial neural network with AppReLU can be roughly regarded as _piece-wise linear_ due to high precision approximation of high-degree AppReLU. 
MPCNN estimates domain of AppReLU using the training dataset and uses the maximum number of all AppReLU domains to scale down input images by \(\times 1/B\) and scale up ciphertext in the fully connected layer by \(\times B\). In AESPA, HerPN uses degree 2 Hermite polynomial and does not need to scale input of HerPN. However, the input of bootstrapping should be scaled to \([-1,1]\). The domain \([-B,B]\) of each bootstrapping can be estimated on the training dataset, then integrate \(\times 1/B\) into the previous HerPN and integrate \(\times B\) into the next Conv layer. In AutoFHE, bootstrapping can be placed before EvoReLU and after EvoReLU. So, we need to estimate domain \([-B_{in},B_{in}]\) and range \([-B_{out},B_{out}]\) of each EvoReLU on the training dataset. Figure 3 shows possible scaling scenarios in AutoFHE. We can integrate scaling into the previous or next operation. Specifically, we can multiply convolutional weight, batch norm weight and bias, and polynomial coefficients by the scaling constant. The scaling operation is conducted in plaintext and will not introduce either extra level consumption. With the scaling EvoReLU can now be expressed as, \[\text{EvoReLU}(x)=\begin{cases}x,&d=1\\ \alpha_{2}x^{2}+\alpha_{1}x+\alpha_{0},&d=2\\ x\cdot\left(\mathcal{F}(x/B)+0.5\right),&d>2\end{cases} \tag{6}\] **Search Space:** Our search space includes the number of sub-polynomials (\(K\)) in our composite polynomial, the choice of degrees for each sub-polynomial (\(d_{k}\)), and the coefficients of \begin{table} \begin{tabular}{l c} \hline \hline Variable & Option \\ \hline \# polynomials (\(K\)) & 6 \\ poly degree (\(d_{k}\)) & \(\{0,1,3,5,7\}\) \\ coefficients (\(\mathbf{\Lambda}\)) & \(\mathbb{R}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Search variables and options. Figure 3: Scaling in AutoFHE. Color Key: integrate scaling into previous operation; integrate scaling into next operation. Figure 2: Homomorphic evaluation architectures for Residual Networks (ResNets). 1st row: standard Conv-BN-ReLU triplet [24]. 2nd row: MPCNN [33] which uses high-degree polynomials. 3rd row: AESPA [46] which uses low-degree polynomials. 4th row: The proposed AutoFHE, which uses layerwise mixed-degree polynomials. Dashed rectangles indicate the plausible locations where bootstrapping can be placed. Both EvoReLU and the placement of bootstrapping operations are searched. the polynomials \(\mathbf{\Lambda}\). Table 1 shows each variable's options. Note that choice \(d_{k}=0\) corresponds to an identity placeholder, so theoretically, the composite polynomial may have fewer than \(K\) sub-polynomials. Furthermore, when the degree of \((p_{k}^{d_{k}}\circ p_{k-1}^{d_{k-1}})(x)\) is less than or equal to 31 (maximum degree of a polynomial supported on RNS-CKKS [32, 36]), we merge the two sub-polynomials into a single sub-polynomial \(p_{k}^{d_{k}}(p_{k-1}^{d_{k-1}})(x)\) with degree \(d_{k}\cdot d_{k-1}\leq 31\) before computing its depth. This helps reduce the size of the search space and leads to smoother exploration. Table 2 lists the number of ReLUs of our backbone models and the corresponding dimension and size of search space for \(\mathbf{D}\). Searching for layerwise EvoReLU is a challenging high-dimensional optimization problem within a vast search space. 
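To make the EvoReLU construction of Equations 2-6 and its level cost concrete, the following sketch evaluates an EvoReLU activation and computes its multiplicative depth. It is a minimal illustration under stated assumptions: Chebyshev bases are evaluated with NumPy, and the helper names (`sub_poly`, `evo_relu`, `evo_relu_depth`) and the example coefficients are ours, not from the AutoFHE code base.

```python
# Sketch of EvoReLU (Eqs. 2-6) and its multiplicative depth (illustrative only).
import math
import numpy as np
from numpy.polynomial import chebyshev as C

def sub_poly(x, alpha, beta):
    # f_k(x) = (1/beta) * sum_{i=1..d_k} alpha_i T_i(x); no constant term (Eq. 4)
    return C.chebval(x, np.concatenate(([0.0], alpha))) / beta

def evo_relu(x, degrees, params, B=1.0):
    d = int(np.prod(degrees)) + 1 if degrees else 1
    if d == 1:                       # identity (ReLU pruning)
        return x
    if d == 2:                       # quadratic approximation
        a2, a1, a0 = params
        return a2 * x**2 + a1 * x + a0
    y = x / B                        # scale the input into [-1, 1]
    for (alpha, beta) in params:     # composite F(x) approximating 0.5*sgn(x)
        y = sub_poly(y, alpha, beta)
    return x * (y + 0.5)

def evo_relu_depth(degrees):
    d = int(np.prod(degrees)) + 1 if degrees else 1
    if d == 1:
        return 0
    if d == 2:
        return 2
    return 1 + sum(math.ceil(math.log2(dk + 1)) for dk in degrees)   # BSGS depth

print(evo_relu_depth([7, 7, 5]))     # 1 + 3 + 3 + 3 = 10 levels
x = np.linspace(-1, 1, 5)
print(evo_relu(x, [], None))         # identity branch (degree 1)
print(evo_relu(x, [3], [(np.array([0.6, 0.0, 0.2]), 1.0)]))  # one degree-3 sub-polynomial, illustrative coefficients
```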
### Search Objective AutoFHE formulates the search problem as _a multi-objective_ optimization \[\begin{split}\underset{\mathbf{D}}{\text{min}}&\quad \left\{1-\text{Acc}_{val}\left(g\left(\mathbf{\omega}^{*}\mid\mathbf{D},\mathbf{\Lambda}( \mathbf{D})\right)\right),\text{Boot}(\mathbf{D})\right\}\\ \text{s.t.}&\quad\mathbf{\omega}^{*}=\underset{\mathbf{ \omega}}{\text{min}}\,\mathcal{L}_{train}\left(g\left(\mathbf{\omega}\mid\mathbf{D}, \mathbf{\Lambda}(\mathbf{D})\right)\right)\\ &\quad\quad\mathcal{L}_{train}=(1-\tau)\mathcal{L}_{CE}+\tau \mathcal{L}_{KL}\end{split} \tag{7}\] where \(g(\mathbf{\omega})\) is a neural network with \(M\) activation layers and the trainable network weight \(\mathbf{\omega}\). The outer multi-objective minimization formulation \(\underset{\mathbf{D}}{\text{min}}\left\{1-\text{Acc}_{val}\left(g\left(\mathbf{ \omega}^{*}\mid\mathbf{D},\mathbf{\Lambda}(\mathbf{D})\right)\right),\text{Boot}(\mathbf{D})\right\}\) for \(\mathbf{D}\) is to _maximize_ the validation accuracy \(\text{Acc}_{val}\) as well as _minimize_ the number of bootstrapping operations. The coefficient vector \(\mathbf{\Lambda}\) is formulated as a function of \(\mathbf{D}\). In Equation 7, \(\text{Acc}_{val}\) is the Top-1 accuracy on a validation dataset _val_, Boot is the number of bootstrapping operations. To determine the number of bootstrapping operations, we count the level consumption of all EvoReLU's to determine where we need to call bootstrapping. By minimizing the number of bootstrapping operations, we search for the placement of bootstrapping and minimize the wasted levels. For example, consider that we have a ciphertext with a level equal to 2, but the next operation consumes 10 levels. We must waste 2 levels and call bootstrapping to refresh the ciphertext first. AutoFHE can minimize the wasted levels by adjusting the depth of EvoReLU. \(\{\mathbf{D}_{i},\mathbf{\Lambda}_{i}\}\) has its corresponding network weight \(\mathbf{\omega}_{i}\) that can compensate errors introduced by layerwise EvoReLU \(\{\mathbf{D}_{i},\mathbf{\Lambda}_{i}\}\). We initialize \(\mathbf{\omega}_{i}\) with the weight \(\mathbf{\omega}_{0}\) from the pretrained ReLU network and then fine-tune the network \(g(\mathbf{\omega}_{i})\) to minimize the training loss \(\mathcal{L}_{train}(\mathbf{\omega}_{i})\) on the training dataset. In summary, the objective in Equation 7 guides the search algorithm to i) explore layerwise EvoReLU, including its _degrees_ and _coefficients_; 2) discover the placement of bootstrapping to work well with layerwise mixed-degree EvoReLU; 3) trade-off validation accuracy and inference latency to return diverse polynomial networks. The training loss \(\mathcal{L}_{train}\) used to optimize the weight \(\mathbf{\omega}\) is \((1-\tau)\mathcal{L}_{CE}+\tau\mathcal{L}_{KL}\), where \(\mathcal{L}_{CE}\) is the cross-entropy loss and \(\mathcal{L}_{KL}\) is the Kullback-Leibler (KL) divergence loss. \(\tau\) is a predefined parameter to balance CE and KL loss. In Equation 7, we omit the variable of \(\mathcal{L}_{train}\). Given EvoReLU \((\mathbf{D},\mathbf{\Lambda})\), the variable is weight \(\mathbf{\omega}\). The KL loss computes the distance between distributions of logits of the polynomial network and the ReLU network. We introduce the KL loss because it can push the output of the polynomial network close to the ReLU network. It can be regarded as knowledge distillation (KD) [25]. We transfer knowledge from the ReLU network to polynomial networks. 
### Search Algorithms #### 3.3.1 Multi-Objective Search **Multi-Objective Optimization:** Given two solutions with minimization objectives \(\mathbf{\sigma}_{1},\mathbf{\sigma}_{2}\in\mathbb{R}^{d}\), we want to minimize all entries of \(\mathbf{\sigma}_{1}\) and \(\mathbf{\sigma}_{2}\). If \(\mathbf{\sigma}_{1,i}\leq\mathbf{\sigma}_{2,i},\forall i\in\{1,2,\cdots,d\}\) and \(\mathbf{\sigma}_{1,j}<\mathbf{\sigma}_{2,j},\exists j\in\{1,2,\cdots,d\}\), then \(\mathbf{\sigma}_{1}\) _dominates_ \(\mathbf{\sigma}_{2}\) [13, 52]. It means \(\mathbf{\sigma}_{1}\) is better than \(\mathbf{\sigma}_{2}\), denoted as \(\mathbf{\sigma}_{1}\prec\mathbf{\sigma}_{2}\). A set of solutions \(\mathbf{O}=\{\mathbf{\sigma}_{i}\}_{i=1}^{N}\) can be grouped into sub-sets, \(\{\mathbf{\sigma}_{i}\}_{i=1}^{N_{1}}\), \(\{\mathbf{\sigma}_{i}\}_{i=1}^{N_{2}}\), \(\cdots\), corresponding to the 1st trade-off front, the 2nd trade-off front, and so on. Solutions within the same trade-off front are not dominated by each other. The 1st trade-off front dominates the 2nd trade-off front, and so on. **MOS:** Figure 4 and Algorithm 1 show our **M**ulti-**O**bjective **S**earch framework for AutoFHE. MOS is an evolutionary search algorithm to solve the multi-objective optimization in Equation 7. It maintains a population of solutions distributed on trade-off fronts of accuracy and the number of bootstrapping operations. We crossover and mutate solutions to improve every generation's trade-off fronts, as shown in Figure 4. During the search, we define the population size, namely the number of solutions, \(N\). These \(N\) solutions may be grouped into multiple trade-off fronts, which can improve the exploration ability during the search. We design the following operations to search for layerwise mixed-degree polynomials: 1. Select: Solutions of the current population are first grouped into different trade-off fronts by non-dominated sorting [13]. Solutions on the same trade-off front have the same fitness. The 1st trade-off front has a higher fitness than the 2nd trade-off front, and so on. We apply tournament selection [21] to improve the diversity of offspring. Specifically, we randomly select three solutions from the current population and keep the solution with the highest fitness. We choose \(N^{\prime}\) solutions from the current population to build the offspring. In our paper, we set \(N^{\prime}=6N\). 2. Crossover enables network-level information exchange. As shown in Figure 4, we randomly and uniformly exchange EvoReLU layers between two solutions (parents) to generate two new solutions. However, new solutions cannot inherit weights from parent solutions because the weights are adapted to the parents' layerwise mixed-degree polynomials. So, we initialize the weights with the pretrained weights and then fine-tune the new solutions. 3. R-CCDE optimizes EvoReLU coefficients. 4. PAT fine-tunes new polynomial CNNs. 5. Mutation locally explores EvoReLU: we randomly increase or decrease the degree of a sub-polynomial of EvoReLU with predefined probabilities to smoothly change the polynomials, as shown in Figure 4. 6. Pareto refers to non-dominated sorting and crowding distance sorting [13]. We apply Pareto to select \(N\) solutions from both the population and the offspring (\(N+N^{\prime}\) solutions) to build a new population (\(N\) solutions). A compact sketch of the Pareto sorting and tournament selection used in steps 1 and 6 is given below.
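The following sketch illustrates the Pareto machinery referenced above: non-dominated sorting over the two minimization objectives (validation error, number of bootstrapping operations) and a simple tournament selection. It mirrors NSGA-II-style sorting [13] only schematically, omits crowding distance, and the function names are ours rather than the paper's implementation.

```python
# Illustrative non-dominated sorting and tournament selection over
# (error, #bootstraps) objectives; both objectives are minimized.
import random

def dominates(o1, o2):
    return all(a <= b for a, b in zip(o1, o2)) and any(a < b for a, b in zip(o1, o2))

def nondominated_sort(objs):
    fronts, remaining = [], list(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts                       # fronts[0] is the best trade-off front

def tournament_select(objs, rank, k=3):
    cand = random.sample(range(len(objs)), k)
    return min(cand, key=lambda i: rank[i])   # lower rank (better front) wins

objs = [(0.10, 20), (0.08, 25), (0.12, 15), (0.09, 25), (0.15, 30)]
fronts = nondominated_sort(objs)
rank = {i: r for r, front in enumerate(fronts) for i in front}
print(fronts, tournament_select(objs, rank))
```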
#### 3.3.2 R-CCDE **Coevolution:** The composite polynomial used by EvoReLU is: \(y_{1}=f_{1}^{d_{1}}(x|\mathbf{\alpha}_{1},\beta_{1}),\cdots,y_{K}=f_{K}^{d_{K}}(y_{K-1}|\mathbf{\alpha}_{K},\beta_{K})\). The forward architecture of the composite polynomial, \(x\mapsto y_{1}\mapsto y_{2}\cdots\mapsto y_{K-1}\mapsto y\), is suitable for _coevolution_ [43, 44, 58] and provides a natural _decomposition_. We can sequentially adjust every sub-polynomial to push the output \(y\) close to the target non-arithmetic function. Given the degree vector \(\mathbf{d}\), the learnable parameters of EvoReLU, \(\mathbf{\lambda}=(\mathbf{\alpha}_{1},\beta_{1},\cdots,\mathbf{\alpha}_{K},\beta_{K})\), are grouped into \(\{\mathbf{\alpha}_{1}\},\{\beta_{1}\},\cdots,\{\mathbf{\alpha}_{K}\},\{\beta_{K}\}\). The coefficients \(\mathbf{\alpha}\) control the shape of the sub-polynomial output, while the scaling parameter \(\beta\) controls the amplitude. We sequentially update \(\{\mathbf{\alpha}_{k}\}\) followed by \(\beta_{k}\), \(1\leq k\leq K\), since i) sub-polynomials close to the input have a larger effect on the output, and ii) it is easier to learn coefficients by decoupling the amplitude from the coefficients. **Differential Evolution:** The EvoReLU variables \(\{\mathbf{\alpha}_{k}\}_{k=1}^{K}\) and \(\{\beta_{k}\}_{k=1}^{K}\) live in a continuous space. We adopt a simple yet effective search algorithm to optimize these variables. Differential evolution (DE) [48] only uses the _difference_ between solutions to optimize continuous variables. Consider the following minimization problem in the continuous space \[\mathbf{x}^{*}=\operatorname*{arg\,min}_{\mathbf{x}}\mathcal{F}\left(\mathbf{x}\right) \tag{8}\] where \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\mathcal{F}\) is the minimization objective. DE maintains a set of solutions \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N},\mathbf{x}_{i}\in\mathbb{R}^{d}\). The mutation, crossover, and selection of DE are defined as: \[\text{Mutation:}\ \mathbf{v}=\mathbf{x}_{i}+F\cdot\left(\mathbf{x}_{j}-\mathbf{x}_{k}\right),1\leq i,j,k\leq N \tag{9}\] \[\text{Crossover:}\ \mathbf{u}[t]=\begin{cases}\mathbf{v}[t],&\mathcal{U}(0,1)\leq CR\\ \mathbf{x}_{i}[t],&\text{Otherwise}\end{cases},1\leq t\leq d\] \[\text{Selection:}\ \mathbf{x}_{i}=\begin{cases}\mathbf{u},&\mathcal{F}\left(\mathbf{u}\right)\leq\mathcal{F}\left(\mathbf{x}_{i}\right)\\ \mathbf{x}_{i},&\text{Otherwise}\end{cases}\] where \(F\in\mathbb{R}\) is the scaling factor, \(CR\in\mathbb{R}\) is the crossover rate, and \(\mathcal{U}(0,1)\) is the uniform distribution between 0 and 1. Equation 9 shows a simple strategy to update solutions using only differences between them (see the sketch below). First, mutation updates \(\mathbf{x}_{i}\) with the scaled difference \(F\cdot(\mathbf{x}_{j}-\mathbf{x}_{k})\). Then, we randomly select items from \(\mathbf{v}\) or \(\mathbf{x}_{i}\) to generate a new solution \(\mathbf{u}\). Finally, we evaluate \(\mathcal{F}(\mathbf{u})\) and use \(\mathbf{u}\) to replace \(\mathbf{x}_{i}\) if \(\mathcal{F}(\mathbf{u})\leq\mathcal{F}(\mathbf{x}_{i})\). DE only uses differences and does not suffer from exploding gradients. It maintains a set of solutions and is not sensitive to initialization. Figure 4: Multi-objective search of AutoFHE. **R-CCDE:** We propose **R**egularized **C**ooperative **C**oevolution **D**ifferential **E**volution, called R-CCDE, to search for the parameters of \(\text{EvoReLU}(x,\mathbf{\lambda};\mathbf{d})\), namely \(\mathbf{\lambda}=\{\mathbf{\alpha}_{k},\beta_{k}\}_{k=1}^{K}\).
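Before the R-CCDE details, here is a minimal sketch of the plain DE update of Equation 9, applied to a toy coefficient-fitting objective. The population layout, the toy objective, and the helper name `de_step` are illustrative assumptions, not the authors' implementation; the scaling factor and crossover rate follow the values used later in the paper (0.5).

```python
# Minimal sketch of the differential evolution update in Equation 9.
import numpy as np

def de_step(X, objective, F_scale=0.5, CR=0.5, rng=np.random.default_rng(0)):
    N, d = X.shape
    for i in range(N):
        j, k = rng.choice([n for n in range(N) if n != i], size=2, replace=False)
        v = X[i] + F_scale * (X[j] - X[k])              # mutation
        mask = rng.random(d) <= CR
        u = np.where(mask, v, X[i])                     # crossover
        if objective(u) <= objective(X[i]):             # selection
            X[i] = u
    return X

# Toy objective: fit a0 + a1*x to |x| on [-1, 1] in the l1 sense.
xs = np.linspace(-1, 1, 101)
obj = lambda a: np.mean(np.abs(a[0] + a[1] * xs - np.abs(xs)))
X = np.random.default_rng(1).uniform(-1, 1, size=(20, 2))
for _ in range(50):
    X = de_step(X, obj)
print(min(obj(x) for x in X))   # best l1 fitting error in the population
```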
The scaling parameters \(\{\beta_{k}\}_{k=1}^{K}\) are used to adjust the amplitude of the sub-polynomials during the search. After the search, \(\{\beta_{k}\}_{k=1}^{K}\) are used to scale \(\{\mathbf{\alpha}_{k}\}_{k=1}^{K}\) and obtain the final polynomial coefficients. The decomposition makes the search easier by decoupling the shape and amplitude of the polynomials. We detail the implementation of R-CCDE in Algorithm 2. R-CCDE takes as input a composite polynomial \(\mathcal{F}(x)=(f_{K}^{d_{K}}\circ f_{K-1}^{d_{K-1}}\circ\cdots\circ f_{1}^{d_{1}})(x)\) with parameters \(\mathbf{\lambda}=\{\mathbf{\alpha}_{1},\beta_{1},\cdots,\mathbf{\alpha}_{k},\beta_{k},\cdots,\mathbf{\alpha}_{K},\beta_{K}\}\). Because EvoReLU is defined as \(y=\text{EvoReLU}(x)=x\cdot(\mathcal{F}(x)+0.5)\) in Equation 2, we use the composite polynomial \(\mathcal{F}(x)\) to approximate \(q(x)=0.5\cdot\text{sgn}(x)\). We set the number of generations and the scaling decay parameter to \(T\) and \(\gamma\), respectively. The objective function \(\mathcal{L}_{\mathcal{F}}\) is the \(\ell_{1}\) distance between the composite polynomial \(\mathcal{F}(x)\) and the target function \(q(x)\). R-CCDE maintains a _context vector_ [44] \(\mathbf{\lambda}^{*}=(\mathbf{\alpha}_{1}^{*},\beta_{1}^{*},\cdots,\mathbf{\alpha}_{k}^{*},\beta_{k}^{*},\cdots,\mathbf{\alpha}_{K}^{*},\beta_{K}^{*})\) as the best solution so far. \(\mathbf{\lambda}^{*}\) is initialized via Latin hypercube sampling (LHS). In Algorithm 2, \(\mathbf{\alpha}_{k}\) and \(\beta_{k}\), \(1\leq k\leq K\), are optimized using DE sequentially and alternately. In generation \(t\), given the \(k\)-th position, we optimize \(\mathbf{\alpha}_{k}\) as \[\begin{split}\mathbf{\alpha}_{k}^{*}&=\operatorname*{arg\,min}_{\mathbf{\alpha}_{k}}\mathcal{L}_{\mathcal{F}}(\mathbf{\alpha}_{k}|\mathbf{\lambda}^{*})\\ \text{s.t.}&\quad\mathbf{\alpha}_{k}|\mathbf{\lambda}^{*}=(\mathbf{\alpha}_{1}^{*},\beta_{1}^{*},\cdots,\mathbf{\alpha}_{k},\cdots,\mathbf{\alpha}_{K}^{*},\beta_{K}^{*})\end{split} \tag{10}\] where \(\mathbf{\alpha}_{k}\) is the variable, while the other \(\mathbf{\alpha}^{*}\)'s and \(\beta^{*}\)'s are fixed. A candidate solution of \(\mathbf{\alpha}_{k}\) is plugged into \(\mathbf{\lambda}^{*}\). Then, we evaluate the candidate solution \(\mathbf{\alpha}_{k}\) by evaluating \(\mathcal{L}_{\mathcal{F}}(\mathbf{\alpha}_{1}^{*},\beta_{1}^{*},\cdots,\mathbf{\alpha}_{k},\cdots,\mathbf{\alpha}_{K}^{*},\beta_{K}^{*})\). We adopt DE to solve this single-objective optimization problem in the continuous space. We maintain a set of candidate solutions of \(\mathbf{\alpha}_{k}\), namely \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N},\mathbf{x}_{i}\in\mathbb{R}^{d_{k}}\). Mutation, crossover, and selection as defined in Equation 9 are applied to update the solutions in \(\mathbf{X}\). Then, the best solution in \(\mathbf{X}\) is assigned to \(\mathbf{\alpha}_{k}^{*}\).
We then use the new \(\mathbf{\alpha}_{k}^{*}\) to replace the previous entry \(\mathbf{\alpha}_{k}^{*}\) in the context vector \(\mathbf{\lambda}^{*}\), i.e., \[\mathbf{\lambda}^{*}=(\mathbf{\alpha}_{1}^{*},\beta_{1}^{*},\cdots,\mathbf{\alpha}_{k}^{*},\beta_{k}^{*},\cdots,\mathbf{\alpha}_{K}^{*},\beta_{K}^{*}) \tag{11}\] In summary, i) \(\{\mathbf{\alpha}_{k}\}_{k=1}^{K}\) and \(\{\beta_{k}\}_{k=1}^{K}\) _separately_ maintain their sets of solutions that are optimized by DE; ii) the _context_ vector \(\mathbf{\lambda}^{*}\) is not only the best solution so far, it also allows the different variables to share information. When evolving \(\{\beta_{k}\}_{k=1}^{K}\), the objective introduces a _regularization_ term \[\begin{split}&\beta_{k}^{*}=\operatorname*{arg\,min}_{\beta_{k}}\underbrace{\mathcal{L}_{\mathcal{F}}(\beta_{k}|\mathbf{\lambda}^{*})}_{\ell_{1}\text{ Distance}}\quad+\underbrace{\gamma\cdot{\beta_{k}}^{2}}_{\text{Regularization}}\\ &\text{s.t.}\quad\beta_{k}|\mathbf{\lambda}^{*}=(\mathbf{\alpha}_{1}^{*},\beta_{1}^{*},\cdots,\mathbf{\alpha}_{k}^{*},\beta_{k},\cdots,\mathbf{\alpha}_{K}^{*},\beta_{K}^{*})\end{split} \tag{12}\] where \(\gamma\cdot\beta_{k}^{2}\) is the regularization term and \(\gamma\) is the scaling decay parameter. Without the regularization \(\gamma\cdot\beta_{k}^{2}\), we observe that \(\{\beta_{k}\}_{k=1}^{K}\) prefer _large_ values. Because \(f_{k}^{d_{k}}(x)=\frac{1}{\beta_{k}}\sum_{i=1}^{d_{k}}\alpha_{i}\mathrm{T}_{i}(x)\), large values of \(\{\beta_{k}\}_{k=1}^{K}\) make the composite polynomial _numerically stable_ because the polynomial output is scaled to a small number. However, it then becomes hard to distinguish different solutions of \(\{\mathbf{\alpha}_{k}\}_{k=1}^{K}\). By introducing the regularization term \(\gamma\cdot\beta_{k}^{2}\), DE prefers large values in earlier generations and gradually reduces \(\beta_{k}\). Therefore, DE is not biased toward solutions with large \(\beta_{k}\) values. We use R-CCDE to optimize the coefficients of quadratic and high-degree EvoReLU. For quadratic EvoReLU in Equation 2, \(\alpha_{2}\) is obtained by R-CCDE and \(\alpha_{1}=0.5\), \(\alpha_{0}=0\). #### 3.3.3 Polynomial-Aware Training Replacing ReLU with EvoReLU in pretrained neural networks injects _minor_ approximation errors, which leads to performance loss. Fine-tuning can mitigate this performance loss by allowing the learnable weights (e.g., convolution or fully connected layers) to adapt to the approximation error. However, backpropagation through EvoReLU easily leads to exploding gradients since the gradients may be amplified exponentially by the many composed polynomials. From Equation 2, high-degree EvoReLU can precisely approximate ReLU, while ReLU pruning (degree 1) and quadratic EvoReLU (degree 2) have a larger approximation error. For low-degree EvoReLU (degree \(\leq\) 2), we can backpropagate through the polynomial directly because it does not amplify gradients. For high-degree EvoReLU (degree \(>\) 2), we can use gradients from the original non-arithmetic ReLU function for _backpropagation_. Specifically, during _the forward_ pass, EvoReLU introduces slight errors that are captured by the training loss. During the _backward_ pass, we bypass high-degree EvoReLU and use ReLU to compute gradients to update the weights of the linear trainable layers. We refer to this procedure as **P**olynomial-**A**ware **T**raining (PAT). PAT is inspired by STE [3] and QAT [26], which use two different functions for forward- and back-propagation.
PAT is defined as: \[\frac{\partial\text{EvoReLU}(x)}{\partial x}=\begin{cases}1,&d=1\\ 2\alpha_{2}x+\alpha_{1},&d=2\\ \partial\text{ReLU}(x)/\partial x,&d>2\end{cases} \tag{13}\] ## 4 Experiments **Datasets:** We benchmark AutoFHE on CIFAR10 and CIFAR100 [31]. Both datasets have 50,000 training and 10,000 validation images at a resolution of \(32\times 32\). CIFAR10 has 10 classes, while CIFAR100 includes 100 classes. The validation images are treated as private data and used only for evaluating the final networks. To guide the search process, we randomly select 10,000 images from the training split as a _minival_ [53] dataset and use the Top-1 accuracy on the minival dataset to optimize Equation 7. In addition, PAT uses the training split to fine-tune polynomial networks. Finally, we report the Top-1 accuracy on the encrypted validation dataset under RNS-CKKS. **Parameters: (1)** Training Parameters: We train ReLU networks (used by MPCNN and AutoFHE) and AESPA using the SGD optimizer with batch size 128, 200 epochs, learning rate 0.1, momentum 0.9 and weight decay 0.0005. We use a cosine learning rate scheduler. We clip gradients to 1 when we train polynomial networks (AESPA or AutoFHE). **(2)** Search Parameters: For MOS, we set the number of generations to 10. The population size is proportional to the number of variables. We set the population size to 10, 20, 30 and 40 for VGG11, ResNet20, ResNet32 and ResNet44, respectively. The offspring size is \(6\times\) the population size. When we mutate a polynomial, its degree is decreased by 2 with a probability of 0.5 and increased by 2 with a probability of 0.3. For R-CCDE, we set the search domain of \(\boldsymbol{\alpha}\) to \([-5,5]\) and that of \(\beta\) to \([1,5]\). We use a set of 20 solutions for optimizing \(\beta\). For \(\boldsymbol{\alpha}\), we set the number of solutions equal to \(20\times\) the number of variables. We set both the scaling factor \(F\) and the crossover rate \(CR\) to 0.5. We set the scaling decay to \(\gamma=0.01\) and the number of iterations to 100. We run R-CCDE 10 times with different random seeds and retain the best solution. **(3)** Finetuning Parameters: To fine-tune AutoFHE polynomial networks, we train them using PAT with batch size 128, learning rate 0.02, momentum 0.9, weight decay 0.0005, and KL weight \(\tau=0.9\). We clip gradients to 1. During the search, to quickly estimate the accuracy of polynomial networks, we set the number of epochs to 5. After the search, we set the number of epochs to 90 and use the cosine annealing learning rate scheduler. **(4)** Cryptographic Parameters: We follow MPCNN and use the same RNS-CKKS cryptographic parameters [33] for MPCNN, AESPA and AutoFHE. The cyclotomic polynomial degree is \(N=2^{16}\). The Hamming weight of the secret key is 192. The ciphertext level is \(L=30\), while bootstrapping uses 14 levels (\(K=14\)). Base modulus, special modulus, and bootstrapping modulus are set to 51 bits, while the default modulus is set to 46 bits [33]. The cryptographic parameters satisfy _128-bit security_ [33, 9]. **Hardware and RNS-CKKS Library: (1)** Search: On one NVIDIA RTX A6000 GPU, the search process for ResNet-20/32/44 and VGG11 on CIFAR10 took 44 hours, 64 hours, 88 hours, and 13 hours, respectively. The search for ResNet32 and VGG11 on CIFAR100 took 67 and 12 hours, respectively. To accelerate R-CCDE, we use 100 CPU threads of an AMD EPYC 7502 32-core processor.
**(2)** FHE Inference: We evaluate latency under FHE on the publicly available Amazon AWS instance r5.24xlarge, which has 96 CPU threads and 768 GB RAM. We build a C++ implementation of AutoFHE under RNS-CKKS on top of MPCNN using the Microsoft SEAL library [50]. We adopt the MPCNN implementations of Conv, BN, Downsample, AvgPool, and FC layers. **Baselines:** We compare the proposed AutoFHE with two recent state-of-the-art approaches under RNS-CKKS, the high-degree polynomial, approximation-based approach MPCNN [33] and the low-degree polynomial, training-based approach AESPA [46], as shown in Table 3. We also benchmark against REDsec [18] under TFHE to compare different FHE schemes. MPCNN uses a Minimax composite polynomial approximation of ReLU and reports high ciphertext accuracy (refer to Appendix A). AESPA applies a degree 2 Hermite polynomial to replace ReLU, which reduces the multiplicative depth of polynomials and greatly reduces the consumption of bootstrapping. Because AESPA was originally proposed under secure MPC, we implement AESPA under RNS-CKKS (refer to Appendix B). We use the same training parameters for MPCNN, AESPA and AutoFHE. We estimate the scaling parameters of MPCNN on the training datasets: 21.26 (ResNet20), 21.99 (ResNet32), 17.80 (ResNet44), 29.82 (VGG11) on the CIFAR10 dataset and 63.40 (ResNet32) and 54.97 (VGG11) on CIFAR100. TFHE Baseline: The fast fully homomorphic encryption scheme over the torus (TFHE) [11] provides very fast bootstrapping by using bootstrapped binary gates. REDsec applies efficient ternary networks (TNNs) under TFHE. REDsec reports the performance of BNet\({}_{5}\) and BNet on CIFAR10 under CPU TFHE [54]. BNet\({}_{5}\) and BNet achieve ciphertext accuracies of 81.9% and 88.5%, with latencies of 1,081 and 4,622 seconds per image on the CIFAR10 dataset on an AWS r5.24xlarge instance using 96 CPU threads [18]. The TFHE cryptographic parameters used by REDsec satisfy 128 bits of security [18]. Please note that REDsec uses a different parallel acceleration strategy. REDsec takes advantage of all 96 CPU threads to process one encrypted image, while the RNS-CKKS approaches allocate one thread to each image. Therefore, REDsec has the same latency and amortized latency. When we evaluate the latency of AutoFHE, MPCNN and AESPA on AWS r5.24xlarge, we input 96 encrypted images using 96 CPU threads. The amortized latency is \(\frac{1}{96}\times\) the latency under RNS-CKKS. So, the amortized latency of REDsec and the RNS-CKKS approaches is comparable. ### AutoFHE under RNS-CKKS **Trade-offs under FHE:** Figure 5 shows trade-offs between ciphertext accuracy and amortized latency on the CIFAR10 dataset. We benchmark the performance of ResNet and VGGNet backbones on the CIFAR10 and CIFAR100 datasets, as shown in Table 4. For the TFHE baseline REDsec, it reported 81.9% ciphertext accuracy with a latency of 1,081 s per image and 88.5% ciphertext accuracy with a latency of 4,622 s per image on CIFAR10 [18]. We have the following observations: CIFAR10 and 1.7% \(\sim\) 2.4% on CIFAR100 compared to the corresponding plaintext backbone models. AutoFHE takes advantage of layerwise mixed-degree polynomials to reduce bootstrapping consumption while maintaining high accuracy. From Figure 6 and Table 4, AutoFHE shows a better trade-off between ciphertext accuracy and latency than MPCNN and AESPA. Compared to MPCNN on CIFAR10, **AutoFHE** accelerates encrypted image inference by **1.32\(\times\)\(\sim\) 1.8\(\times\)** while improving accuracy by **+0.08% \(\sim\) 0.3%**.
Similarly, on CIFAR100, **AutoFHE** speeds up inference by **1.1\(\times\)\(\sim\) 1.4\(\times\)** while increasing accuracy by **+0.36% \(\sim\) 0.75%**. From Figure 6, compared to AESPA, AutoFHE improves ciphertext accuracy by **+2.56**% and **+2.44**% for the ResNet32 and ResNet44 backbones on CIFAR10 with similar amortized latency. However, AESPA's low-degree polynomials lead to an increasing drop in accuracy with model depth. This starkly contrasts with plaintext models, where deeper models are known to improve performance. AutoFHE can improve performance with depth while maintaining the same latency as AESPA. As such, we observe from Figure 5 and Table 4 that AutoFHE enjoys a much better trade-off between accuracy and latency. Figure 6: Trade-offs between ciphertext accuracy and amortized latency of ResNet and VGG backbones on CIFAR10. In summary, the challenge of navigating the vast joint design space of polynomial approximations of non-linear activation functions and homomorphic evaluation architectures limits manual approaches like MPCNN and AESPA to simplified solutions, such as approximating standalone non-linear activation functions and placing bootstrapping operations uniformly. In contrast, AutoFHE algorithmically navigates the design space and identifies solutions that significantly _Pareto dominate_ manual approaches in accuracy, latency, or both. Observation: High-precision function approximation can preserve plaintext accuracy without training, while low-precision function approximation with training leads to a loss in accuracy. AutoFHE is a hybrid method that inherits the representation learning ability of ReLU networks and adapts the network's learnable weights to layerwise polynomials. On the one hand, MPCNN has a drop in accuracy of \(0.07\sim 0.15\%\) and \(0.22\sim 0.69\%\) compared to plaintext backbone models on CIFAR10 and CIFAR100, respectively. This demonstrates that high-degree polynomials still introduce slight approximation errors. On the other hand, AESPA has a significant drop in accuracy, especially for deeper networks, which was also observed by AESPA's authors [46]. The results demonstrate that the representation learning ability of (low-degree) polynomial neural networks is inferior to that of ReLU networks [37]. Unlike AESPA, AutoFHE inherits the representation learning ability from ReLU networks by using pretrained weights and transferring knowledge. Furthermore, we fine-tune polynomial networks using very small learning rates to adapt the learnable network weights to layerwise mixed-degree polynomial EvoReLU. Therefore, AutoFHE can achieve the _high accuracy_ of ReLU networks and the _low latency_ of low-degree polynomial networks. As such, compared to MPCNN and AESPA, AutoFHE both improves prediction accuracy and reduces inference latency over all ResNet and VGG backbones. **Operations under RNS-CKKS:** Table 5 shows the latency of different operations under RNS-CKKS. AppReLU is a high-degree polynomial, so its evaluation latency is higher than that of the degree 2 HerPN. The latency of EvoReLU lies roughly between AppReLU and HerPN. Low-bootstrapping solutions of AutoFHE further speed up polynomial evaluation compared to HerPN, _e.g._, ResNet32 with eight bootstrapping operations, ResNet44 with eight bootstrapping operations, and VGG11 with one bootstrapping operation. In MPCNN, bootstrapping dominates latency with \(74\sim 77\%\) of the total inference time. In AESPA, linear layers and bootstrapping operations consume similar runtime.
The latency of AutoFHE is similar to that of AESPA for low-bootstrapping solutions and to that of MPCNN for high-bootstrapping solutions. Linear operations of MPCNN are faster than those of AESPA and AutoFHE since they are evaluated at a lower level. For example, MPCNN ConvBN always takes a level 2 ciphertext as input. The evaluation of low-level ciphertexts is faster than that of high-level ciphertexts. For AESPA and AutoFHE, the polynomial activations (HerPN and EvoReLU) have smaller multiplicative depth, and their linear operations take ciphertexts at higher levels as input. From Table 5, we observe that i) AutoFHE can effectively accelerate neural network inference on RNS-CKKS by removing time-consuming bootstrapping operations; ii) layerwise mixed-degree EvoReLU effectively reduces the multiplicative depth of the polynomials and thereby further decreases the consumption of bootstrapping operations. ### Layerwise AutoFHE **Depth Distribution:** To analyze the layerwise mixed-degree EvoReLU discovered by AutoFHE, we study (see Figure 7) the distributions of multiplicative depth for different backbones on CIFAR10. In contrast to the uniform allocation used by MPCNN and AESPA, the optimal layerwise allocation of EvoReLU discovered by AutoFHE is highly non-uniform. Such a distribution is challenging to design manually, thus further motivating the need for automated design of layerwise mixed-degree polynomial approximations of activation functions. From the distribution in Figure 7, we identify the following design principles that can guide the design of polynomial neural networks under RNS-CKKS. Design principle 1: Low and high bootstrapping solutions share a similar distribution of multiplicative depth. In Figure 7, we provide two solutions with low and high bootstrapping operations for each backbone. These two solutions share similar depth distributions. Specifically, high-degree polynomials are preferred in the same layers of low and high bootstrapping solutions. Design principle 2: The depth distribution is linearly scalable. Consider the depth distributions of VGG11(4) and ResNet20(5), ResNet20(11) and ResNet32(19) in Figure 7. They have different numbers of layers. If we scale their distributions horizontally to match the number of layers, their depth distributions are very similar. This demonstrates that the number of layers and the position of activations are the most important factors affecting the sensitivity of layerwise approximation. Therefore, the depth distributions are linearly scalable with the neural network's depth. Many networks have consecutive linear (depth 0) layers, especially ResNet44(22). In this case, it is possible to integrate successive linear layers into a single linear layer, which decreases the multiplicative depth and removes bootstrapping operations. Table 5 demonstrates that reducing linear operations is an effective way to further accelerate inference. **Layerwise EvoReLU:** Figure 8 shows layerwise mixed-degree EvoReLU functions for VGG11 with 4 bootstrapping operations. High-degree EvoReLU functions can precisely approximate ReLU, _e.g._ layers 0, 1 and 2. Medium-degree EvoReLU functions (layers 3, 5, 8) exhibit oscillation but are still very close to ReLU. Degree 2 EvoReLU is a quadratic function and introduces more approximation error compared to high-degree and medium-degree functions. For high-degree and medium-degree polynomials, the input is scaled to \([-1,1]\), which prevents exploding activations.
Figure 8 qualitatively shows that high-degree and medium-degree EvoReLU functions can precisely approximate ReLU, and we can use gradients of ReLU in PAT to prevent exploding gradients during backpropagation. Since degree 2 EvoReLU has a relatively big approximation error, we use SGD rather than approximate gradients from ReLU (refer to Equation 13). ## 5 Related Work In this paper, we focus on secure inference under FHE. Secure multiparty computation (MPC) is an alternative approach for secure inference [19, 29, 30, 42, 45, 47, 40]. It is usually employed in a hybrid protocol involving both MPC and HE. MPC primitives include secret sharing, Yao's garbled circuits [2, 59], and Beaver's multiplicative triples [1], _etc._ For example: (1) Gazelle [29] adopts packed additively homomorphic encryption (PAHE) to evaluate linear layers (Conv and FC layers) and garbled circuits to evaluate non-linear layers (ReLU and MaxPooling layers). (2) Delphi [45] uses Garbled circuits to evaluate ReLU, and Beaver's multiplicative triples to evaluate the polynomial approximation \(x^{2}\) of ReLU. (3) Iron [23] employs the Brakerski-Fan-Vercauteren (BFV) scheme [5, 16] for matrix multiplication. Non-linear operations, like SoftMax, GELU, and LayerNorm, are evaluated through secret sharing. Secure MPC-based approaches for secure inference must carefully consider the trade-off between computation and communication [29] since both the customer and the Cloud Figure 8: EvoReLU functions of AutoFHE for VGG11 with 4 bootstrapping operations. The top row shows EvoReLU for layer 0 \(\sim\) 4, and the bottom row shows layer 5 \(\sim\) 9. Figure 7: Multiplicative depth of layerwise mixed-degree EvoReLU layers. perform computations. In some practical scenarios, sufficient communication and computing resources on the client side may not be available. Pure FHE-based approaches provide customers a _fire-and-forget_[18] service, where they are not involved in the computations and simply wait for the encrypted result. However, FHE-based approaches may have higher latency and memory footprint than secure MPC approaches. In terms of polynomial neural networks, FHE approaches have to replace _all_ ReLU activations with polynomials since FHE only supports multiplications and additions. Secure MPC approaches usually replace only a fraction of ReLUs to reduce online communication and computation costs and retain a few ReLU layers to preserve accuracy, _e.g_. Delphi [45] and SAFENet [42]. These approaches, however, only use low-degree polynomials and observe a significant drop in accuracy when most ReLU activations are replaced. AutoFHE can also be employed for secure MPC schemes by changing the search objective from the number of bootstrapping operations to online communication or computation costs of secure MPC. ## 6 Conclusion Non-interactive end-to-end inference of homomorphically encrypted ciphertext images over convolutional networks is an attractive solution for mitigating the security and privacy concerns of cloud-based MLaaS offerings. Adapting CNNs for inference over FHE ciphertexts presents several challenges, including the optimal design of polynomial approximations of non-linear activation functions and associated homomorphic evaluation architecture. Existing solutions primarily rely on manual designs, which are neither scalable nor flexible enough to be applied to any architecture and cater to the needs of different MLaaS customers. 
To overcome these challenges, this paper introduced AutoFHE, an automated approach for adapting any convolutional neural network for secure inference under RNS-CKKS. It is a multi-objective search algorithm that generates a set of polynomial networks and their associated homomorphic evaluation architecture under FHE by trading off accuracy and latency. It exploits layerwise mixed-degree polynomial activations across different layers in a network and jointly searches for placement of bootstrapping operations for evaluation under RNS-CKKS. We designed a custom search space for layerwise mixed-degree polynomials and adopted multiple objectives for optimization. We also proposed a combination of search and training algorithms, including multi-objective search algorithm _MOS_, composite polynomial coefficient optimization method _R-CCDE_, and polynomial aware network training strategy _PAT_. We extensively evaluate AutoFHE on ResNets and VGGNets over encrypted CIFAR datasets. Compared to high-degree MPCNN, AutoFHE accelerates inference by 1.32\(\times\sim 1.8\times\). Compared to low-degree AESPA, AutoFHE improves accuracy by up to 2.56%. Finally, models under RNS-CKKS (AutoFHE) accelerate inference by 103\(\times\) and improve accuracy by 3.46% compared to models under TFHE (REDsec). Our results demonstrate the effectiveness of automated search-based algorithms in navigating the large search space of adapting convolution neural networks for inference over FHE ciphertexts and discovering networks that Pareto-dominate manually designed ones. In summary, an integrated and automated design of polynomial approximations and homomorphic evaluation architecture is an effective and flexible approach for seamlessly adapting CNNs for inference on FHE ciphertexts.
2305.05958
Propagation Modeling for Physically Large Arrays: Measurements and Multipath Component Visibility
This paper deals with propagation and channel modeling for physically large arrays. The focus lies on acquiring a spatially consistent model, which is essential, especially for positioning and sensing applications. Ultra-wideband, synthetic array measurement data have been acquired with large positioning devices to support this research. We present a modified multipath channel model that accounts for a varying visibility of multipath components along a large array. Based on a geometric model of the measurement environment, we analyze the visibility of specular components. We show that, depending on the size of the reflecting surface, geometric visibility and amplitude estimates obtained with a super-resolution channel estimation algorithm show a strong correspondence. Furthermore, we highlight the capabilities of the developed synthetic array measurement system.
Thomas Wilding, Benjamin J. B. Deutschmann, Christian Nelson, Xuhong Li, Fredrik Tufvesson, Klaus Witrisal
2023-05-10T08:00:39Z
http://arxiv.org/abs/2305.05958v1
# Propagation Modeling for Physically Large Arrays: Measurements and Multipath Component Visibility ###### Abstract This paper deals with propagation and channel modeling for physically large arrays. The focus lies on acquiring a spatially consistent model, which is essential, especially for positioning and sensing applications. Ultra-wideband, synthetic array measurement data have been acquired with large positioning devices to support this research. We present a modified multipath channel model that accounts for a varying visibility of multipath components along a large array. Based on a geometric model of the measurement environment, we analyze the visibility of specular components. We show that, depending on the size of the reflecting surface, geometric visibility and amplitude estimates obtained with a super-resolution channel estimation algorithm show a strong correspondence. Furthermore, we highlight the capabilities of the developed synthetic array measurement system. Propagation modeling, radio measurements, radio positioning. ## I Introduction In recent years, the number of mobile devices, commonly termed user equipment (UE), and the size and number of infrastructure have increased. Such distributed infrastructure is very promising for future communication systems and has been widely researched in, e.g., XL-MIMO [1], radio stripes [2], large intelligent surface (LIS) [3] or reflective intelligent surface (RIS) [4], as well as RadioWeaves (RW) [5, 6]. In the context of RW, the infrastructure is seen as federation of contact service points (CSPs), highlighting its extended capabilities. The very large overall system aperture is envisioned to yield the required high system performance in terms of throughput, positioning accuracy, or wireless power transfer as well as efficiency. Due to this large aperture, the channels for different regions of the CSP will become nonstationary. In [7], this nonstationary was analyzed utilizing scattering cluster visibility regions for massive MIMO channels, similar to [8], where types of nonstationarities and strategies to exploit these were discussed. In [9] the COST2100 model was extended by visibility regions for scatterers, including a point process model for the birth and death of clusters. When the employed channel model is not accurate enough, model-based estimators will suffer from errors due to model mismatch, which was analyzed in [10]. In this paper, we apply the notion of visibility regions to multipath components (MPCs) and present an ultrawideband (UWB) synthetic array measurement system that allows to collect data for propagation analysis and algorithm development. We briefly outline a channel model including MPC visibility and analyze the feasibility of this approach, performing non-parametric and parametric analyses of the measurements. The remainder of this paper is structured as follows. Section II outlines the channel model, Section III introduces the developed measurement system, Section IV discusses the measurement results and relations to the channel model. Section V concludes the paper. ## II System Model We consider a system with a single physically large array (PLA) characterized by a large aperture relative to the propagation distances of interest. This is commonly referred to as propagation in the array near-field, where the wavefront curvature is noticeable, requiring accurate modeling for position related algorithms. An illustration of the system is given in Fig. 
1, containing a single PLA representing a CSP at position \(\mathbf{a}=[a_{x},a_{y},a_{z}]^{\mathsf{T}}\) that receives the signal from a single UE at location \(\mathbf{p}=[p_{x},p_{y},p_{z}]^{\mathsf{T}}\) equipped with a single antenna. The PLA is equipped with \(M\) antenna elements at locations \(\mathbf{a}^{(m)}=[a_{x}^{(m)},a_{y}^{(m)},a_{z}^{(m)}]^{\mathsf{T}}\), defined relative to the reference point \(\mathbf{a}\). The orientation of the PLA is known in the global coordinate system, with the orientation axes indicated as red, green, and blue lines. To model deterministic, also termed specular, multipath propagation, we employ a mirror source model [11] to model reflecting surfaces. An exemplary mirror source located at \(\mathbf{a}_{k}\) with array elements at \(\mathbf{a}_{k}^{(m)}\) (relative to \(\mathbf{a}_{k}\)) is included, obtained by mirroring the PLA at wall segment \(k\) (dashed blue). Note that the orientation at the mirror PLA is mirrored. Considering a limited extent of the wall segment, not all array elements \(m\) will receive a corresponding specular MPC, indicated for array elements \(m\) and \(m^{\prime}\) at the outer edges of the PLA. Fig. 1: Visualization of a generic system setup, including a BS-PLA at position \(\mathbf{a}\), an exemplary mirror PLA \(\mathbf{a}_{k}\), and a UE at position \(\mathbf{p}\). Exemplary multipath components corresponding to a wall segment of a limited extent are included, showing the resulting geometric visibility via ray-tracing. The following section briefly outlines a signal model that accounts for the varying visibility. The signal model includes specular MPCs as well as scattering and diffuse propagation in the environment. The received signal vector \(\mathbf{y}(f)\) at baseband frequency \(f\) is denoted by \[\mathbf{y}(f)=\sum_{k=1}^{K}\mathbf{h}_{k}(f)s(f)+\mathbf{w}_{\text{s}}(f)+\mathbf{w}(f)\in \mathbb{C}^{M} \tag{1}\] consisting of a deterministic signal component in the form of a (finite) sum of \(K\) MPCs attributable to specific environment features, a stochastic signal-related component \(\mathbf{w}_{\text{s}}(f)\), and additive white Gaussian noise (AWGN) \(\mathbf{w}(f)\) representing the sampled receiver noise. The stochastic signal-related component \(\mathbf{w}_{\text{s}}\) is commonly used to represent any form of stochastic multipath propagation such as scattering [9, 12] or a dense multipath component [13, 14, 15]. In the deterministic signal component, each MPC \(k\) is described by the channel vector \(\mathbf{h}_{k}(f)\in\mathbb{C}^{M}\) and the known transmit signal \(s(f)\) in complex baseband. In realistic environments, the number of MPCs \(K\) is unknown and varies depending on the locations of the transmitting and receiving devices. The entries of the channel vector are defined as [16] \[[\mathbf{h}_{k}(f)]_{m}=\alpha_{k,m}\mathrm{exp}(-j2\pi(f+f_{\text{c}})\tau_{k,m}) \tag{2}\] with carrier frequency \(f_{\text{c}}\), MPC delay \(\tau_{k,m}=\|\mathbf{a}_{k}^{(m)}-\mathbf{p}\|/c\), speed of light \(c\), and the MPC amplitudes \(\alpha_{k,m}\) defined as \[\alpha_{k,m}\propto\frac{\tilde{\alpha}_{k,m}b(\varphi_{k,m},\theta_{k,m})}{ c\tau_{k,m}}v_{k,m}^{\text{vis}}. 
\tag{3}\] The factor \(b(\varphi,\theta)\in\mathbb{C}\) represents the complex-valued antenna gain pattern in direction of the azimuth \(\varphi\) and elevation \(\theta\) of the corresponding MPC at the \(m\)th array element. The quantity \(\tilde{\alpha}_{k,m}\) on the right-hand side of (3) represents the environment-related, path loss compensated MPC amplitude, i.e., the amplitude related to transmit power and attenuation due to reflection at different materials. Note that for a specific MPC \(k\), the amplitude \(\tilde{\alpha}_{k,m}\) can vary per array element, e.g., due to angle dependent reflection coefficients of surfaces and their electromagnetic material properties, or when the size of the array becomes large, such that classical array processing assumptions such as plane wave propagation and negligible propagation attenuation along the array do not apply to the full PLA. The visibility is considered by the factor \(v_{k,m}^{\text{vis}}\), which is \(v_{k,m}^{\text{vis}}=1\) if the component \(k\) is visible at array element \(m\), and \(v_{k,m}^{\text{vis}}=0\) otherwise. Note that this factor can be absorbed into the amplitudes but is expressed here to highlight the relation to the environment. ## III Measurement System The developed measurement system for synthetic PLAs consists of a mechanical positioner allowing to form 2-dimensional arrays with arbitrary planar geometry and a Rohde & Schwarz ZVA24 vector network analyzer (VNA) equipped with UWB antennas for performing the channel measurements. RF hardware and antennasThe VNA covers a frequency range of \(24\,\mathrm{GHz}\) (from DC) and was calibrated with a through-open-short-match (TOSM) calibration kit. The antennas used in the measurements are dipole antennas, manufactured from cent coins (see [17, App. B.3]), and dipole-slot antennas that were manufactured according to the XETS-antenna design from [18]. While the VNA covers a wider frequency range, measurements are only performed in the frequency band of \(3-10\,\mathrm{GHz}\) for which the antennas are designed. Due to the measurement aperture allowing reception from azimuth and elevation, the slot antennas' design was chosen as it exhibits an approximately omnidirectional pattern. When mounted on the absorber and reflector plate construction, the antennas only receive from the half-space facing away from the absorber. Mechanical positionerThe mechanical positioner allows to form arbitrary planar synthetic array geometries in an automated fashion with sub-millimeter accuracy for the relative antenna positions. The maximum measurement area that can be used spans roughly1\(2.5\,\mathrm{m}\times 1.5\,\mathrm{m}\), shown in Fig. 1(a). The horizontal and vertical axes are equipped with a _dryve(r)D1_ motor controller driving two _drylin(r)NEMA24_ stepper motors, allowing motion with variable acceleration, deceleration, and velocity. Each stepper motor has a holding break to keep the position during stop intervals. The vertical axis is equipped with a slide to mount the antennas and is itself attached to the horizontal axes. To emulate a PLA mounted directly on a surface, e.g., a CSP in a RW system, the mechanical positioner is equipped with a wideband absorber and a reflector plate on the slide behind the antenna (see Fig. 1(b)), removing the reflection from the surface on which the positioner is mounted on. 
Due to this construction, the distance from the antenna phase center to the back wall is roughly \(43\,\mathrm{cm}\) (including the antenna length of \(10\,\mathrm{cm}\)). Footnote 1: Depending on the installation in the environment, the actual usable measurement area can be smaller due to necessary safety margins. _Measurement environments._ Measurements were performed in two environments of different sizes: in a medium size room representing a typical lab/office environment and in a large environment with a high ceiling and larger surfaces. Photos of measurement regions of the PLAs are shown in Fig. 1(a) and 1(d). The number of array elements per PLA was \((112\times 75)\) and \((88\times 32)\), forming uniform rectangular arrays (URAs) with \(\lambda/2\)-spacing at \(f_{\text{c}}=6.95\,\mathrm{GHz}\) and \(f_{\text{c}}=6\,\mathrm{GHz}\) in the medium and large environments, respectively, with \(\lambda=c/f_{\text{c}}\) denoting the wavelength. In each environment, measurements at five UE positions have been conducted, which are labeled M1-M5 and L1-L5 and located at heights of \(h_{\text{M}}=\{1.546,0.895,2.235,1.478,1.202\}\,\mathrm{m}\) and \(h_{\text{L}}=\{1.145,1.317,1.162,1.590,1.592\}\,\mathrm{m}\), respectively. To aid the measurement analysis, 3-dimensional models were generated for both environments, shown in Fig. 1(c) and 1(f) for medium and large size, respectively. These allow making use of the mirror source model to represent the MPCs from (1). The use of a mirror source model is widespread in literature [11], and in the context of this work it enables comparing the model-based component visibility with estimated components obtained by position-based beamforming or a super-resolution channel estimation algorithm. ## IV Measurement Results and Analysis This section presents the results from the performed measurement analysis and summarizes the results for the models in (1), (2) and (3). For the processing of measurements, we compare non-parametric and parametric approaches, both aided by a mirror source model based on the environment models (see Fig. 2). The non-parametric approach is a standard spherical wave beamformer, computed for the entire array and a bandwidth of \(3\,\mathrm{GHz}\). The parametric approach is a super-resolution sparse Bayesian learning (SBL) algorithm to estimate MPCs, based on [19]. For processing the measurements with the SBL algorithm, a bandwidth of \(500\,\mathrm{MHz}\) is used and the PLA is separated into subarrays of size (\(4\times 4\)) or (\(8\times 8\)). This allows using the plane-wave assumption per subarray and analysing the MPC visibility and amplitudes. ### _Spherical wave beamforming_ An exemplary spherical beamformer spectrum computed for position M1 is shown in Fig. 3 in terms of the marginal spectra for combinations of delay, azimuth, and elevation. The azimuth-elevation power spectrum in Fig. 3(a) represents the view from the PLA shown in Fig. 1(c). The azimuth-elevation spectrum in Fig. 3(a) shows a number of specular MPCs in addition to diffuse components. The azimuth-delay and elevation-delay spectra show similar combinations of specular and diffuse components. Locations of high power in the full spectrum represent locations of origin of specular MPCs from (2), i.e., of the corresponding mirror sources. A downside of the use of the full array for spherical beamforming is that the visibility information of MPCs is lost, with partially visible MPCs simply resulting in lower power at the corresponding spectrum bin.
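To make the beamforming step concrete, the following is a minimal, illustrative Python/NumPy sketch of a spherical-wave (delay-and-sum) beamformer consistent with the signal model in (1)-(3); it is not the authors' implementation. It assumes a single synthetic MPC with unit environment amplitude, isotropic antennas, and full visibility, and evaluates the beamformer on a small grid of candidate UE positions.

```python
import numpy as np

c = 3e8                                      # speed of light [m/s]
fc = 6.95e9                                  # carrier frequency [Hz] (medium environment)
f = np.linspace(-1.5e9, 1.5e9, 64)           # baseband frequencies spanning a 3 GHz bandwidth

# Synthetic PLA: small uniform rectangular array with lambda/2 spacing (for illustration)
lam = c / fc
nx, ny = 16, 8
ax, ay = np.meshgrid(np.arange(nx) * lam / 2, np.arange(ny) * lam / 2)
a = np.stack([ax.ravel(), ay.ravel(), np.zeros(nx * ny)], axis=1)   # element positions [m]

p_true = np.array([1.2, 0.8, 3.0])           # UE position [m]

def channel(p, a, f, fc):
    """Channel vectors per (2)-(3): unit environment amplitude, full visibility, 1/(c*tau) loss."""
    tau = np.linalg.norm(a - p, axis=1) / c                           # per-element delays
    alpha = 1.0 / (c * tau)
    return alpha[None, :] * np.exp(-1j * 2 * np.pi * (f[:, None] + fc) * tau[None, :])

Y = channel(p_true, a, f, fc)                # received signal for s(f) = 1, no noise

def beamformer(p, Y, a, f, fc):
    """Spherical-wave (delay-and-sum) output: coherent sum over elements and frequencies."""
    tau = np.linalg.norm(a - p, axis=1) / c
    steer = np.exp(-1j * 2 * np.pi * (f[:, None] + fc) * tau[None, :])
    return np.abs(np.sum(Y * np.conj(steer))) ** 2

# Evaluate the beamformer on a line of candidate positions around the true UE location
xs = np.linspace(0.8, 1.6, 41)
spectrum = np.array([beamformer(np.array([x, 0.8, 3.0]), Y, a, f, fc) for x in xs])
print("peak at x =", xs[np.argmax(spectrum)])                         # expected: 1.2
```

In this noise-free, single-path setting the output peaks at the true position; with measured data the same operation produces the delay/azimuth/elevation spectra discussed above, at the cost of losing per-element visibility information.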
### _Geometry-based analysis_ By exploiting the known measurement position in combination with the environment model (at the example of position M1), the received power per array element is computed to show local variations over the area of the PLA. Results are shown in Fig. 4 for three components: the direct path, the window reflection, and the whiteboard reflection, with the corresponding environment segments highlighted by red, dashed outlines in Fig. 1(c). The direct path amplitudes in Fig. 4(a) show the expected attenuation due to the distance-dependent path loss and the gain pattern of the antenna. As patch antennas were used at the PLA and the UE, the gain pattern affects the amplitudes twice.
Fig. 2: Visualization of measurement locations in medium and large size environments. (a) points distributed in medium size environment, (b) trajectory including obstruction in medium size environment, (c) distributed points and trajectory in large size environment.
The full PLA is in line-of-sight (LoS) condition with respect to the direct path. The window component (see Fig. 4(b)) shows much stronger amplitude variations and a distinct visibility region. The whiteboard component (see Fig. 4(c)) shows significant amplitude fading, which relates to the computed geometric visibility regions (cf. Fig. 5(d)). ### _Super-resolution channel estimation_ For the parametric analysis, the SBL algorithm [19] is applied to the signals per subarray. Due to the small subarray size w.r.t. the propagation distance, this allows using the plane wave assumption and assuming negligible propagation attenuation along the subarrays [16], and (1) is modified accordingly. Furthermore, components are assumed to be visible for all subarray elements, using square URAs with \(M=\{16,64\}\) array elements. Increasing \(M\) improves the analysis of the component visibility due to the array gain, e.g., compared to the results from Sec. IV-B. We restrict the maximum number of components to be estimated in the SBL-algorithm to \(\hat{K}=20\) to reduce the computation time. #### IV-C1 Model representation A summary of the SBL-estimates for the measurement positions in both environments is given in Table I. The table compares the estimated component energy (as a percentage of the total received signal energy) for the six strongest components at all measurement positions. The results are shown in terms of mean and standard deviation taken over the resulting \(N_{\text{s}}=500\) subarrays. Comparing the total estimated energy with the residual energy gives a metric on how well the channel estimator captures the total signal energy. The results for different measurement positions show that a high percentage of the signal energy is covered by all estimated components, with usually close to \(95\,\%\) of the energy contained in all estimated components in the medium environment. In the large environment, the fixed number of \(\hat{K}=20\) components is not sufficient, capturing usually below \(80\,\%\) of the signal energy. The significantly lower percentage of \(82\,\%\) for M4 can be explained by the proximity to metal shelves, from which a larger number of scattered components can be expected, resulting in more diffuse components not well represented by the deterministic MPC model. Similarly, locations L3 and L5 which experience strong obstructions due to the metal shelves only capture below \(40\,\%\) of the total signal energy on average, which is also attributable to a larger number of diffuse components not well represented by the model.
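The geometry-based analysis above and the component-visibility analysis that follows both rely on mirror (image) sources derived from the environment model. The sketch below is an illustrative Python reconstruction, under simplifying assumptions, of that step: an array element is marked as "seeing" the specular MPC of a finite rectangular wall segment only if the reflection point of the path from the element's mirror image to the UE lies inside the segment. All function and variable names are hypothetical; the actual ray-tracer used in the study may differ.

```python
import numpy as np

def mirror_point(x, wall_point, wall_normal):
    """Mirror a point across the (infinite) plane containing the wall segment."""
    n = wall_normal / np.linalg.norm(wall_normal)
    return x - 2.0 * np.dot(x - wall_point, n) * n

def specular_visible(a_m, p, wall_point, wall_normal, wall_u, wall_v, ext_u, ext_v):
    """Return True if array element a_m receives the specular MPC via the wall segment.

    wall_u, wall_v are unit vectors spanning the wall plane; ext_u/ext_v are the
    half-extents of the finite segment around wall_point.
    """
    a_mirror = mirror_point(a_m, wall_point, wall_normal)
    d = p - a_mirror                                   # straight line: mirror image -> UE
    n = wall_normal / np.linalg.norm(wall_normal)
    denom = np.dot(d, n)
    if np.isclose(denom, 0.0):
        return False                                   # path parallel to the wall plane
    t = np.dot(wall_point - a_mirror, n) / denom
    if not (0.0 < t < 1.0):
        return False                                   # no reflection point between the two
    hit = a_mirror + t * d                             # specular reflection point on the plane
    rel = hit - wall_point
    return abs(np.dot(rel, wall_u)) <= ext_u and abs(np.dot(rel, wall_v)) <= ext_v

# Example: wall segment in the x-z plane (normal along +y), centred at (2, 3, 1.5)
wall_point = np.array([2.0, 3.0, 1.5])
wall_normal = np.array([0.0, 1.0, 0.0])
wall_u, wall_v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
ue = np.array([1.0, 1.0, 1.2])
element = np.array([3.5, 0.5, 1.8])
print(specular_visible(element, ue, wall_point, wall_normal, wall_u, wall_v, 1.0, 0.75))
```

Sweeping this test over all element positions yields the per-element visibility factor \(v_{k,m}^{\text{vis}}\) of (3) and the visibility regions compared against the estimated amplitudes below.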
#### IV-C2 Component visibility To analyze the visibility and spatial amplitude distribution of estimated components, we rely on the environment models shown in Fig. 1(c) and 1(f) to compute image source positions for expected MPCs. For each subarray, the estimates obtained from the SBL-algorithm (component delay, azimuth, and elevation) are associated with the components corresponding to the computed image sources using the data association (DA) algorithm from [20]. _Medium environment._ The estimated amplitudes per subarray associated with modeled components are shown alongside the computed geometric visibility using the model (and ray tracing) in Fig. 5, shown by the example of M1 and selected components. The direct path is visible at all subarrays independent of their size, showing a similar distribution of estimated amplitudes (see Fig. 5(b) and 5(c)). The combined effect of the antenna gain patterns is again observable, with the shown estimated amplitudes compensated for the distance-dependent path loss. The whiteboard component is only visible in a rectangular area in the upper right corner of the full array (see Fig. 5(d)), which represents the subarray regions where components were found that could be associated with the corresponding image source. The differences between small and large subarray size in Fig. 5(e) and 5(f) can be attributed to the increased array gain of the larger ones, as components with lower signal-to-noise ratio (SNR) can be detected.
Fig. 3: Spherical wave beamformer spectra for position M1 in the medium size environment using the full data (full bandwidth, all array elements). The azimuth-elevation power spectrum (Fig. 3(a)) represents the view from the synthetic PLA into the room (see Fig. 1(c)).
_Large environment._ Similar results are obtained for the large-sized environment. At the example of measurement position L1, Fig. 6 shows the results for subarrays of dimension \((4\times 4)\) and a bandwidth of \(500\,\mathrm{MHz}\). Note that the measurement area is smaller due to the position in the environment. Additionally, the lower carrier frequency of \(f_{\mathrm{c}}=6\,\mathrm{GHz}\) results in a slightly larger inter-element spacing. The figure shows the computed visibility (top row of subplots) and the amplitude estimates obtained with the SBL algorithm (bottom row) for the direct path, and the walls to the left and right of the array (as seen from the array). The direct path (Fig. 6(a) and 6(d)) is generally in LoS condition for the full PLA area. Nonetheless, variations in the estimated amplitude can be observed, which are likely caused by overlap with the left wall component (see the model in Fig. 1(f) and the photo in Fig. 1(d)), as the \((4\times 4)\) URAs have a comparably low spatial resolution. For the reflection via the right wall (Fig. 6(b) and 6(e)), the visibility computed from the environment model shows a triangular region of component invisibility in the bottom right corner of the PLA, which coincides with the region of low estimated amplitudes of similar shape. Note that the geometric shape of the visibility region is due to the measurement area located underneath the right wall (see Fig. 1(e)), with the lower wall edge representing the boundary between the visibility regions for the corresponding MPC. The left wall, in turn, shows stronger amplitude fading and less clear correspondence with the computed geometric visibility.
Again, these variations could be attributed to the geometric configuration of the measurement setup: as the measurement location is close to the corresponding image source, path overlap in the delay and the angle domain again likely causes fading. ## V Conclusion and Future Work The synthetic PLA measurements and the analysis presented in this paper have shown the feasibility and necessity of employing visibility regions for MPC-based models in the context of PLAs. Based on the channel measurements, we have shown that it is possible to perform super-resolution channel estimation for subarrays for which local stationarity can be assumed, inherently accounting for visibility. This is important in location-aware applications, where the measured channel is used to estimate the user location or infer environment parameters. Consequently, robust algorithms must consider the limited component visibility and the varying number of components and received signal power.
Fig. 4: Signal power for each antenna position (e.g., PLA element) computed by performing position-based, i.e., spherical wave, beamforming for the direct path, window, and whiteboard reflections at the example of position M1. The (image) source positions are computed from the model in Fig. 1(c).
Fig. 5: Estimated amplitudes obtained with the SBL-algorithm [19] for \((4\times 4)\) and \((8\times 8)\) subarrays and 500 MHz bandwidth. Data association is performed for direct path and whiteboard components.
Future work will deal with algorithms that exploit the described propagation conditions as well as refinement of the outlined channel model that considers visibility on a per-array-element, or at least per-subarray, level. Furthermore, algorithms for the optimal fusion of local measurements/estimates obtained by subarrays to attain the performance of a corresponding fully coherent PLA aperture are under development based on the channel measurements. ## Acknowledgment The project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101013425 (Project "REINDEER").
2307.02696
Multi-Foci Acoustic Field Generation Using Dammann Gratings for Phased Array Transducers
Phased array transducers can shape acoustic fields for versatile manipulation; however, generating multiple foci typically requires complex optimization. In this study, we show that Dammann gratings - binary phase gratings used in optics to generate arrays of equal intensity spots - can be adapted for acoustics to produce multiple equal-strength foci using a phased array transducer. The transducer elements were assigned phases of 0 or pi based on a Dammann grating defined by transition points. Simulation results show that simple gratings with two transition points can generate fields with up to 12 foci with nearly equal acoustic pressures. Compared to conventional multi-focus phase optimization techniques, the Dammann grating approach is computationally efficient and enables facile reconfiguration of the focal pattern by adjusting the single-focus or grating hologram. This study demonstrates that adapting binary phase functions from photonics can expand the capabilities of ultrasound for versatile acoustic manipulation tasks that require parallel actuation at multiple points.
Tatsuki Fushimi
2023-07-05T23:59:57Z
http://arxiv.org/abs/2307.02696v1
# Multi-Foci Acoustic Field Generation Using Dammann Gratings for Phased Array Transducers ###### Abstract Phased array transducers can shape acoustic fields for versatile manipulation; however, generating multiple foci typically requires complex optimization. In this study, we show that Dammann gratings - binary phase gratings used in optics to generate arrays of equal intensity spots - can be adapted for acoustics to produce multiple equal-strength foci using a phased array transducer. The transducer elements were assigned phases of 0 or \(\pi\) based on a Dammann grating defined by transition points. Simulation results show that simple gratings with two transition points can generate fields with up to 12 foci with nearly equal acoustic pressures. Compared to conventional multi-focus phase optimization techniques, the Dammann grating approach is computationally efficient and enables facile reconfiguration of the focal pattern by adjusting the single-focus or grating hologram. This study demonstrates that adapting binary phase functions from photonics can expand the capabilities of ultrasound for versatile acoustic manipulation tasks that require parallel actuation at multiple points. + Footnote †: preprint: APS/123-QED Acoustic radiation force has emerged as a powerful tool for remotely manipulating and controlling small particles in diverse fields. Unlike other remote manipulation techniques, such as photophoretic or electrostatic forces, acoustic force offers a distinct advantage due to its ability to exert force on target objects irrespective of their material properties. There have been significant advances in the field of acoustophoresis, driven by the development of phased array transducers (PAT) in recent years [1; 2; 3; 4]. Specifically, the dynamic multi-focal capabilities of PAT have garnered considerable interest due to their potential in enabling parallelization [5; 6]. The ability to simultaneously focus on multiple targets can enhance performance and efficiency in applications such as experiment automation [7; 8], acoustophoretic displays [9; 10; 11], and ultrasonic haptic displays [5; 12].
Figure 1: Comparison of multi-focal acoustic field generation techniques. Conventional methods: (a) standing wave fields easily generate foci, but require two transducers, (b) phase optimization methods realize custom fields but require optimization. Proposed method: (c) the Dammann grating phase profile assigns 0 or \(\pi\) phases to elements, simply generating multiple equivalent foci.
One of the most straightforward methods for generating a multi-focal field involves using a standing wave field, as illustrated in Fig. 1. A standing wave can be generated by the superposition of two counter-propagating waves, typically generated by a transducer and reflector or by another set of transducers. Although this approach is conceptually simple, it necessitates counter-propagating waves, which may not always be practical due to spatial constraints or accessibility limitations. An alternative approach involves specifying the desired focal position and using a phase retrieval algorithm (or acoustic hologram optimizers, Fig. 1) to determine the appropriate phase sets that produce the desired field. A wide range of techniques have been developed to realize multi-focal fields, including the Eigensolver approach [5; 13], iterative backpropagation (IBP) [6], GS-PAT [13], and Diff-PAT [14; 15].
Although these algorithms can be efficient, they require computational resources as they must be optimized numerically. In this study, we introduce the use of acoustic lenses based on Dammann gratings to generate multi-focal fields. Dammann gratings [16] are binary phase gratings that produce one- or two-dimensional arrays of equal intensity in optics and have been widely used in Fourier optics. However, to date, their implementation for acoustics has remained largely unexplored. A notable advantage of the proposed method is that the multi-focal field can be directly specified by the Dammann grating function. This eliminates the need for optimization algorithms and expands the method's applicability to a range of applications. It should be noted that petal beams [17; 18] can also generate multi-foci fields, yet the Dammann grating presents an alternative strategy. The expansion of methodological diversity benefits the study of acoustic holograms. Moreover, where petal beams create a focused field around the propagation axis, the Dammann grating forms a standing-wave-like field along the propagation axis. A Dammann grating can be defined as follows [19; 16]: \[g(x)=\sum_{n=0}^{N}{(-1)^{n}\text{rect}\left[\frac{x-0.5(x_{n+1}+x_{n})}{x_{n+1}-x_{n}}\right]}, \tag{1}\] where the \(x_{n}\) are the entries of the vector \([x_{0},x_{1},...,x_{N+1}]\), which contains the \(N\) transition points (the entries must be in ascending order, with \(x_{0}=0\) and \(x_{N+1}=0.5\)). Furthermore, the rectangular function is defined as \(\text{rect}(x)=1\) if \(|x|<0.5,\quad 0\) if \(|x|\geq 0.5\). The function \(g(x)\) returns a binary output (-1 or 1). The coordinate \(x\) is a normalized source position with 100 equally spaced points spanning [0, 0.5] (to create a high-resolution Dammann grating grid from which the transducer array phases can be interpolated). The obtained function \(g(x)\) is replicated multiple times to generate a square matrix, \(G_{x}(x,y)=[g(x),g(x),...,g(x)]\), and the same kind of square matrix, \(G_{y}(x,y)\), is created along the \(y\) axis by repeating the process with the same transition points in the \(y\) axis. The Dammann grating for the whole array is given by \(H(x,y)=G_{x}(x,y)+G_{y}(x,y)\). In \(H(x,y)\), the locations with \(0,-2\), and \(2\) are replaced with values \(1,-1\), and \(-1\), respectively [20]. Then, the nearest point interpolation function is applied to \(H(x,y)\) to identify the phase at the point closest to each transducer in normalized form. Finally, \(+1\) and \(-1\) are replaced by \(0\) and \(\pi\), respectively, to create the Dammann lens (\(\phi_{Dammann}\)). A visual aid for the generation process is available in the supplementary, and codes to replicate the process will be in the Data Availability section. To determine the ultimate capability of Dammann gratings in the context of mid-air acoustics, we first consider an ideal case where the resolution of the phased array transducer is high (81 by 81 transducers with 2-mm pitch, 40 kHz) with two transition points (\(N=2\)).
Figure 2: Combination of transition points that yields “valid” multi-foci fields. The colour coding shows the number of foci (\(n_{p}\)). Valid fields have focal points with amplitude > 32.5% of the single focus and within \(-3\) dB (\(0.707p_{max}\)) of the highest focal point.
The pressure field
generated by the Dammann gratings is calculated using the Huygens' linear superposition method (\(p(\mathbf{x},\mathbf{x_{t}})=|\Sigma_{t}^{T}p_{t}(\mathbf{x},\mathbf{x_{t}})|\)), where \[p_{t}(\mathbf{x},\mathbf{x_{t}},\mathbf{x_{f}})=\frac{P_{A}}{R(\mathbf{x}, \mathbf{x_{t}})}e^{j(kR(\mathbf{x},\mathbf{x_{t}})+\phi_{Dammann}+\phi_{focal} (\mathbf{x_{t}},\mathbf{x_{f}}))} \tag{2}\] where \(P_{A}=1\), \(k=\frac{2\pi f_{0}}{c_{0}}\), \(\mathbf{x}\) and \(\mathbf{x_{t}}\) denote the field and transducer position, respectively. Furthermore, \(\phi_{focal}(\mathbf{x_{t}},\mathbf{x_{f}})=-\left(\frac{2\pi f_{0}}{c_{0}} \right)\left[d_{tf}(\mathbf{x_{t}},\mathbf{x_{f}})-||\mathbf{x_{f}}||\right]\) is the acoustic hologram for a single focus, and \(d_{tf}(\mathbf{x_{t}},\mathbf{x_{f}})=||\mathbf{x_{f}}-\mathbf{x_{t}}||\). Additionally, \(f_{0}\) and \(c_{0}=346\)ms\({}^{-1}\) denote the acoustic frequency and speed of sound in mid-air (\(\lambda=0.00865\)m), respectively. The acoustic pressure fields generated by Dammann gratings, with two transition points (\(x_{1}\) and \(x_{2}\)) incremented in 30 equally spaced points between 0 and 0.5, are shown in the supplementary material. The focal point was fixed at \((0,0,0.1)\) m, and the depicted pressure field is at \(z=0.1\) m. The pressure amplitude is normalized to the acoustic pressure amplitude (\(p_{max}\)) at the focal point when a single focus is specified. Given that \(x_{1}\leq x_{2}\), only the upper half of the combination matrix is applicable. An examination of each combination reveals that Dammann gratings can specify a wide range of acoustic pressure fields, and even subtle changes in \(x_{1}\) or \(x_{2}\) can drastically change the outcome of the field. Multiple focal spots with nearly equal pressure amplitudes are observed for certain transition point combinations. To identify useful Dammann gratings, criteria defining desired multi-focal field properties must be established. A useful multi-focal field is considered to have multiple focal spots with significant acoustic pressure and focal spots of nearly equal acoustic pressure strength. A peak finding algorithm was utilized to identify focal spots in the 2D acoustic pressure fields (x-y plane at z = 0.1 m) generated by the Dammann gratings. The algorithm initially detects all local maxima in the 2D field matrix, including those in flat regions. The local maxima are then sorted in descending order of acoustic pressure magnitude. The algorithm iteratively discards any local maxima that is within 5 mm of a higher pressure local maxima, retaining only the local maxima that are sufficiently separated and exhibit the highest acoustic pressures. Focus spot groups are considered to produce a "valid" multi-focal field when the maximum acoustic pressure at a focal spot (\(p^{peak}\)) is greater than 32.5% of the single focus pressure (\(0.325p_{max}\)). Additionally, the acoustic pressures at the focal spots are in the range between \(0.707p^{peak}\) and \(p^{peak}\)(within -3dB). These criteria ensure that the focal spots have significant acoustic pressures and are of nearly equal acoustic pressure strength, producing a useful multi-focal field. The combinations of transition points that yielded valid focal spots are summarized in Fig. 2. The majority of combinations were considered invalid (75.3%), but valid multi-focal fields with 4, 5, 8, 9 and 12 peaks (\(n_{p}\)) were obtained. 
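As a concrete illustration of the procedure above, the following Python/NumPy sketch builds the binary Dammann lens from Eq. (1), adds the single-focus hologram phase \(\phi_{focal}\), and evaluates the field via the Huygens superposition of Eq. (2). It is a simplified re-implementation under stated assumptions - in particular, each element's normalized coordinate is taken as its absolute distance from the array centre scaled into [0, 0.5], and amplitudes and directivity are idealized; the authors' reference code (linked in the Data Availability section) should be consulted for exact details.

```python
import numpy as np

f0, c0 = 40e3, 346.0                       # drive frequency [Hz], speed of sound [m/s]
k = 2 * np.pi * f0 / c0

def g(x, transitions):
    """Binary Dammann function of Eq. (1); transitions = [x1, ..., xN], x in [0, 0.5]."""
    pts = np.concatenate(([0.0], np.sort(transitions), [0.5]))
    out = np.ones_like(x)
    for n in range(len(pts) - 1):
        inside = (x >= pts[n]) & (x <= pts[n + 1])
        out[inside] = (-1.0) ** n
    return out

def dammann_lens(xt, yt, transitions):
    """Assign 0 or pi per element from the 2-D grating H = Gx + Gy.
    Assumption: normalized coordinate = |position| mapped into [0, 0.5]."""
    def norm(u):
        return 0.5 * np.abs(u) / np.max(np.abs(u))
    H = g(norm(xt), transitions) + g(norm(yt), transitions)
    binary = np.where(H == 0, 1.0, -1.0)          # {0, -2, 2} -> {1, -1, -1}
    return np.where(binary > 0, 0.0, np.pi)       # +1 -> 0, -1 -> pi

# 81 x 81 ideal array with 2 mm pitch, focal point at (0, 0, 0.1) m
pitch, n = 2e-3, 81
xt, yt = np.meshgrid((np.arange(n) - n // 2) * pitch, (np.arange(n) - n // 2) * pitch)
xt, yt = xt.ravel(), yt.ravel()
zt = np.zeros_like(xt)
xf = np.array([0.0, 0.0, 0.1])

d_tf = np.sqrt((xf[0] - xt) ** 2 + (xf[1] - yt) ** 2 + (xf[2] - zt) ** 2)
phi_focal = -k * (d_tf - np.linalg.norm(xf))                          # single-focus hologram
phi = dammann_lens(xt, yt, transitions=[0.138, 0.379]) + phi_focal    # n_p = 4 grating (Fig. 3(a))

def pressure(point):
    """Huygens superposition of Eq. (2) with P_A = 1 and ideal point sources."""
    R = np.sqrt((point[0] - xt) ** 2 + (point[1] - yt) ** 2 + (point[2] - zt) ** 2)
    return np.abs(np.sum(np.exp(1j * (k * R + phi)) / R))

print(pressure([0.0, 0.0, 0.1]))              # field magnitude at the nominal focal plane
```

Because the lens is defined entirely by the transition points, changing the focal pattern amounts to changing two scalars rather than re-running an optimizer.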
The percentages of fields with 4, 5, 8, 9, and 12 peaks were 18.3%, 3.23%, 1.29%, 1.72%, and 0.215%, respectively. Based on these groups, one grating with the highest pressure amplitude, \(p^{peak}\), was selected for each number of focal spots, \(n_{p}\). This narrowed down the selection to one grating for each \(n_{p}\). The results for \(n_{p}=4,8,9\), and 12 are shown in Fig. 3, where the white crosses indicate the locations of the identified focal points (see the supplementary material for \(n_{p}=5\)).
Figure 3: Optimal multi-focus gratings that satisfy the set criteria. The transition points are shown as \((x_{1},x_{2})\), where \(x_{1}\) and \(x_{2}\) denote the normalized source positions. (a) Four focal points with transition points \((0.138,0.379)\), (b) 8 focal points with \((0.103,0.241)\), (c) 9 focal points with \((0.0862,0.483)\), and (d) 12 focal points with \((0.0172,0.224)\)
The phase profiles and 3D visualizations of the acoustic field for each grating are shown in the Supplementary Material. Although the fields with \(n_{p}=4\) and 12 (Fig. 3 (a)-(d)) appear similar to each other, this is inevitable due to the relatively simple filtering process. The gratings with a large number of focal points are of interest as they can potentially serve as "array-generating" multi-focal fields for experimental automation in biology, chemistry, and medicine, effectively replacing standing wave fields [8]. Until now, an ideal PAT with high spatial resolution has been assumed, but the identified traps should translate well into a conventional phased array. Specifically, a 16 by 16 phased array with Murata MA40S4S transducers (40 kHz, 10-mm diameter, \(p_{0}=0.221\) Pa m V\({}^{-1}\) at 20 V) was assumed. A directivity function (\(D(\theta)=\frac{2J_{1}(kr\sin\theta)}{kr\sin\theta}\)) was added to simulate piston-source transducers and the results are as shown in Fig. 4. In principle, the acoustic traps, as simulated in Fig. 3 (a)-(d), are well recovered with the PAT. However, the field is less focused and has more ambient noise than the ideal case. In particular, the difference in the pressure amplitude between peaks is evident in Fig. 4 (b) and (d). We further note that \(n_{p}=5\) in the supplementary material does not fully recover the same pressure distribution as that in Fig. 4. However, despite these challenges, the conventional PAT is sufficiently resolved to generate Dammann gratings. One of the distinguishing advantages of the Dammann grating, compared to hologram optimization methods, is the capability to translate the multi-focus field simply by changing the single-focus hologram, without requiring re-optimization of the field. This is due to the fact that the Dammann grating is a binary phase hologram with 0 and \(\pi\), and it creates phase singularities [1]. The translation capability of the multi-focus lens using \(n_{p}=4\) is shown in Fig. 5 (a)-(b), and the mean pressure amplitudes at the peaks were 1948, 2260, and 2686 Pa at \(-5\lambda\), \(-2.5\lambda\), and 0 shift, respectively. The field can also be rotated by rotating the Dammann grating phase (using the "imrotate" function in MATLAB; the applied phase is shown in the supplementary material). The field rotations by \(\frac{\pi}{8}\) and \(\frac{\pi}{4}\) radians are shown in Fig. 5 (c)-(d), respectively. The mean pressure amplitude at the foci stays relatively constant, with pressure amplitudes of 2209 and 2531 Pa. These characteristics are shared with trap lenses (such as the twin, vortex, and bottle trap lenses).
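The grating selection shown in Figs. 2-4 relies on the peak-finding and validity criteria described earlier (local maxima separated by at least 5 mm, the strongest retained peak exceeding \(0.325p_{max}\), and foci counted within \(-3\) dB of the strongest peak). The following Python sketch is an illustrative reconstruction of that filtering step, not the exact implementation used in the study; details such as tie-breaking at the grid edges may differ.

```python
import numpy as np

def find_foci(field, xs, ys, p_max, min_sep=5e-3):
    """Peak finding + validity check for a 2-D pressure map on an (ys, xs) grid.
    Returns the focal-spot coordinates within -3 dB of the strongest peak,
    or None if the field does not meet the significance criterion."""
    # local maxima: strictly larger than all 8 neighbours (interior points only)
    interior = field[1:-1, 1:-1]
    neighbours = [field[i:i + interior.shape[0], j:j + interior.shape[1]]
                  for i in range(3) for j in range(3) if (i, j) != (1, 1)]
    mask = np.all([interior > nb for nb in neighbours], axis=0)
    rows, cols = np.nonzero(mask)
    rows, cols = rows + 1, cols + 1
    order = np.argsort(field[rows, cols])[::-1]            # strongest first
    kept_xy, kept_val = [], []
    for i in order:                                        # drop maxima within 5 mm of a stronger one
        pt = np.array([xs[cols[i]], ys[rows[i]]])
        if all(np.linalg.norm(pt - q) >= min_sep for q in kept_xy):
            kept_xy.append(pt)
            kept_val.append(field[rows[i], cols[i]])
    if not kept_val:
        return None
    kept_val = np.array(kept_val)
    p_peak = kept_val.max()
    foci = np.array(kept_xy)[kept_val >= 0.707 * p_peak]   # spots within -3 dB of the strongest
    return foci if p_peak > 0.325 * p_max else None        # significance criterion
```

Applying such a filter to every \((x_{1},x_{2})\) combination produces the map of "valid" transition-point pairs and the focus counts \(n_{p}\) summarized in Fig. 2.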
A characteristic that is not confirmed to be shared between the trap signatures and the Dammann grating is the ability to combine the lens with an optimized lens such as IBP [6] (see supplementary). This may be related to phase encoding limits as discussed by Memoli et al. [21]. However, further investigation is necessary. When multi-focus fields are generated using phased array transducers, a rule of thumb used to set a "sensible" target acoustic pressure amplitude involves assuming that the sum of the acoustic pressure amplitudes at each focus should not exceed the single-focus pressure amplitude \(p_{max}\) (i.e., each peak shares the acoustic pressure amplitude from the single focus amplitude). Thus, for a conventional multi-focus optimizer, the target normalized pressure amplitude for multiple foci with the same pressure amplitude is the reciprocal of the number of peaks (\(n_{p}\)) [14]. The examination of Fig. 3 reveals that the mean normalized pressure amplitudes are 0.325, 0.284, 0.287, 0.307, and 0.269 for \(n_{p}=4,5,8,9\), and 12, respectively. This demonstrates that a higher target pressure amplitude can be set than the rule of thumb suggests. This is an important insight into the limits of PAT, and aids in designing more appropriate performance tests for acoustic hologram optimizers. Although, in this paper, relatively simple Dammann gratings with two transition points (\(N=2\)) were investigated, the number of transition points can be further increased.
Figure 4: Dammann grating applied to a 16x16 phased array transducer. The configurations are the same as Fig. 3: (a) 4 focal points, (b) 8 focal points, (c) 9 focal points, and (d) 12 focal points. The focal spots are less defined due to the lower resolution of the phased array.
Hence, this still leaves the possibility of discovering Dammann gratings with large \(n_{p}\). The limitation of the Dammann grating is imposed by the spatial resolution. For the PAT, the limitation is the physical size of the transducers, and it can be improved by the application of metamaterials [21]. Application of Dammann gratings in underwater acoustics can also be envisioned. An acoustic field that can act in lieu of a standing wave shows potential for creating arrays for biology/chemistry [22], additive manufacturing, and medical applications [23; 24] where it is difficult to generate standing waves.
Figure 5: Translation and rotation of the four focal points (\(n_{p}=4\)) with a 16x16 phased array transducer. (a)–(b) show horizontal translation of the focal points by shifting the focal position by \(-2.5\lambda\) and \(-5\lambda\), respectively. (c)–(d) show rotation of the Dammann grating and corresponding rotation of the focal points by \(\pi/8\) and \(\pi/4\) radians, respectively.
In summary, this paper presents a method that uses Dammann gratings to simply and effectively generate multi-focal acoustic fields. The findings of the study provide new insights into multi-focal acoustic pressure fields and indicate that Dammann gratings are promising for applications requiring PAT. ## Supplementary Material See the supplementary material for a visual guide and the pressure field output for all combinations of the Dammann grating, the \(n_{p}=5\) field, the holograms and 3D visualizations of the acoustic field for the selected gratings, the holograms for translation/rotation, and the attempt with the IBP optimizer.
## Acknowledgement We gratefully acknowledge the support of AI tools, OpenAI's GPT-4, and Anthropic's Claude. The authors have diligently reviewed and verified all generated outputs to ensure their accuracy and relevance. We would like to thank Editage [[http://www.editage.com](http://www.editage.com)] for editing and reviewing this manuscript for English language. ## Conflict of Interest The authors have no conflicts to disclose. ## Author Contributions Tatsuki Fushimi: Conceptualization; Methodology; Software; Validation; Visualization; Writing. ## Data Availability The data that supports the findings of this study are available within the article and its supplementary material. Codes are openly available in Github at [[https://github.com/DigitalNatureGroup/Dammann_Grating_Acoustics](https://github.com/DigitalNatureGroup/Dammann_Grating_Acoustics)].
2307.12485
Web3.0 Security: Privacy Enhancing and Anonym Auditing in Blockchain-based Structures
The advent of Web 3.0, underpinned by blockchain technologies, promises to transform the internet's landscape by empowering individuals with decentralized control over their data. However, this evolution brings unique security challenges that need to be addressed. This paper explores these complexities, focusing on enhancing privacy and anonymous auditing within blockchain structures. We present the architecture of Web 3.0 based on the blockchain, providing a clear perspective on its workflow and security mechanisms. A security protocol for Web 3.0 systems, employing privacy-preserving techniques and anonymous auditing during runtime, is proposed. Key components of our solution include the integration of privacy-enhancing techniques and the utilization of Tor for anonymous auditing. We discuss related work and propose a framework that meets these new security requirements. Lastly, we offer an evaluation and comparison of our model to existing methods. This research contributes towards the foundational understanding of Web 3.0's secure structure and offers a pathway towards secure and privacy-preserving digital interactions in this novel internet landscape.
Danyal Namakshenas
2023-07-24T02:33:34Z
http://arxiv.org/abs/2307.12485v1
# Web3.0 Security: Privacy Enhancing and Anonym Auditing in Blockchain-based Structures ###### Abstract The advent of Web 3.0, underpinned by blockchain technologies, promises to transform the internet's landscape by empowering individuals with decentralized control over their data. However, this evolution brings unique security challenges that need to be addressed. This paper explores these complexities, focusing on enhancing privacy and anonymous auditing within blockchain structures. We present the architecture of Web 3.0 based on the blockchain, providing a clear perspective on its workflow and security mechanisms. A security protocol for Web 3.0 systems, employing privacy-preserving techniques and anonymous auditing during runtime, is proposed. Key components of our solution include the integration of privacy-enhancing techniques and the utilization of Tor for anonymous auditing. We discuss related work and propose a framework that meets these new security requirements. Lastly, we offer an evaluation and comparison of our model to existing methods. This research contributes towards the foundational understanding of Web 3.0's secure structure and offers a pathway towards secure and privacy-preserving digital interactions in this novel internet landscape. ## 1 Introduction Web 3.0 has recently become one of the most active areas of research and investigation in the digital world [1]. With the advent of Web 3.0, which encompasses the semantic web and decentralized technologies, cybersecurity has emerged as a critical focus area [2]. The concept of a decentralized web seeks to radically transform our digital interactions, introducing both unique challenges and opportunities for ecosystem security. Web 3.0, a potential future iteration of the internet, leverages public blockchains primarily recognized for enabling cryptocurrency transactions. The allure of Web 3.0 lies in its decentralization, wherein users bypass corporate intermediaries like Google, Apple, or Facebook to access the internet, instead personally owning and managing their internet segments. Essentially, Web 3.0 utilizes a fresh suite of blockchain-based technologies that aid in the creation of decentralized web applications, granting users command over their identity, content, and data. Looking more deeply into Web 3.0, blockchain is at its heart and is a key enabler for delivering Web 3.0 services. Indeed, the blockchain protocol is the foundational layer of Web 3.0 since blockchain, by nature, can provide security and privacy in any environment [3]. Many applications and technologies, such as the IoT (Internet of Things), SDN (Software-Defined Networking), and NFV (Network Functions Virtualization), are combined with blockchain to tackle security threats [4; 5]. It is also generally accepted these days that security is one of the most critical issues, and it will remain a crucial problem due to the broad scope of Web 3.0 applications. Therefore, in Web 3.0 security, a vulnerability refers to anything a hacker can leverage to exploit the protocols in this area. For example, this might be something in the blockchain structure, a flaw in the underlying code, or a misuse of blockchain features and characteristics. Also, in moving beyond Web 2.0, Web 3.0 should resolve many of the inherent vulnerabilities in Web 2.0 technology. However, this process is not completely painless, as Web 3.0 brings with it its own set of vulnerabilities and inherits many of the problems of Web 2.0 [6].
However, Web 3.0 promises to provide some unique features like protecting identity and data rights by allowing users to be completely anonymous. This is well seen in cryptocurrencies, where user wallets and transactions, whilst fully visible on the blockchain, are not connected to their identity. Therefore, blockchain technology is one of the foundations of Web 3.0 and should support Web 3.0 goals while users might not even notice it. Indeed, people will not care about the underlying infrastructure and its security; they focus on the services and protocols which are used. Additionally, the security of Web 3.0 can hinge on the vulnerabilities of smart contracts. These are programs or scripts operating on the blockchain, adhering to predetermined rules when certain conditions are met. If a smart contract's code is susceptible to attacks, it can have grave consequences on Web 3.0 services. This issue is further complicated by the absence of legal precedents safeguarding smart contracts. Thus, in many instances, it's impossible to insure or recover potential losses, such as cryptocurrencies and NFTs, in the event of a cyber attack. One of the recent new and hot areas for research and investigation in our digital world is related to Web 3.0. With the advent of Web 3.0, which encompasses the semantic web and decentralized technologies, cybersecurity has emerged as a critical focus area [7]. As the decentralized web aims to revolutionize how we interact with the digital realm, it brings forth new challenges and opportunities in terms of securing the ecosystem [8, 9]. Our motivation at this point is shaped by the advent of Web 3.0, necessitating the exploration of alternative blockchain platforms and protocols that meet emerging security demands. Consider the Oasis Network, which identifies itself as "the first scalable, privacy-centric blockchain." Its Oasis Protocol facilitates 'data tokenization,' a feature that ensures user control over data usage. The organization asserts that this capability will pave the way for more user-friendly Decentralized Finance (DeFi) applications. Findora is another blockchain platform and protocol that merges "transactional privacy" with selective information disclosure to regulators and auditors. This distinguishes it from privacy-oriented cryptocurrencies, or 'privacy coins' like Monero, often the go-to for top-tier ransomware criminals due to its untraceability. Findora secured an "eight-figure" investment round last year, and in October, it introduced a 100 million fund to strengthen its developer community. [10]. In other words, motivated by these items, we move forward that should design a security protocol for Web 3.0 during getting services from blockchain-based structures. To be secure and remove privacy issues, in Web 3.0 scope, it is necessary to utilize security protocols that provide more privacy and anonymity, especially in blockchain environments [11]. An auditing feature in Web 3.0 scope an excellent tool to assess a user's operations and ensure that the records are as accurate as possible. While information sourced externally and internally should have a privacy-preserving feature [12]. As a result, our goal is secure the entire Web 3.0 architecture during getting services from blockchain layers that proposed protocol provide professional privacy, security, and audit [13]. Despite the considerable attention Web 3.0 has garnered, precise definitions and design outlines are still largely lacking. 
Some studies have examined consensus-level aspects, but they haven't given a comprehensive view of other equally vital components and architectural designs intrinsic to Web 3.0. This lack of clear definitions and consensus suggests that Web 3.0 is either a concept overhyped without practical development or has multiple potential directions for growth. This paper bypasses broad discourse, instead focusing on the architecture of Web 3.0 and its interplay with blockchain. To summarize, the paper makes the following contributions: * Design and present Web 3.0 architecture based on blockchain structure. The workflow and security mechanisms of Web 3.0 has been clear. * Proposed a security protocol for Web 3.0 system that utilizes privacy-preserving and anonym auditing in the run-time. * Applying privacy enhancing techniques as * Using Tor for anonym auditing Following is a breakdown of the rest of the paper. In Section 2, we discuss work related to blockchain and Web 3.0. Section 3 describes the Web 3.0 System Architecture. In Section 4, the Anonym auditing in Web 3.0, and in Section 5, Evolution of Web 3.0. Section 6 presents the integration of Web 3.0 with New Technology, and conclusions are drawn, and research is suggested going forward. ## 2 Related work Web3 has become a dominant concept from 2020 onwards, significantly stimulating the growth of the Internet of Value and Metaverse. However, there still lacks robust protocols, security methodologies, and standards in this field. This section will scrutinize the most pertinent research concerning Web3 protocols and security mechanisms in blockchain-based environments. The study in [14] introduces a Web3 protocol deemed secure if a user can retrieve the correct state and transaction on the blockchain anytime post-block confirmation. The security model is predicated on a strong blockchain, with security assured by persistence and liveness. Persistence refers to uniform views among different nodes at a specific block height, while liveness focuses on the finality of a block within the valid longest chain. We disregard other chain structures like the directed acyclic graph (DAG) [15]. In another research, the scholars in [16] illustrate a potential foundation for Web3, elucidating its core components, design principles, and the expansive Web3 design space. They propose that any Web3 execution can be explicated through three primary elements: tree, ledger, and cloud, which adhere to a set of Web3 architectural principles, hence creating a comprehensive Web3 design space. Additionally, [17] describes a Security Protocol for distributed IoT Microservice, where they apply Web 3.0 technologies in an IoT setting and address its security vulnerabilities using robust security design practices. As explained in Wickstrom et al.'s research [18], smart contracts are executed on the Ethereum Virtual Machine (EVM) where user and device authentication and authorization take place. Given the immutable nature of smart contracts, it allows for the creation of a permanent activity log for the protocol. They ensure the privacy of network devices by avoiding the storage of geographical or physical device information in its contracts. In his work, Yang [19] discusses Timed-Release Encryption in Web3 and provides an Efficient Dual-Purpose Proof-of-Work Consensus, encrypted via an asymmetric key encryption scheme on a blockchain. 
In a related work [20], Marcus contemplates the influence of Web3 on Privacy and Personal Data Management, conducting a thorough analysis of various Decentralized Cloud Storage (DCS) solutions and a case study of interactions with a basic social media application in a Web3 context. In their study, the authors in [21] propose a distributed protocol for data aggregation enhancing privacy, founded on blockchain and homomorphic encryption. On the other hand, Uzair et al. [22] put forward a Scalable Protocol for Managing Trust in the Internet of Vehicles with Blockchain, employing smart contracts, physically unclonable functions (PUFs), certificates, and a dynamic Proof-of-Work (dPoW) consensus algorithm. In another investigation [23], the authors delve into Smart Contract Deployment on the Ethereum Platform using Web3.js and Solidity with Blockchain, concluding with a methodological approach for the creation, deployment, and interaction of smart contracts using Node.js, the Web3 library, and Infura API. ## 3 Web 3.0 System Architecture The Web 3.0 system architecture is designed to be open, allowing developers to build applications that are compatible with various devices and systems [24; 25]. It also integrates various services. Additionally, Web 3.0 System Architecture supports the development of user-generated content and the seamless integration of social networks and other communication protocols. This allows for developing applications that can be accessed from various devices and systems. Based on Web 3.0 concepts and its security goals, we design an architecture for the Web 3.0 system, as presented in Fig.1. This system architecture shares between users and blockchain infrastructure with the Web 3.0 protocol to deliver Web 3.0 services. In this regard, Fig 1 also illustrates the workflow of the Web 3.0 system.The processing and storage of user data in Web 3.0 is executed in a decentralized and community-controlled network via open protocols, moving away from a centralized TTP [26]. An enticing feature of Web 3.0 is its system of immediate rewards, ensuring users receive an equitable proportion of revenue when they contribute to the network. The proposed system structure is logically segmented into four distinct layers: blockchain, application, client, and wallet. The blockchain layer provides enduring ledger storage via mainnet nodes, and its smart contracts deliver the capability to access and modify on-chain data. Distributed storage solutions such as IPFS and Swarm incentivize clients with blockchain tokens for providing distributed storage services. This layer comprises single, homogeneous, and heterogeneous blockchains. The application layer implements the necessary business and interaction logic for the client layer and centrally stores data that does not require on-chain documentation. This layer encompasses various Web3 systems and employs adaptively scalable technology to "upgrade" the underlying blockchain of Web3, thereby augmenting the usability of Generic-NFT [27]. The client layer presents web browser interfaces for operations such as minting, buying, and selling NFTs. Every client connects to the blockchain using a web wallet like MetaMask and employs it for signing client transactions. Clients are tasked with handling user requests. For a minimal amount of requests, traditional blockchain systems can engage a single browser as the client to interact with the wallet. 
However, when there's a sharp increase in requests within a short timeframe, an agent is necessitated to manage the sudden surge of requests. A browser-based wallet provides an intuitive method for users to adopt Web3 services. Users only need to add an extension tool to their browser and import their private key into this embedded wallet. When visiting a Web3-supported website, users can directly connect the wallet, and any functions clicked on the website will invoke the back-end methods through APIs under the user's account. An agent-based wallet, on the other hand, facilitates batch processing during high-density user request scenarios. Similar to traditional Web1/Web2, users should initially grant a trusted agent with appropriate permissions. The authentication procedure is initiated once users formally register with the agents. Protocols are essential to the security of Web 3.0. They act as a set of rules governing how data is transferred between networks, ensuring that data remains secure and confidential. Protocols provide a secure connection between the client and the server when sensitive data is being transmitted. Protocols also provide authentication, which is a way of verifying the user's identity. Authentication requires users to enter a unique username and password that is only known to them. This prevents unauthorized access to data and protects against malicious attacks. Additionally, protocols allow for data encryption, ensuring that the data is unreadable or unusable by anyone other than the intended recipient. This prevents attackers from intercepting data, as they won't be able to decipher the encrypted information. Without protocols, the web would be much less secure, as attackers could easily access and use sensitive information. In the following, we clarify the entities participating in Web 3.0 security protocols. ### Privacy Enhancing Blockchain transactions empower users to manage their data via private and public keys, thereby giving them ownership. This technology prevents third-party intermediaries from misusing or gaining unauthorized access to the data. When personal information is stored on the blockchain, the owners can dictate the conditions under which third parties can access it. Moreover, blockchain ledgers intrinsically come with an audit trail that guarantees the validity of transactions. Web 3.0 deploys privacy-enhancing technologies with the intent to safeguard user data and privacy while simultaneously facilitating secure and personalized online interactions.One method for achieving this is through the use of decentralized identity systems. Decentralized identity systems use blockchain technology to enable individuals and organizations to own and control their own digital identity. This is in contrast to traditional centralized identity systems, in which a single entity, such as a government or corporation, holds and controls the data. One example of a decentralized identity system is the Identity Hub. The Identity Hub allows users to store their personal information, such as name, address, and date of birth, on a decentralized platform. This information is then encrypted and stored on the blockchain, making it secure and immutable. Users can then choose to share their personal information with third parties, such as websites or online services, through the use of identity credentials. 
These credentials, which can be in the form of a digital certificate or a blockchain-based token, allow users to prove their identity without revealing all of their personal information. Decentralized identity systems also offer the ability to manage and revoke access to personal information. If a user no longer wishes to share their information with a particular party, they can simply revoke the credentials, effectively cutting off access to their data. Another privacy enhancing technology in Web 3.0 is the use of zero-knowledge proofs. Zero-knowledge proofs allow one party, known as the prover, to prove to another party, known as the verifier, that they possess certain knowledge or information without revealing the actual knowledge or information. For example, a user may want to prove that they are over the age of 18 to a website without revealing their actual age. Through the use of a zero-knowledge proof, the user can prove that they are over 18 without revealing their actual age. Zero-knowledge proofs have the potential to greatly enhance privacy in online transactions, as they allow individuals to prove their identity or qualifications without revealing sensitive personal information. In addition to decentralized identity systems and zero-knowledge proofs, Web 3.0 also introduces the concept of self-sovereign identity. Self-sovereign identity refers to the idea that individuals should have full control and ownership over their own digital identity and personal data.
Figure 1: Workflow of A Web3 System
This concept is based on the principle that individuals should have the right to determine how their personal information is collected, used, and shared. Self-sovereign identity systems allow individuals to store their personal information on a decentralized platform and control access to it through the use of identity credentials and smart contracts. Overall, privacy enhancing technologies in Web 3.0, such as decentralized identity systems, zero-knowledge proofs, and self-sovereign identity, offer the potential to greatly enhance privacy and security online while still allowing for personalized and secure experiences. ### Formulate Privacy Privacy is a fundamental and intricate concept, quintessential for maintaining the balance of personal freedom and societal interplay in our information-rich society. Privacy can be formulated as the right of an individual or group to exclude information about themselves, and thereby express themselves selectively. Its importance spans from the social dimension to personal, ethical, political, and legal facets. In formal terms, privacy is multidimensional, embracing various aspects like information privacy, physical privacy, and decisional privacy. Information privacy refers to the rights an individual has to control the collection and usage of their personal information. Personal information encompasses both Personally Identifiable Information (PII) and non-PII, which include any information that can be used directly or indirectly to identify an individual. Physical privacy, on the other hand, pertains to an individual's right to maintain their physical space and personal property free from intrusion. It covers the right not to be subjected to unlawful searches or seizures and the right to bodily integrity. Decisional privacy refers to the individual's right to make personal choices without governmental intervention. This includes making decisions about one's own body, family, and lifestyle without outside interference.
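Returning to the zero-knowledge proofs mentioned earlier in this section, the following is a minimal, illustrative Schnorr-style proof of knowledge written in Python (interactive, honest-verifier, with toy parameters chosen only for readability). It shows how a prover can convince a verifier that it knows a secret \(x\) behind a public value \(y=g^{x}\bmod p\) without revealing \(x\). This is a pedagogical sketch rather than the construction used by any particular Web 3.0 platform; real deployments use standardized groups, non-interactive (Fiat-Shamir) variants, and audited libraries.

```python
import secrets

# Toy parameters: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

# Prover setup: secret x and public value y = g^x mod p (e.g., an identity key)
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

# 1) Prover commits to a random nonce
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# 2) Verifier sends a random challenge
c = secrets.randbelow(q)

# 3) Prover responds; the response reveals nothing about x on its own
s = (r + c * x) % q

# 4) Verifier checks g^s == t * y^c (mod p)
print(pow(g, s, p) == (t * pow(y, c, p)) % p)   # True: knowledge of x proven, x never sent
```

The same interaction pattern (commit, challenge, respond, verify) underlies the credential-style proofs discussed above, e.g., demonstrating an attribute such as "over 18" without disclosing the attribute's value.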
Within the sphere of a data-centric world, privacy often gravitates around principles such as data minimization, purpose limitation, storage limitation, accuracy, integrity, and confidentiality. Data minimization stresses the collection of only the minimal personal data that's necessary for a particular objective. Purpose limitation necessitates that personal data is gathered for well-defined, explicit, and legitimate purposes, and not processed in a manner incompatible with those purposes. The principle of storage limitation suggests personal data should not be retained in an identifiable form for longer than needed for the purposes for which the data is processed. The principle of accuracy underscores the importance of maintaining personal data that is both accurate and up-to-date, while the principle of integrity and confidentiality ensures that personal data is handled in a manner that provides sufficient security. This includes protection against unauthorized or illegal processing, as well as against accidental loss, destruction, or damage. ### Privacy Preserving Privacy preserving is a critical concern in the digital age, which deals with the protection of individual or collective information in an environment of proliferating digital data. In a formal sense, privacy preserving techniques are methodologies designed to protect personal or sensitive data from unauthorized access or disclosure, while still allowing useful computations on the data. This concept is instrumental in fields like data mining, machine learning, and cloud computing, where vast amounts of data are analyzed to glean insights while still maintaining the privacy of the individuals represented in the data. Differential privacy stands out as a significant technique for preserving privacy. It provides a mathematically-backed method to measure privacy leakage. By adding noise to data query results, it ensures that an individual's data presence or absence doesn't greatly influence the output, thereby providing robust privacy assurances. Other techniques for preserving privacy encompass homomorphic encryption and secure multi-party computation. Homomorphic encryption permits operations to be executed on encrypted data without the need for decryption. Secure multi-party computation, on the other hand, enables multiple parties to perform computations on their combined data, all the while keeping their individual inputs hidden from each other. Furthermore, anonymization and pseudonymization are key strategies in privacy preservation. Anonymization involves the complete removal of personally identifiable information from data sets, making the identification of individuals impossible. Pseudonymization, however, replaces identifiers with pseudonyms, permitting the identification of individuals under certain conditions. ## 4 Anonym Auditing in Web 3.0 Anonymous auditing is a critical element of ensuring privacy compliance and confidentiality in a data-driven society. It pertains to the inspection and evaluation of systems, processes, or data sets to ensure that privacy laws and standards are being met, while the identity of the individuals remains concealed. Auditing anonymously incorporates techniques that preserve privacy to ensure that the identity of data subjects remains undisclosed during the audit. Techniques such as k-anonymity, l-diversity, and t-closeness are utilized to keep the anonymity of individuals in the dataset intact, yet provide valuable information for the audit. 
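As a small illustration of the noise-addition idea behind differential privacy described above, the sketch below releases a counting query under \(\epsilon\)-differential privacy using the Laplace mechanism (a count has sensitivity 1). This is a generic textbook example in Python, not a mechanism proposed in this paper, and the dataset and parameter values are purely hypothetical.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a count with epsilon-differential privacy via the Laplace mechanism."""
    true_count = sum(1 for record in data if predicate(record))
    sensitivity = 1.0          # adding or removing one record changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: number of users older than 18, released with epsilon = 0.5
ages = [17, 22, 35, 41, 16, 29, 53]
print(laplace_count(ages, lambda a: a > 18, epsilon=0.5))
```

Smaller values of \(\epsilon\) inject more noise and therefore give stronger privacy guarantees at the cost of accuracy, which is the trade-off an anonymous audit over such released statistics has to account for.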
K-anonymity safeguards each individual in a released dataset by making them indistinguishable from at least k-1 other individuals. L-diversity takes it a notch higher by ensuring that within each group of k-indistinguishable individuals, there are at least 'l' diverse and "well-represented" sensitive attributes. T-closeness furthers the concept by requiring that the distribution of a sensitive attribute within any equivalence class closely mirrors the attribute's distribution in the complete dataset. ### Formulate Auditing Defining auditing within a formal framework requires a clear understanding of its fundamental purpose and methodologies. Essentially, auditing is a standalone, objective assurance and consulting activity aimed at enhancing and refining an organization's operations. By employing a systematic, disciplined approach, it aids an organization in achieving its goals through the evaluation and enhancement of risk management, control, and governance processes. When applied to information systems, auditing is founded on several essential principles. The first is the principle of independence, which asserts that the auditor must maintain an objective distance from the process, system, or organization under audit to guarantee an unbiased assessment. The second principle is evidence-based reporting. This principle obliges auditors to root their findings and conclusions in the evidence collected throughout the audit process. This ensures the reliability of the conclusions and their ability to withstand critical examination. The third is the principle of relevance. The audit must focus on those aspects that are relevant to the audit scope and objectives, and have potential to improve the organization's operations. The formulation of auditing also encompasses the use of various audit methodologies and tools. These may include control self-assessment, risk assessment, benchmarking, and data analysis. In the digital age, auditors also use advanced technologies such as artificial intelligence, machine learning, and data analytics to analyze large volumes of data and detect anomalies or trends. In the context of data privacy, auditing is formulated around privacy laws and regulations such as the General Data Protection Regulation (GDPR) in Europe, the California Consumer Privacy Act (CCPA) in the United States, and others. Auditors evaluate the organization's compliance with these laws and regulations, and assess the effectiveness of the organization's data protection controls. This may include evaluating the organization's data privacy policies, procedures, and practices, as well as the technical and organizational measures in place to protect personal data. ### Auditing in Web 3.0 The inherent transparency and immutability of blockchain technology make it a viable tool for auditing and verifying transactions in Web 3.0 systems. The term "audit" traditionally refers to an official examination of an organization's accounts or systems, typically by an independent body. However, in the context of Web 3.0, the concept of auditing extends beyond just financial transactions. Auditing in Web 3.0 systems can include the review and verification of smart contract execution, compliance with decentralized governance protocols, and the confirmation of data integrity within decentralized applications (dApps). One of the critical benefits of auditing in Web 3.0 is the potential for "real-time auditing," where transactions and activities on the blockchain can be monitored and verified continuously. 
This is facilitated by the public availability of all transactions on the blockchain, which allows auditors, regulators, and users to view and confirm transactions as they are added to the blockchain. Furthermore, with the integration of zero-knowledge proofs and other cryptographic tools, auditing in Web 3.0 can provide assurance of the integrity and authenticity of transactions without compromising the privacy of the parties involved. This presents a significant advancement over traditional auditing methods, which often involve intrusive inspections and potential privacy risks. In addition to these benefits, Web 3.0 auditing could also support decentralized governance models. Many blockchain platforms and Web 3.0 applications are implementing decentralized governance protocols, where token holders or network participants can vote on proposals and decisions regarding the network. Auditing plays a crucial role in these governance models by verifying the legitimacy of votes and enforcing compliance with the agreed-upon protocols. To leverage the benefits of Web 3.0 auditing, various tools and frameworks have been developed. For instance, blockchain explorers allow users to view and track transactions on the blockchain, while smart contract analysis tools can help in the detection of vulnerabilities and the verification of smart contract behavior. Also, decentralized auditing platforms are emerging, which aim to provide independent auditing services for blockchain-based systems and dApps. However, despite its potential benefits, auditing in Web 3.0 also presents several challenges. For one, the complexity and technical nature of blockchain technology and smart contracts can make the auditing process difficult for those without specialized knowledge. Moreover, while transparency is a key feature of blockchain technology, it also raises privacy concerns that need to be addressed. The integration of privacy-preserving technologies, like zero-knowledge proofs, could help alleviate these concerns, but these technologies are still in their early stages and may introduce additional complexity. As Web 3.0 systems continue to evolve, the role of auditing will become increasingly significant. Not only can auditing provide assurances of integrity and compliance in these systems, but it could also play a vital role in supporting decentralized governance models and facilitating the wider adoption of Web 3.0 technologies. ## 5 Evolution of Web 3.0 The onset of Web 3.0 has engendered transformative digital experiences, facilitating semantic data interoperability and collaborative networks. In conjunction with these advancements, Web 3.0's security dynamics are evolving to accommodate new paradigms of privacy and anonymity, particularly in blockchain-based structures. This evolution encompasses privacy-enhancing techniques and anonymized auditing measures that fortify the confidentiality and integrity of information exchanges. Privacy in Web 3.0 is no longer a mere supplement; it is a foundational necessity. The decentralization inherent to the blockchain technology imbues the Web 3.0 with a fundamental shift from centralized data repositories to peer-to-peer networks. This scenario necessitates robust privacy-enhancing techniques that protect users' sensitive information without jeopardizing the collaborative essence of Web 3.0. 
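Before turning to the specific privacy-enhancing techniques, the real-time auditing idea discussed above can be made concrete with a small sketch: each audit record commits to the hash of the previous record, so an auditor can re-walk the chain at any time and detect retroactive tampering. This is a toy model of the append-only ledger property, assuming made-up record fields, and is not an implementation of any particular blockchain.

```python
import hashlib, json, time

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, data: dict) -> None:
    """Append an audit record that commits to the hash of the previous record."""
    prev = entry_hash(log[-1]) if log else "0" * 64
    log.append({"ts": time.time(), "data": data, "prev": prev})

def verify_log(log: list) -> bool:
    """Auditors can continuously re-verify the chain to detect tampering."""
    for i in range(1, len(log)):
        if log[i]["prev"] != entry_hash(log[i - 1]):
            return False
    return True

log = []
append_entry(log, {"tx": "vote", "proposal": 42, "choice": "yes"})
append_entry(log, {"tx": "transfer", "amount": 10})
print(verify_log(log))           # True
log[0]["data"]["choice"] = "no"  # retroactive tampering
print(verify_log(log))           # False
```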
Differential privacy, homomorphic encryption, and secure multi-party computation have emerged as paramount techniques that enable computations on encrypted data, mitigate privacy leakage, and allow collaboration while preserving privacy. Differential privacy introduces algorithmic noise into data queries to guarantee that the output is minimally affected by the inclusion or exclusion of an individual's data. Homomorphic encryption allows data to remain encrypted while computations are performed, obviating the need for decryption and thereby reducing the risk of privacy breaches. Secure multi-party computation enables multiple parties to collaborate on data analysis without revealing individual inputs, fostering a shared information environment that upholds individual privacy. Concurrently, the ascendance of blockchain technology in Web 3.0 ushers in an era of transparent, immutable transactions. While this transparency fosters trust and accountability, it could potentially infringe on users' privacy if their identities are associated with their blockchain transactions. Anonymity becomes a vital counterweight to maintain privacy, prompting the development of anonymized auditing methodologies. Anonymous auditing in blockchain-based structures involves inspecting and validating transaction integrity and compliance with privacy standards while keeping the identities of the parties involved concealed. Techniques such as k-anonymity, l-diversity, and t-closeness are implemented to provide statistical guarantees of privacy. These methodologies ensure that sensitive attributes within datasets are well-represented and that the distribution of a sensitive attribute in any equivalence class closely approximates the overall dataset distribution. In essence, the evolution of Web 3.0 security is characterized by a dynamic interplay between privacy enhancement and anonymous auditing within the blockchain ecosystem. The objective is to construct a secure, collaborative, and privacy-preserving environment where users can engage with digital services confidently. As we continue to traverse the terrain of Web 3.0, these security measures will become increasingly sophisticated and integral to the architecture of the decentralized web. ## 6 Integration of Web3.0 with New Technology Web 3.0, also known as the semantic or decentralized web, is designed to be more secure, private, and efficient than its predecessor. It incorporates cutting-edge technologies like blockchain, machine learning, and Internet of Things (IoT) devices to create a more streamlined and personalized online experience. Among the key technologies poised to play a significant role in the development and operation of Web 3.0 are Software-Defined Networking (SDN), Federated Learning (FL), and IoT. ### Software-Defined Networking (SDN) The core principle of SDN lies in its separation of the control plane from the data plane. This separation allows network administrators to shape traffic from a centralized control console without having to touch individual switches within the network [28]. The control plane, responsible for deciding how packets should be forwarded, communicates with the data plane that carries out these decisions, enabling a more dynamic and responsive network architecture [29]. In the context of Web 3.0, this separation becomes crucial. As the number of devices and applications within the network grows, the complexity of managing data flow increases exponentially.
SDN, with its centralized control, can manage this complexity effectively, ensuring that data packets are routed optimally, reducing latency, and improving the overall performance of the network [30]. Moreover, SDN's programmability extends beyond simple traffic management. It can be used to implement sophisticated network functions, such as load balancing, intrusion detection, and firewalling, directly into the network infrastructure [31]. These functions can be crucial in a Web 3.0 environment, where security and reliability are paramount. By integrating these functions into the network control, SDN can provide a robust and secure infrastructure for the decentralized Web 3.0 applications [30; 32]. Furthermore, the abstraction provided by SDN can be beneficial for blockchain networks. By abstracting the network infrastructure, SDN allows blockchain nodes to focus on their core tasks, such as transaction verification, without worrying about the underlying network conditions. This abstraction can lead to more efficient use of resources and improved performance of the blockchain network [31]. In conclusion, SDN's unique features of control-data plane separation, direct programmability, and infrastructure abstraction make it an ideal choice for managing the complex and dynamic nature of Web 3.0 and blockchain networks. Its integration can lead to optimized data pathways, efficient traffic management, and enhanced network performance, thus facilitating faster transaction verification and increasing overall network throughput [30; 32]. ### Federated Learning (FL) FL's decentralized nature aligns well with the principles of Web 3.0, where data ownership and control are distributed among users rather than centralized authorities. In traditional machine learning models, data from all sources is collected and processed in a central location, which can lead to potential privacy breaches and misuse of data [33]. FL, on the other hand, keeps the data on the original device and only shares model updates, significantly reducing the risk of data leakage [34]. Moreover, FL is not just about privacy. It also offers benefits in terms of efficiency and scalability. By training models on the edge devices where data is generated, FL reduces the need for data transmission, which can be a significant bottleneck in large-scale machine learning applications. This can lead to faster model training and lower communication costs, making FL a more efficient and scalable solution for machine learning in a Web 3.0 environment [35]. FL also enables more personalized and accurate models. Since the models are trained on local data, they can capture the unique patterns and characteristics of the data at each node. This can lead to more personalized models that can provide better predictions for each user, enhancing the user experience in Web 3.0 applications [36]. Furthermore, the integration of FL with blockchain technology can further enhance the security and privacy of the system. Blockchain can provide a transparent and immutable record of model updates, ensuring that no malicious changes are made to the model. This can increase the trustworthiness of the FL system and encourage more users to participate in the learning process, further improving the performance of the model [37]. FL is a promising approach for machine learning in a Web 3.0 environment. 
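The aggregation step at the heart of this federated setup can be illustrated with a small FedAvg-style sketch: each client runs a few local update steps on its own data, and only the resulting weights are sent to the server, which averages them weighted by local data size. This is a toy linear-regression example on synthetic data, not the training pipeline of any specific Web 3.0 system.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One client's update: a few steps of linear-regression gradient descent.

    Only the resulting weights leave the device; the raw (X, y) data never do.
    """
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

def federated_average(client_results):
    """Server-side aggregation: average client weights, weighted by data size."""
    weights, sizes = zip(*client_results)
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
for _ in range(10):                  # communication rounds
    results = []
    for _ in range(4):               # four participating clients with local data
        X = rng.normal(size=(20, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)
        results.append(local_update(global_w, X, y))
    global_w = federated_average(results)
print(global_w)                      # approaches the true weights [1, -2, 0.5]
```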
Its ability to train models on decentralized nodes while preserving privacy, reducing communication costs, and providing personalized predictions makes it an ideal solution for the challenges of Web 3.0. By integrating FL with other Web 3.0 technologies like blockchain and SDN, we can create a secure, efficient, and user-centric Web 3.0 ecosystem [36]. ### Internet of Things (IoT) IoT, or the Internet of Things, refers to a network of physical devices or "things" embedded with sensors, software, and various technologies aimed at connecting and sharing data with other devices and systems via the Internet [38; 39]. The fusion of IoT with Web 3.0 and blockchain technologies presents intriguing opportunities [40; 41]. For example, data generated by IoT devices can be logged on a decentralized ledger, enhancing both transparency and security. Blockchain can serve as a trustworthy and unchangeable platform for IoT devices to securely exchange information in a Web 3.0 setting, thereby guaranteeing the authenticity and integrity of the data. Moreover, smart contracts on the blockchain can be used to automate IoT operations. For example, a smart fridge could automatically order groceries when they're running low, or an IoT-connected car could self-execute leasing agreements. All these transactions can be recorded on the blockchain for transparency and non-repudiation, making IoT devices smarter and more autonomous. In conclusion, the integration of SDN, FL, and IoT with Web 3.0 can significantly enhance the efficiency, privacy, and functionality of the Web 3.0 environment. SDN can optimize the network performance, FL can provide privacy-preserving machine learning, and IoT can offer seamless connectivity between various devices, all under the secure and transparent umbrella of blockchain technology. ### Hardware Security for Web 3.0 While the integration of various technologies plays a crucial role in Web 3.0, it is essential to address hardware security considerations to ensure the overall integrity and resilience of the decentralized web. Hardware security encompasses measures and techniques designed to protect the physical components and underlying infrastructure that power Web 3.0. Here are some key aspects of hardware security in the context of Web 3.0: 1- Trusted Execution Environments (TEEs): TEEs are hardware-based security mechanisms that provide isolated execution environments for sensitive computations [42]. They offer secure enclaves where critical operations, such as cryptographic key [43] management and secure transaction processing, can be performed. TEEs protect against attacks targeting the integrity and confidentiality of data, ensuring that critical operations are shielded from unauthorized access. 2- Secure Hardware Components: In Web 3.0, the security of hardware components becomes paramount. This includes ensuring the authenticity and integrity of hardware devices, such as IoT sensors and gateways, by implementing secure boot processes, tamper-resistant designs, and secure element integration [44, 45]. Hardware components should be resistant to physical attacks, such as tampering, reverse engineering, and side-channel attacks, to safeguard the overall security of the Web 3.0 ecosystem. 3- Hardware-based Key Storage: As Web 3.0 heavily relies on cryptographic mechanisms to ensure privacy and secure transactions, the secure storage of cryptographic keys becomes crucial. 
Hardware Security Modules (HSMs) or specialized secure chips can be utilized to store and manage cryptographic keys securely. These hardware-based solutions provide tamper-resistant environments and protect against key extraction or unauthorized use. 4- Supply Chain Security: Ensuring the integrity of the hardware supply chain is critical to prevent the insertion of malicious components or tampering during manufacturing or distribution processes [46]. Implementing mechanisms to verify the authenticity and integrity of hardware components, such as utilizing trusted suppliers, secure manufacturing practices, and tamper-evident packaging, helps mitigate supply chain-related risks[37, 47]. 5- Hardware-based Attestation: Hardware attestation mechanisms allow for the verification and attestation of the integrity and identity of hardware devices. These mechanisms enable trust establishment and can be utilized to ensure that the participating hardware devices in Web 3.0 networks meet the desired security requirements and are not compromised [38]. 6- Hardware-based Random Number Generators (RNGs): RNGs are essential for various cryptographic operations in Web 3.0, such as key generation and digital signatures. Hardware-based RNGs provide a source of true randomness, which is more secure than software-based pseudo-random number generators. They are resistant to prediction and manipulation, thereby enhancing the security of cryptographic operations [48]. 7- Physical Unclonable Functions (PUFs): PUFs are unique features of hardware devices that are inherently random and irreproducible, even by the manufacturer. They can be used to generate device-specific cryptographic keys, providing a high level of security against cloning and reverse engineering attacks. PUFs can be particularly useful in IoT devices, which form a significant part of the Web 3.0 ecosystem [49]. 8- Hardware Security in Edge Computing: With the rise of edge computing in Web 3.0, ensuring the security of edge devices becomes crucial. This includes protecting the data stored and processed at the edge, securing the communication between edge devices, and ensuring the integrity of edge computations. Hardware-based security measures, such as TEEs and secure boot, can provide robust protection for edge devices [42, 50]. 9- Post-Quantum Cryptography Hardware: With the advent of quantum computing, traditional cryptographic algorithms that rely on the difficulty of factoring large numbers or solving discrete logarithm problems can be broken. Post-quantum cryptography aims to develop new algorithms that can resist quantum attacks. Implementing these algorithms in hardware can provide a higher level of security and performance [51]. 10- Hardware Trojans Detection: Hardware Trojans are malicious alterations to hardware components that can cause undesired effects, such as information leakage or system failure. Detecting these Trojans is challenging due to their stealthy nature. Techniques such as side-channel analysis, functional testing, and hardware attestation can be used to detect and mitigate Hardware Trojans [52]. Hardware security forms a critical pillar of the overall security framework for Web 3.0. By integrating robust hardware security measures, Web 3.0 can ensure the integrity, confidentiality, and availability of its services, providing a secure and trustworthy environment for users. 
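To give one concrete, deliberately simplified flavour of the attestation mechanism listed above, the sketch below has a device measure its firmware, authenticate the report, and has a verifier check freshness and compare the measurement against known-good digests. Real remote attestation (e.g., TPM- or TEE-based) relies on hardware-rooted keys and signed quotes; the shared-key HMAC and the firmware blobs here are illustrative stand-ins only.

```python
import hashlib, hmac, json

KNOWN_GOOD = {hashlib.sha256(b"firmware-v1.2").hexdigest()}  # golden measurements
ATTESTATION_KEY = b"device-shared-secret"                    # stand-in for a hardware-rooted key

def build_report(firmware_blob: bytes, nonce: str) -> dict:
    """Device side: measure the firmware and authenticate the report."""
    measurement = hashlib.sha256(firmware_blob).hexdigest()
    body = json.dumps({"measurement": measurement, "nonce": nonce}, sort_keys=True)
    mac = hmac.new(ATTESTATION_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def verify_report(report: dict, nonce: str) -> bool:
    """Verifier side: check authenticity, freshness, and a known-good measurement."""
    expected = hmac.new(ATTESTATION_KEY, report["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["mac"]):
        return False
    body = json.loads(report["body"])
    return body["nonce"] == nonce and body["measurement"] in KNOWN_GOOD

nonce = "challenge-123"
print(verify_report(build_report(b"firmware-v1.2", nonce), nonce))  # True
print(verify_report(build_report(b"tampered-blob", nonce), nonce))  # False
```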
The combination of software and hardware security measures will provide a comprehensive security solution that can withstand the diverse and evolving threats in the Web 3.0 ecosystem [49; 52]. By addressing hardware security considerations, Web 3.0 can build a robust foundation that enhances the overall security posture of the decentralized web. Combining software-level security measures with hardware-level protections ensures a holistic approach to safeguarding critical operations, data confidentiality, and the trustworthiness of the infrastructure supporting Web 3.0 applications and services. ## 7 Conclusion In conclusion, the emergence of Web 3.0, backed by blockchain technology, stands to reshape the digital world by shifting power dynamics and offering more privacy and control to users. However, it also brings with it an array of unique security challenges that require comprehensive solutions. This paper shed light on these issues, explicitly defined Web 3.0 architecture based on the blockchain, and proposed a security protocol designed to enhance privacy and support anonymous auditing. Our work presented the practical design and operation of Web 3.0, with a focus on overcoming its inherent security challenges. The proposed security protocol emphasized privacy preservation and anonymous auditing, vital factors in securing Web 3.0 platforms. We also demonstrated the application of privacy-enhancing techniques and the use of Tor for anonymous auditing in our proposed model. The comparative analysis further validated the efficacy of our approach as compared to existing methods. However, as the landscape of Web 3.0 is still evolving, so too must the security solutions we propose. Future work will focus on refining and expanding this protocol as the field continues to evolve. As we transition from Web 2.0 to 3.0, the importance of security and privacy will only grow, and we believe our research contributes significantly to the continuous dialogue in this space.
2308.02013
Federated Representation Learning for Automatic Speech Recognition
Federated Learning (FL) is a privacy-preserving paradigm, allowing edge devices to learn collaboratively without sharing data. Edge devices like Alexa and Siri are prospective sources of unlabeled audio data that can be tapped to learn robust audio representations. In this work, we bring Self-supervised Learning (SSL) and FL together to learn representations for Automatic Speech Recognition respecting data privacy constraints. We use the speaker and chapter information in the unlabeled speech dataset, Libri-Light, to simulate non-IID speaker-siloed data distributions and pre-train an LSTM encoder with the Contrastive Predictive Coding framework with FedSGD. We show that the pre-trained ASR encoder in FL performs as well as a centrally pre-trained model and produces an improvement of 12-15% (WER) compared to no pre-training. We further adapt the federated pre-trained models to a new language, French, and show a 20% (WER) improvement over no pre-training.
Guruprasad V Ramesh, Gopinath Chennupati, Milind Rao, Anit Kumar Sahu, Ariya Rastrow, Jasha Droppo
2023-08-03T20:08:23Z
http://arxiv.org/abs/2308.02013v2
# Federated Representation Learning for Automatic Speech Recognition ###### Abstract Federated Learning (FL) is a privacy-preserving paradigm, allowing edge devices to learn collaboratively without sharing data. Edge devices like Alexa and Siri are prospective sources of unlabeled audio data that can be tapped to learn robust audio representations. In this work, we bring Self-supervised Learning (SSL) and FL together to learn representations for Automatic Speech Recognition respecting data privacy constraints. We use the speaker and chapter information in the unlabeled speech dataset, Libri-Light, to simulate non- IID speaker-siloed data distributions and pre-train an LSTM encoder with the Contrastive Predictive Coding framework with FedSGD. We show that the pre-trained ASR encoder in FL performs as well as a centrally pre-trained model and produces an improvement of 12-15% (WER) compared to no pre-training. We further adapt the federated pre-trained models to a new language, French, and show a 20% (WER) improvement over no pre-training. Guruprasad V Ramesh\({}^{1}\), Gopinath Chennupati\({}^{2}\), Milind Rao\({}^{2}\), Anit Kumar Sahu\({}^{2}\), Ariya Rastrow\({}^{2}\), Jasha Droppo\({}^{2}\)\({}^{1}\)University of Wisconsin-Madison \({}^{2}\)Amazon Alexa USA [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] **Index Terms**: representation learning, automatic speech recognition, federated learning, self-supervised learning ## 1 Introduction Federated Learning (FL) [1, 2] offers collaborative training of models on decentralized data distributed across multiple devices. In FL, model updates from the participating devices are shared to a central server without compromising on user data privacy, as personal data remains intact on client devices. A wealth of speech data is available on client devices such as smartphones and voice assistants like Alexa and Siri. This data can produce robust speech models for ASR and other downstream speech tasks. The audio data is unlabeled in nature due to the lack of reliable transcripts, which is more challenging in FL, to learn from this data in supervised fashion. Alternatively, we can learn robust representations using self-supervised learning (SSL), which are later fine-tuned with limited transcribed data. SSL attempts in speech [3, 4, 5, 6, 7] show the efficacy of such a two-stage training strategy. Here, we exploit FL, which offers the privacy of audio on the devices while producing effective speech transcription models. In this paper, we combine SSL and FL, to learn speech representations for ASR. Our strategy is two-stage: first, we use the 50K hours of unlabeled monolingual (English) speech corpus, Libri-Light [8] to pre-train a Recurrent Neural Network Transducer (RNN-T) [9] encoder with the FL algorithm, FedSGD [1]; second, we fine-tune RNN-T (with the above pre-trained encoder) on a limited amount of transcribed audio data. We simulate non-identical and independently distributed (non-IID) data using the speaker and chapter information of the utterances. Hereafter, this data setup is referred to as _speaker-siloed_ data, which is used to pretrain the LSTM encoder in a Contrastive Predictive Coding (CPC)[3] framework across a set of clients using FL. We show that the federated pre-trained models are similar in performance to that of the centrally pre-trained models. We further show the efficacy of FL pre-trained models in adapting to a foreign language, French. 
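As an aside before the description of central pre-training continues, the two siloing steps above can be sketched as follows. The utterance-ID format and helper names are illustrative assumptions for exposition, not the paper's actual data-loading code.

```python
from collections import defaultdict

def build_speaker_silos(utterance_ids):
    """Silo utterances by speaker, ordered by chapter (a proxy for time).

    IDs are assumed to look like 'speaker/chapter/utt'; the exact format is illustrative.
    """
    silos = defaultdict(list)
    for utt_id in utterance_ids:
        speaker, chapter, _ = utt_id.split("/")
        silos[speaker].append((int(chapter), utt_id))
    # Step 2: within each silo, sort by chapter so a client sees one speaker's
    # data in (approximate) temporal order.
    return {spk: [u for _, u in sorted(utts)] for spk, utts in silos.items()}

def assign_round(silos, speakers, round_idx, batch_size=8):
    """One federated round: client k receives the next batch from speaker k's silo."""
    return {f"C{k+1}": silos[spk][round_idx * batch_size:(round_idx + 1) * batch_size]
            for k, spk in enumerate(speakers)}

ids = ["19/198/0001", "19/198/0002", "19/227/0001", "26/495/0001", "26/495/0002"]
silos = build_speaker_silos(ids)
print(assign_round(silos, ["19", "26"], round_idx=0, batch_size=2))
```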
## 2 Related Work There are a handful of attempts in literature for applying FL in speech-related tasks. Some of these applications are: ASR [10, 11, 12, 13, 14], Keyword Spotting [15, 16], Emotion Recognition [17, 18, 16], and Speaker Verification [19]. Notably, for combining FL with SSL, the only available works include Federated self-supervised learning (FSSL) [20] for acoustic event detection and [21], where the challenges involved in combining FL & SSL due to hardware limitations on the client are highlighted and a wav2vec 2.0 [4] model is trained with FL on Common-Voice Italian data [22] and fine-tuned for ASR. In this paper, we study the use of FL for producing acoustic representations using Libri-Light 50K+ hours of audio data. Our contributions are: * To the best of our knowledge, we are the first to integrate FL for ASR at scale that produces competitive downstream ASR performance compared to regular pretraining. * Adapting monolingual federated pretrained models to a resource-constrained target language (French) resulting in an improvement of 20% relative WER. ## 3 Methodology ### Generating speaker-siloed data One of the challenges in training models in a federated setup is the diverse data distribution among the participating devices. In the context of speech assistants, the data on a client device often belongs to one speaker and have a specific linguistic and acoustic profile. Mimicking a similar setup using an open-source dataset like Libri-Light poses a challenge. Previous research attempts to integrate open-source audio data in a federated setup involved splitting the dataset based on the speaker information [21, 11, 23] but does not include a temporal notion for the data. Libri-Light data contains speakerID and chapterID information embedded in the unique identifier of each utterance. Here we additionally include a temporal notion by treating chapterID as a unit of time and create non-IID partitions as follows: 1. Given a speech corpus, silo them based on the speakerID. 2. Sort the siloed utterances based on the chapter information, serving as a proxy for temporal distribution. For example, with three clients, **C1**, **C2**, **C3**, in a round of federated pertaining, each of them contains batches that have only one speaker's utterances. **C1** contains utterances from Speaker **S1**-Chapter1, **C2** contains utterances from Speaker **S2**-Chapter1, and so on. As Libri-Light's speaker-to-utterance ratio is not uniform, the addition of this temporal notion using the chapter information ensures that different clients in a round contain data from different speakers and more closely represent a real-life scenario of federated training of speech models. This _speaker-siloed_ data is used in the federated pre-training experiments. Additionally, during pre-training, we ensure that the batches generated during training are from the same speaker. Another non-trivial difference between federated and centralized pre-training is, in FL we process data once as opposed to the possibility of multiple passes in centralized pre-training. This strict constraint is to simulate the non-availability of data on the clients after a certain time threshold. ### Stage 1: Pre-training We compare traditional centralized pre-training with the proposed federated pretraining. **Central pre-training**: Our baseline is a model trained using the standard centralized CPC framework as outlined in Appendix A.1. 
Here, data is gathered and shuffled at the cloud eliminating speaker-specific distributions and running multiple epochs to learn the representations. **Federated pre-training**: Here, we pre-train the model with the hyperparameters (non-FL specific) and the network architecture are the same as central pre-training. Each of the clients involved in a round of federated pre-training updates the model based on local data and sends back weight updates to a central server. The central server accumulates and aggregates the weights and broadcasts the updated model to the participating clients of the next round (Algorithm 1). Unlabelled client data is used only once in the training process as indefinite retention of data is infeasible, especially on resource-constraint devices. ### Stage 2: Fine-tuning To assess the performance of the pre-trained models, both centralized and federated, we initialize the RNN-T encoder with the pre-trained model. The RNN-T encoder in our setup follows the same architecture as the CPC model. The prediction network and joint network are randomly initialized. All layers are centrally fine-tuned with transcribed audio data. ## 4 Experiments ### Datasets We use Libri-Light (LL) dataset [8] for pre-training. LL contains three parts1: _small_, _medium_, and _large_, we consider the _large_ portion, that contains \(52,000\) hours of unlabeled speech data generated from \(6845\) speakers. In the centralized pre-training experiments, we use all the 52K hours of the data, for FL, the same data is arranged in non-IID format, _speaker-siloed_, for pre-training. We further study the impact of the amount of data used in FL pre-training, for that, we use, \(5K\) hours (\(LL-5K\)), a randomly selected subset from LL. In fine-tuning the pre-trained models, we use the \(960\)-hour train partition of the Librispeech [24]. The fine-tuned models are evaluated on the dev and test sets of Librispeech. In addition to these, we use the French data from the Multilingual Librispeech (MLS) dataset [25] in our resource-constrained language adaptation experiments. Table 1 summarizes the datasets. Footnote 1: [https://github.com/facebookresearch/libri-light/tree/main/data_preparation](https://github.com/facebookresearch/libri-light/tree/main/data_preparation) ### Training Details The CPC (RNN-T encoder) models in centralized and federated pre-training both consist of a \(3\)-layer \(512\)-unit feed-forward feature encoder with ReLU activation and a \(6\)-layer \(1024\)-unit unidirectional LSTM context encoder. The RNN-T model includes a \(1024\)-unit \(2\)-layer prediction network and a single dense joint layer. The inputs for both pre-training and fine-tuning models are constructed from \(256\)-dimensional STFT features obtained through a \(25\)ms window and \(10\)ms frame shift, combined into a final \(768\)-dimensional feature by concatenating three consecutive frames. All pre-training experiments are randomly initialized. The pre-trained model weights are used to initialize \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{c}{Duration(in hours)} \\ \cline{2-4} & Train & Dev & Test \\ & & (clean, other) & (clean, other) \\ \hline Libri-Light Large & 51934 & NA & NA \\ Librispeech & 960.9 & 5.4, 5.4 & 5.3, 5.1 \\ Multilingual & 1076.6 & 10 & 10 \\ Librispeech French & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Datasets used. 
For Librispeech, the clean and other splits are mentioned for dev and test.
Figure 1: Steps in obtaining speaker-siloed Libri-Light data for Federated pretraining
the RNN-T encoder in the ASR fine-tuning stage, and the prediction and joint networks are randomly initialized. The ASR model is trained with a \(2500\) sentence-piece [26] vocabulary. We used \(48\) V100 GPUs for training the models. During pre-training, in centralized training, we ran for \(130K\) steps with a bucket batch size of \([64,32,1]\). In FL, we vary the number of physical GPUs for the federated experiments based on the number of clients per training round involved, roughly keeping two clients per GPU and allowing a maximum batch size of \(8\) per client. The FedSGD models are trained for 22k rounds, i.e., one pass over the entire data. On each client, SGD with a unit learning rate is used, and Adam is applied at the server with a learning rate of \(1e^{-5}\). Fine-tuning on Librispeech is run for \(100K\) steps. ## 5 Results ### Federated versus Central Pre-training We show the efficacy of pre-trained (FL/central) audio representations. The RNN-T model is fine-tuned (with the pre-trained encoder) on the Librispeech \(960\) hour train data. Table 2 shows the WER of the fine-tuned models on the _dev_ and _test_ partitions. We compare the fine-tuned models to an RNN-T model trained without any pre-training, _from-scratch_, where the encoder is randomly initialized. We also study the impact of the amount of data on pre-training with LL-5k and LL-52k datasets. We find that all the pre-trained models (FL/central) performed better than the _from-scratch_ model. On average, the pre-trained models show a relative WER (WERR) improvement of \(11.3\)% and \(14.22\%\) on _dev_ and _test_ sets, respectively. We observe pre-training on the larger dataset (LL-52k, see Table 2) is better in both the cases of FL and central pre-training, similar to [4, 5, 6] where centrally pre-trained models show better ASR performance with more data. This confirms that the same observation holds when FL pre-training is used to produce audio representations. The key observations for FL vs. central pre-training include: i) the performance of the FL pre-trained models is similar to that of the central; ii) in fact, when pre-trained on a small amount of data (LL-5k), the FL pre-trained models perform better than central, with average WERR on _dev_ and _test_ sets of \(4.76\%\) and \(5.56\%\); iii) the observed superior/equal performance of the FL models is despite the single pass through the data in FL settings as opposed to multiple passes through the same data in central pre-training. We attribute the performance of the FL models to the hybrid approach of combining FL and SSL, which produces robust privacy-preserving speech representations that are useful for downstream ASR tasks. Finally, we experimented with \(48\) and \(70\) clients in FL during pre-training. There is an insignificant difference in the performance of the models between the two settings. However, the impact of the number of clients on the performance of the speech models can be explored in future work. ### Adaptation of SSL representations Self-supervised representations are beneficial for tasks with a shortage of labeled data. The adaptation of both multilingual and monolingual pre-trained models [27, 28, 29] demonstrated remarkable adaptability to other languages in speech. We explore the effectiveness of the speech representations learned from our approach (FL+SSL) in adapting to French speech.
We use the French data from Multilingual Librispeech [25], which consists of \(1070\) hours of train data. The data is randomly split into two sets: \(215\) hours (_train-215_) and \(855\) hours (_train-855_). We conduct two sets of experiments: direct fine-tuning and continued pre-training followed by fine-tuning. In direct fine-tuning, the pre-trained models from Libri-Light are directly fine-tuned using the _train-215_ data. In the other setting, we continue pre-training the LL pre-trained models using the _train-855_ unlabeled set and only then fine-tune them using the _train-215_ data. Table 3 shows the results on the MLS French dev and test sets. Overall, the pre-trained models perform better than from-scratch. The continued pre-training results are better than the direct fine-tuning experiments as the models first adapt to the language and then to the ASR task. Another observation is that the amount of pre-training data plays a crucial role in the target language ASR performance in both the fine-tuning experiments (direct and continued); see the significant difference in the performance of the LL-5k and LL-52k pre-trained models. FL pre-trained models are competitive with the centrally trained models in adapting to another language. ## 6 Conclusion We empirically demonstrated that FL models pre-trained in SSL style perform similarly to the centralized pre-training for the downstream ASR tasks. We employed the Contrastive Predictive Coding (CPC) framework with FedSGD at scale on a large unlabeled monolingual speech corpus, Libri-Light. The FedSGD pre-trained models also adapt to a new language, where continued pre-training on domain-specific language improves performance. In conclusion, we suggest that shifting traditional central pre-training of audio representations to FL-based pre-training is as effective as the central case. In the future, we plan to extend the work to broader speech corpora, such as multilingual audio datasets and closer to real \begin{table} \begin{tabular}{c c c} \hline \hline Pre-training & \multicolumn{2}{c}{Fine-tuning Word Error Rate (WER)} \\ \hline Setting & Dev-Clean/Other & Test-Clean/Other \\ \hline No pretraining & 6.84/17.63 & 7.24/18.42 \\ \hline C, LL-5k & 6.81/17.72 & 7.3/18.2 \\ FL48,LL-5k & 6.59/**17.02** & **6.80**/**17.42** \\ FL70,LL-5k & **6.43**/17.17 & 6.83/17.88 \\ \hline C, LL-52k & 5.79/**16.29** & 6.07/**16.21** \\ FL48,LL-52k & 5.83/16.9 & 6.16/16.34 \\ FL70,LL-52k & **5.77**/16.4 & **6.05**/**16.21** \\ \hline \hline \end{tabular} \end{table} Table 2: Central pre-trained and Federated pre-trained models with RNN-T fine-tuning on train-\(960\) hour Librispeech. WER results are on the dev and test sets of Librispeech. The no pre-training model (trained from-scratch on train-\(960\) hour) is the baseline. C refers to central pre-training, and FL refers to the Federated pre-trained models. The number next to FL indicates the number of clients used in each training round.
\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Experiment} & Source Pre-train & Dev WER & Test WER \\ & Setting & (WERR) & (WERR) \\ \hline No pre-training & - & 46.31 & 42.73 \\ \hline Direct & C, LL-52k & 37.8(18.37) & 34.6(19.02) \\ fine-tuning & FL48,LL-52k & 37.82(18.33) & 34.25(19.84) \\ & FL70,LL-5k & 43.27(6.56) & 39.69(7.11) \\ \hline Continued & C, LL-52k & **35.92**(**22.43**) & **32.6**(**23.70**) \\ pre-training & FL48,LL-52k & **37.63**(**18.74**) & **34.12**(**20.14**) \\ + fine-tuning & FL70,LL-5k & 41.48(10.42) & 37.1(13.17) \\ \hline \hline \end{tabular} \end{table} Table 3: Results of the language adaptation experiments on train-215 hour MLS French data. We compare no pre-training, direct fine-tuning, and continued pre-raining followed by fine-tuning. Boldface stands for the best pre-trained models. life federated speech corpora, explore more recent SSL frameworks based on self-attention, and characterize the impact of various FL settings, such as the number of clients participating in a training round and the impact of various FL algorithms. ## 7 Acknowledgements We acknowledge Aparna Khare and Minhua Wu in the initial discussions and the help to onboard to CPC and Anirudh Raju for his keen interest in the idea and insightful suggestions.
2302.03356
Data augmentation for battery materials using lattice scaling
A significant step forward in Lithium-ion batteries (LIBs) developments can only be achieved by proposing mold-breaking research based on selecting the best materials for the cell components, optimizing cell manufacture, anticipating the degradation mechanisms of the LIBs, and consolidating the regeneration processes of damaged batteries. LIBs with longer recycling life, better safety, and the ability to be reused will establish sustainable state-of-the-art batteries with maximum energy efficiency, low costs, and minimal CO2 emissions within a circular economy, promoting sustainability in areas as relevant as electromobility and portable electronics. Recently, there has been increasing interest in applying Artificial Intelligence (AI) techniques and their subclasses to better predict novel materials with designed properties. This collection of methods has already obtained considerable success, having been used to predict numerous physical properties of materials. However, compared to other fields, the materials data are typically much smaller and sometimes more diverse, which undoubtedly affects the construction and effectiveness of AI models. At present, several Data Augmentation (DA) methods have been proposed in materials science based on flipping, rotating, and distorting the unit cells of materials, which have been demonstrated to be very efficient in increasing the size and quality of data. Here we present an even more effective new method of Data Augmentation based on the lattice scaling of crystal structures. In the lattice scaling DA method, the unit cell is perturbed, undergoing an increase and decrease of the unit cell volume in an isotropic or anisotropic way. This transformation is particularly pertinent for battery components since volume changes of up to 5% have been reported for the insertion-based LIBs during cycling.
Eduardo Abenza, César Alonso, Isabel Sobrados, José M. Amarilla, Javier L. Rodríguez, José A. Alonso, Roberto G. E. Martín, Maria C. Asensio
2023-02-07T10:01:42Z
http://arxiv.org/abs/2302.03356v1
# Data augmentation for battery materials using lattice scaling ###### Abstract A significant step forward in Lithium-ion batteries (LIBs) developments can only be achieved by proposing mold-breaking research based on selecting the best materials for the cell components, optimizing cell manufacture, anticipating the degradation mechanisms of the LIBs, and consolidating the regeneration processes of damaged batteries. LIBs with longer recycling life, better safety, and the ability to be reused will establish sustainable state-of-the-art batteries with maximum energy efficiency, low costs, and minimal CO\({}_{2}\) emissions within a circular economy, promoting sustainability in areas as relevant as electromobility and portable electronics. Recently, there has been increasing interest in applying Artificial Intelligence (AI) techniques and their subclasses to better predict novel materials with designed properties. This collection of methods has already obtained considerable success, having been used to predict numerous physical properties of materials. However, compared to other fields, the materials data are typically much smaller and sometimes more diverse, which undoubtedly affects the construction and effectiveness of AI models. At present, several Data Augmentation (DA) methods have been proposed in materials science based on flipping, rotating, and distorting the unit cells of materials, which have been demonstrated to be very efficient in increasing the size and quality of data. Here we present an even more effective new method of Data Augmentation based on the lattice scaling of crystal structures. In the lattice scaling DA method, the unit cell is perturbed, undergoing an increase and decrease of the unit cell volume in an isotropic or anisotropic way. This transformation is particularly pertinent for battery components since volume changes of up to 5% have been reported for the insertion-based LIBs during cycling. *These authors contributed equally + Corresponding authors: Maria C. Asensio ([email protected]) and Roberto G.E. Martin ([email protected]) ## Introduction Materials are the keystones of every clean energy innovation in diverse domains, such as advanced batteries, solar cells, and low-energy semiconductors, among others. As the discovery and development of new materials currently imply studies over 10 to 20 years at a very high cost, finding appropriate materials is the bottleneck of the global transition to a low-carbon future [1, 2]. Recently, there has been an growing relevance in applying Artificial Intelligence (AI) and its related techniques [3, 4, 5, 6] to better predict novel materials with designed properties. This extensive collection of AI methods inspired by the outstanding success in text and image treatment is quickly and effectively spreading in materials science and engineering. Notably, in the field of materials development, considerable achievement has been attained in the context of the Materials Genome Initiative (MGI) [7, 8]. The goal to find optimized materials that are highly performant drives the development and application of well-accomplished and cost-effective Machine and Deep learning (ML and DL) [9] algorithms able to predict materials with preselected properties and remarkable performance. Both schemes, closed-loop active learning approaches [10], and generative DL models are being developed to get stable materials with target properties and accessible synthesis methods [11]. 
At the same time, material data repositories, which feed and enhance these Artificial Intelligence frameworks, enrich and multiply rapidly [12]. However, the databases describing materials are generally much smaller than those in the text and image fields. [13]. Particularly in batteries, they are relatively small and more diverse than in other traditional AI domains, which undoubtedly affects the construction and effectiveness of ML and DL models. The battery's components are successfully mastered by selecting materials that need a complete physicochemical characterization and a full electrochemical assessment. Obviously, the most straightforward way to fit this fundamental problem of size and homogeneity of databases is directly to collect more real data. However, that is not always feasible and usually costs time and effort. In addition, raw data in materials science, mainly experimental raw data, are typically vulnerable to noise, corruption, missing values and frequently suffer from inconsistent inputs. Hence, the issue related to missing data and the significant differences in the data type is always present, as the information is often collected through multiple sources like a diversity of theoretical calculations and a wide variety of experimental techniques [14, 15, 16, 17, 18]. In this context, the development of ML and DL approaches is closely dependent on methodologies able to perform compelling data pre-processing and data augmentation (DA) [19, 20], as poor datasets can essentially affect the accuracy and lead to incorrect predictions. That makes crucial the utilization of procedures that ensure a robust quality of the datasets. Moreover, another source of poor performance that can affect ML and DL models is the training data overfitting, which impairs the generalization ability of the models. DA is known to be a powerful technique to avoid overfitting and improving the performance of the models on unseen data [21]. Effectively, DA procedures can generate data for ML and DL models, decreasing the dependency of these methods on the size of the training data and improving the performance and precision of their results. Recently, several DA methods have been proposed in materials science based on flipping, rotating, and distorting the unit cells of materials, which have demonstrated to be very efficient at increasing the size and quality of the data [22]. They are considered an efficient way to perform data augmentation treatments without greatly distorting the original data, allowing a better-automated knowledge capture from the dataset by the AI algorithms, representing, processing, and learning from data. In this paper, we present a new method of Data Augmentation based on the lattice scaling of crystalline structures. In the lattice scaling DA method, the unit cell is perturbed by increasing and decreasing its volume in isotropic and anisotropic ways. This transformation is particularly pertinent for battery components since volume changes up to 5% have been reported for insertion-based rechargeable LIBs during the cycling processes [23][24, 25]. This DA can be used as a simple plug-in to traditional ML models such as Support Vector Machine (SVM) and more advanced DL architectures like Graph Neural Networks (GNNs). 
In this work, a simple machine learning pipeline is utilized to test the efficiency of the proposed lattice scaling DA method on a relatively small dataset of cathode active materials with different properties related to the material itself and to the performance of the battery that contains the material as its cathode and a metallic anode of the mobile ion. The Crystal Graph Convolutional Neural Network (CGCNN) model [26] is fed by the real and augmented data sets independently, and the results of both pipelines are compared. CGCNN learns properties of the materials based on their representation as a crystal graph. Each crystal graph is directly generated from the corresponding Crystallographic Information File (CIF) data of the considered material, which is a standard text file format for representing crystallographic information, promulgated by the International Union of Crystallography (IUCr). Figure 1 shows that the chemical structures of the real data described by individual CIF files are considered as input of the pipeline, followed by the automatic operation of the lattice scaling to generate the DA data set.
Figure 1: General framework describing the lattice scaling data augmentation method and the pipeline using this data augmentation for the predictive CGCNN model.
As indicated schematically in the figure, the CGCNN model is fed by the original chemical data and the data generated by the DA process as representative datasets of properties associated with batteries and battery component materials. The error metrics of these property predictions confirm that the lattice scaling DA method remarkably increases the effectiveness and performance of the CGCNN method when used for chemical and electrochemical properties prediction. Finally, the lattice scaling DA results are compared with other crystal structure DA methods already reported in the literature, including random perturbation, random rotation, random translation, and swap axes transformation, using precisely the same data sets and calculation procedures. To reinforce the benefits and promote the dissemination of our lattice scaling DA technique, we have made available an open-source library that can be widely and easily used by implementing a few lines of code. The Python-based package contains tutorials and example codes to test it and is available upon request to the corresponding authors of this article. ## Modeling Design, Motivation, and Methods In traditional data augmentation methods, the main goal is to expand a training dataset by using transformations able to preserve the knowledge of class labels of the original dataset before the transformation. Experts in particular AI domains are usually very efficient at finding the right DA transformation to enhance the ML model performance. We propose one DA transformation, which can be automatically applied to a materials dataset by slightly modifying the unit cell of the materials which constitute the original dataset. Some transformations, like volume variations produced by small bond distance changes, distortion of angles, or anisotropic bond distance modifications, are widespread in crystal structures and can be spontaneously realized. Hence, DA transformation based on slightly modifying the size and shape of the unit cells seems to be an effective and physically meaningful way to increase the performance of the property prediction [27].
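To make the crystal-graph representation mentioned above (and detailed further below) more tangible, the following is a simplified sketch of how a crystal graph could be built from a structure loaded from a CIF file with pymatgen: atomic numbers serve as node attributes and periodic pairwise distances below a cutoff serve as edges. The cutoff value and file name are illustrative, and the real CGCNN featurization additionally handles multiple periodic images, a fixed maximum number of neighbors, Gaussian-expanded distances, and learned element embeddings.

```python
from pymatgen.core import Structure

def crystal_graph(structure: Structure, cutoff: float = 5.0):
    """Build a simple crystal graph: atomic numbers as node features,
    periodic pairwise distances below `cutoff` (in Angstrom) as edges."""
    nodes = [site.specie.Z for site in structure]      # node attribute: atomic number
    edges = []
    n = len(structure)
    for i in range(n):
        for j in range(i + 1, n):
            d = structure.get_distance(i, j)           # shortest periodic-image distance
            if d <= cutoff:
                edges.append((i, j, d))                # edge attribute: bond distance
    return nodes, edges

structure = Structure.from_file("LiFePO4.cif")         # illustrative CIF file name
nodes, edges = crystal_graph(structure)
print(len(nodes), "atoms,", len(edges), "edges")
```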
This work uses the properties reported at the Battery Explorer - Materials Project platform [28], which is a customized search tool for Li-ion electrode materials. This platform offers information about existing materials and predicts the properties of new ones related to cathodes and anodes of LIBs. In it, relevant properties like the voltage, capacity, energy density, specific density, voltage profile, oxygen evolution, and lithium positive ion diffusivity, among others, describe the materials. This Database mainly integrates results from high-throughput computing research based on quantum mechanics, solid state physics and statistical mechanics, and selected experiments. This approach, led by the Ceder group, is widely described in the following scientific reports, [29, 30, 31, 32, 33].This work uses this database with and without the DA process to train the graph-based predictive model CGCNN. The selected input structures are materials with application in insertion-based batteries, such as metal-ion batteries, in the discharged state (i.e., where the cathode is "full" of the mobile ion). For the lattice scaling DA technique, the volume of the materials has been modified in different ranges of percentages. However, we consider that the most physically meaningful range of percentages has been from -5% to +5%. Performing the DA transformation with such a small volume variation is aimed at increasing the'safety' of the transformation. In data augmentation contexts, the safety of a transformation has been defined as the likelihood that the transformed data retains the same label as the original data [21]. While label preservation is not guaranteed, a change of \(\pm\) 5% in volume will be more likely to maintain the labels than a greater modification (_e.g.,_\(\pm\) 30%). Furthermore, volume changes up to 5% have been experimentally observed in insertion-based LIBs during the charge/discharge steps of their life cycle, supporting the notion that such small volume changes can preserve the labels of the data [23][24, 25]. In symmetric lattice scaling, the volume of the unit cell has been increased or decreased randomly (within limits), but the relationships between the three lengths of the unit cell (a,b,c) are kept constant. In asymmetric lattice scaling, unit cell lengths (a,b,c) have also been increased and decreased randomly (and therefore also the volume) but changing the values of a, b, and c independently. Results are shown for two cases where the maximum limits of volume variation were chosen to be 5% and 30%. To feed CGCNN, the crystalline systems are represented as graphs, where atoms constitute the graph's nodes. The bonds in these systems are represented by the edges between the nodes, where all atoms within an appropriate radius are considered bonded. Nodes and edges are associated with vector representations (attributes) that enhance the model training. In fact, node attributes encode chemical information about the chemical element in each node. In contrast, edge attributes encode the bond distance between the corresponding pair of atoms. The lattice scaling DA method slightly changes the interatomic distances. Hence the CGCNN graphs are only modified at the edge level, representing the bonds between atoms. Because the atomic bonds are based on the interatomic distance, some atoms might be considered bonded when they previously were not, or vice versa. Furthermore, the edge attributes will also be modified as the distance varies. 
These changes at the edge level are what make each augmented copy a distinct crystal graph, and are the reason why the lattice scaling method effectively increases the amount of data present in the dataset. For each initial material, there will be multiple different crystal graphs that will be processed by the CGCNN model. We have focused on the results of the recent high-throughput search on cathode materials. The CGCNN model has been applied to predict the seven electrochemical properties and four material properties described in Table 1. The model has learned from ab initio computational results available in Battery Explorer. Briefly, the properties of potential cathodes for LIBs are obtained through first-principles computations and high-throughput computational screening approaches described in previous reports [34, 35].

\begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline **Property category** & **Property** & **Description** \\ \hline Electrochemical & Average voltage & Amount of electrical potential a battery holds. Units: V. \\ \hline Electrochemical & Gravimetric capacity & Amount of electric charge an electrode material can store, normalized by weight. Units: mAh/g. \\ \hline Electrochemical & Gravimetric energy & The specific energy of a battery is a measure of how much energy a battery can store, normalized by weight. Units: Wh/kg. \\ \hline \end{tabular} \end{table} Table 1: List of the electrochemical and material properties predicted using the CGCNN model.

## Datasets

The initial dataset was formed by 4401 electrode active materials for metal-ion batteries, obtained from the Battery Explorer of the Materials Project database on 2021-12-21. To train a CGCNN model, this initial dataset is split into a training set, a validation set, and an evaluation set. The training set, composed of 80% of the materials of the initial set, is used to train the model directly. The validation set, composed of 10% of the materials, guides the training of the model. Finally, the remaining 10% of the materials form the evaluation set, which is used for the final evaluation of the trained model's performance. For each data augmentation technique, the initial training set was augmented, generating a new augmented training set. In each dataset, the initial training set was repeated 10 times, and the materials were subjected to the pertinent DA transformation. This results in a dataset in which, apart from the original materials present in the initial training set, there are ten new materials for each original material. The validation and evaluation sets remained non-augmented, ensuring a fairer evaluation of the technique. Four datasets have been generated for the lattice scaling augmentation technique.

* First, two datasets were generated with randomly sampled volumes between 95 and 105% of the volume of the original material. Regarding these two datasets, transformations were applied in both an isotropic and an anisotropic fashion. In isotropic augmentations, all lattice lengths change in the same proportion, while in anisotropic transformations the lattice lengths vary in different proportions.
* The remaining two datasets were generated with randomly sampled volumes between 70 and 130% of the volume of the original material. Transformations were also applied in an isotropic and anisotropic manner.

All lattice scaling transformations were applied using the Python library "pymatgen" [36]. In summary, the sizes of the training, validation, and evaluation sets before and after the lattice scaling DA are indicated in Table 2.
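A minimal sketch of the split-then-augment procedure described above (80/10/10 split, ten augmented copies per training material, validation and evaluation sets never augmented) could look as follows. It reuses `scale_anisotropic` from the previous sketch; the directory path and single random seed are illustrative.

```python
# Sketch of the dataset preparation: 80/10/10 split, then ten scaled copies per
# training material. Uses scale_anisotropic() from the previous sketch; the
# folder path and the seed are illustrative placeholders.
import glob
import random
from pymatgen.core import Structure

random.seed(0)  # one replicate; the paper repeats the whole procedure with three seeds

cif_paths = sorted(glob.glob("cif_files/*.cif"))  # placeholder folder holding the CIFs
random.shuffle(cif_paths)

n = len(cif_paths)
train_paths = cif_paths[: int(0.8 * n)]
valid_paths = cif_paths[int(0.8 * n): int(0.9 * n)]
eval_paths = cif_paths[int(0.9 * n):]

augmented_train = []
for path in train_paths:
    original = Structure.from_file(path)
    augmented_train.append(original)                    # keep the original material
    for _ in range(10):                                 # ten scaled copies per material
        augmented_train.append(scale_anisotropic(original, max_vol_change=0.05))

# Validation and evaluation materials are loaded as-is (never augmented).
valid_set = [Structure.from_file(p) for p in valid_paths]
eval_set = [Structure.from_file(p) for p in eval_paths]
```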
To compare the efficiency of the lattice scaling DA method presented in this work, we have carried out similar experiments from the same initial dataset described in Table 2, but employing four additional state-of-the-art DA transformations for crystalline chemical structures. These four DA transformations were (1) Random Perturbation, (2) Random Rotation, (3) Random Translate, and (4) Swap Axes. The same splits described in Table 2 have been used for all tested DA transformations. These transformations, performed using the Python library "AugLiChem", are briefly described as follows:

* Random Perturbation: This augmentation transformation perturbs all the sites of the crystalline compound unit cell by a short distance between 0 and 0.5 Å. This approach is very effective because, with such a small perturbation, the main symmetry elements of the system are usually radically changed.
* Random Rotation: In the Random Rotation DA transformation, all the sites in the crystal are randomly rotated between 0 and 360 degrees.
* Random Translate: In this augmentation transformation, only a few crystal sites are displaced by a distance ranging from 0 to 0.5 Å.
* Swap Axes: In this DA transformation, the coordinates of the sites in the crystal are swapped between two axes, for example, between the x- and y-axes. With this transformation, the locations of all the crystal sites are considerably displaced.

Additionally, we have also prepared a baseline dataset using oversampling, in which each instance of the initial training dataset has been repeated ten times, but without any DA transformation. The motivation for this baseline dataset is to have a dataset that is not transformed, but that contains the same number of training samples as the augmented datasets. Therefore, the DA techniques will be compared against this baseline dataset, so that the difference in performance cannot be attributed to differences in the amount of training data employed by the CGCNN model.

\begin{table} \begin{tabular}{l c c c} **Datasets** & **Training dataset** & **Validation dataset** & **Evaluation dataset** \\ & **N° of materials** & **N° of materials** & **N° of materials** \\ **Initial Dataset** & 3515 & 439 & 440 \\ **After Data Augmentation** & 38665 & 439 & 440 \\ \end{tabular} \end{table} Table 2: Overview of the datasets used as input for property prediction using the CGCNN model, with and without Data Augmentation. Augmented materials are used only for training.

In order to obtain a more robust performance evaluation, the entire process has been replicated three times for each method (e.g., for anisotropic lattice scaling 5%, three different datasets have been created). In each replicate, we have followed the procedure described above, only changing the random seed. This affects which materials belong to the training, validation, and evaluation sets; the augmentation transformations; and the training of the CGCNN model. The complete dataset generation is described in Figure 2.

**RESULTS AND DISCUSSION**

The enhancement in property prediction performance of the CGCNN model due to the lattice scaling DA has been evaluated by carrying out experiments to predict seven electrochemical properties of the electrode materials and four material properties based on the chemical structure datasets listed in Table 2. These eleven properties are documented in Table 1. The same procedure has been performed using the baseline dataset and the augmented datasets.
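The comparison experiments use the AugLiChem transformations as published. Purely as a stand-in illustration (not AugLiChem's API), the Random Perturbation transformation can be reproduced with pymatgen's built-in `perturb`, which displaces each site by a random vector; the `min_distance` argument, available in recent pymatgen versions, makes the per-site displacement length random between 0 and the 0.5 Å ceiling quoted above.

```python
# Stand-in illustration of the "Random Perturbation" transformation using
# pymatgen's Structure.perturb (the paper itself uses the AugLiChem library;
# this is not AugLiChem's API). The 0.5 Angstrom ceiling follows the text.
from pymatgen.core import Structure

def random_perturbation(structure: Structure, max_dist: float = 0.5) -> Structure:
    """Displace every site by a random vector of length between 0 and max_dist (Angstrom)."""
    s = structure.copy()
    s.perturb(distance=max_dist, min_distance=0.0)  # per-site random displacement length
    return s

perturbed = random_perturbation(Structure.from_file("cathode_material.cif"))  # placeholder path
```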
The first metric considered to evaluate the models is the Mean Absolute Error (MAE), defined as the mean of the errors (in absolute value) that the model makes when predicting the corresponding property. The lower the MAE value, the better the model behaves. In addition, the Root Mean Squared Error (RMSE) has been calculated as another performance metric. It is the square root of the mean of the squared errors that the model makes when predicting the respective property. A lower RMSE value indicates a better performance of the model.

Figure 2: Generation of the datasets for the Data Augmentation experiments. The initial dataset is split into training, validation and evaluation sets. The training set is repeated 10 additional times to form the baseline dataset and augmented 10 times with the application of different DA transformations to create the rest of the datasets. This procedure is replicated three times in total.

Furthermore, since each property has different associated units and variance, the Mean Absolute Deviation (MAD) of each predicted property has been calculated in the evaluation set to facilitate an unbiased comparison of the behavior of the model applied to properties of different natures. MAD measures how spread out the data is in the evaluation set, similar to the standard deviation; it is the mean of the absolute difference between all the data and the mean value of the corresponding property [37]. The value of MAD represents the MAE that would be obtained by a naive model that predicts the mean value of the dataset for any instance. Therefore, the MAD/MAE ratio has been evaluated, as it allows a better comparison between different properties. A model with a high MAD/MAE ratio (5 or more) is considered an excellent predictive model. Finally, all metrics and predictions have been obtained using the same CGCNN model architecture, but applying the different DA methods to the dataset.

Regarding the prediction of the electrochemical properties, Table 3 compares the lattice scaling DA methods and the previous state-of-the-art DA techniques with the baseline dataset. For the lattice scaling DA, the anisotropic lattice scaling (5%) improves the prediction of every property except for gravimetric capacity, for which this technique is neutral. The anisotropic lattice scaling (30%) improves every property prediction, except for minimum voltage, where it is neutral, and gravimetric capacity, where it worsens the performance. On the other hand, isotropic lattice scaling (5%) improves the prediction of every electrochemical property except for minimum voltage, where it is neutral, and gravimetric energy, where it performs worse than the baseline dataset. Lastly, isotropic lattice scaling (30%) only improves the predictions of gravimetric capacity, volumetric capacity and maximum voltage, worsening the predictions of the rest of the properties. For every electrochemical property except gravimetric energy, one or multiple lattice scaling DA methods give the best MAD/MAE improvement out of all the DA techniques, performing better than the state-of-the-art augmentations. We see remarkable improvements, up to 10.5% in the case of anisotropic lattice scaling (30%) applied to maximum voltage prediction. Regarding gravimetric energy, although anisotropic lattice scaling performs better than the baseline, the best results are obtained by the random perturbation DA method.
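For concreteness, the metrics used above (MAE, RMSE, MAD, and the MAD/MAE ratio) can be computed as in the short numpy sketch below; the arrays are toy values, not the paper's predictions.

```python
# Metrics used to evaluate the models (MAE, RMSE, MAD and the MAD/MAE ratio),
# computed here on toy arrays rather than the paper's actual predictions.
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mad(y_true: np.ndarray) -> float:
    """Mean absolute deviation of the evaluation targets around their mean."""
    return float(np.mean(np.abs(y_true - np.mean(y_true))))

y_true = np.array([3.2, 3.8, 4.1, 2.9, 3.5])   # e.g. average voltages in V (toy values)
y_pred = np.array([3.1, 3.9, 4.3, 3.0, 3.4])

ratio = mad(y_true) / mae(y_true, y_pred)       # >= 5 is considered an excellent model
print(f"MAE={mae(y_true, y_pred):.3f}  RMSE={rmse(y_true, y_pred):.3f}  MAD/MAE={ratio:.2f}")
```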
An additional model was also trained with the initial dataset, but it consistently showed a worse performance than the other methods, due to the fact that it contained 10 times less data than the baseline or the augmented datasets. It should be noted that, except for a few cases, the state-of-the-art DA techniques for crystalline materials perform worse than the baseline dataset, where we are simply oversampling 10 times the initial dataset without any additional transformation. This topic will be discussed later.

The same strategy has been followed to evaluate the effectiveness of the lattice scaling DA transformation on the enhancement of the materials' property prediction using the CGCNN model. Apart from the electrochemical properties of the electrode materials, the CGCNN models were also trained to predict four material properties: formation energy, energy per atom, Fermi energy and band gap energy. For formation energy, every DA technique performed worse than the baseline. For energy per atom, the impact of every DA technique was either neutral or positive, with isotropic lattice scaling (5%) providing the greatest improvement, followed by isotropic lattice scaling (30%). In the case of Fermi energy, both types of lattice scaling (5%) had a positive impact, while most of the state-of-the-art augmentation methods impaired the performance of the model. Finally, band gap energy improved the most with anisotropic lattice scaling (5%), having mixed results with the other techniques.

\begin{table} \begin{tabular}{l c c c c c c c} & **Average Voltage** (V) & **Gravimetric Capacity** (mAh/g) & **Gravimetric Energy** (Wh/kg) & **Maximum voltage** (V) & **Minimum voltage** (V) & **Volumetric capacity** (Ah/l) & **Volumetric Energy** (Wh/l) \\ Anisotropic Lattice Scaling (5\%) & 4.6\% & 0.5\% & 4.2\% & 3.1\% & **3.3\%** & **7.3\%** & **9.2\%** \\ Anisotropic Lattice Scaling (30\%) & 6.0\% & -1.1\% & 6.9\% & **10.5\%** & 0.7\% & 2.4\% & 7.8\% \\ Isotropic (5\%) & 2.5\% & 1.4\% & -2.2\% & 4.4\% & -0.3\% & 3.0\% & 7.8\% \\ Isotropic (30\%) & -3.0\% & **1.8\%** & 0.6\% & 2.4\% & -5.0\% & 4.8\% & -1.5\% \\ Random & -1.8\% & -3.8\% & -6.1\% & -2.4\% & -1.9\% & 1.6\% & 2.7\% \\ Random & -3.4\% & -7.8\% & **10.7\%** & 9.3\% & -0.2\% & -0.6\% & 4.3\% \\ Random Rotate & -8.1\% & -2.5\% & -3.1\% & -0.8\% & -8.9\% & 0.7\% & 1.8\% \\ Swap Axes & -7.5\% & -1.0\% & -7.2\% & -7.1\% & -12.8\% & -1.6\% & -2.2\% \\ \end{tabular} \end{table} Table 4: **Comparison of material property prediction with different techniques of Data Augmentation, expressed as the mean MAD/MAE improvement with respect to the baseline dataset.** For the MAD/MAE ratio, a higher metric is indicative of a better performance. For a recapitulation of the comparative metrics on the prediction of the CGCNN model for all methods (MAE, RMSE, and MAD/MAE ratio), see Tables S8-S11.

In summary, lattice scaling DA, and especially anisotropic lattice scaling with a maximum volume change of 5%, generally improves the performance of the CGCNN predictive model in comparison to the baseline dataset and to the other state-of-the-art DA techniques (with some exceptions). In most of the cases, the state-of-the-art DA techniques tend to perform worse than the baseline dataset.
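The tables report, for each augmented dataset, the mean MAD/MAE improvement over the baseline across the three replicates. A small sketch of that aggregation is shown below, assuming the improvement is the relative change of the MAD/MAE ratio; the ratios are toy numbers, not the paper's results.

```python
# Sketch of how the "mean MAD/MAE improvement with respect to the baseline"
# reported in Tables 3 and 4 can be aggregated over the three replicates.
# The ratios below are toy numbers, not the paper's results.
import numpy as np

# MAD/MAE ratios per replicate (higher is better)
baseline_ratios = np.array([4.1, 4.0, 4.2])
augmented_ratios = np.array([4.4, 4.3, 4.5])   # e.g. anisotropic lattice scaling (5%)

improvement_per_replicate = (augmented_ratios - baseline_ratios) / baseline_ratios * 100
mean_improvement = improvement_per_replicate.mean()
print(f"Mean MAD/MAE improvement vs. baseline: {mean_improvement:+.1f}%")
```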
In this study, we have used the default configuration for the different DA methods, which could be one of the reasons for this substandard performance. Using hyperparameter tuning to select the best configuration for each technique could be essential in improving their performance in material and electrochemical property prediction. Likewise, this tuning could be applied to our lattice scaling method, to further increase the predictive performance of the CGCNN model. It should be noted that the performance of each DA technique seems to be related to the property that is being predicted. The prediction of some material properties, such as formation energy, is always impaired when using data augmentation, independently of the transformation. On the contrary, properties such as volumetric energy improve with almost every DA transformation when comparing it to the baseline. By applying a data augmentation transformation while maintaining the same labels, we are biasing the model into considering that this particular transformation does not affect the predicted property. For instance, in the case of formation energy, we hypothesize that the volume of the unit cell and, in general, atomic displacements affect the final value of the property. Therefore, when applying any DA technique, the prediction would not improve for this property, because it depends on the parameters that we are modifying. The same reasoning could be applied to the other material and electrochemical properties. Furthermore, the initial dataset from Materials Project contains electrode active materials of different metal-ion batteries. A recent work on image data augmentation shows that certain augmentation techniques can enhance the prediction on certain classes, while impairing the prediction on other classes [38]. In this work, we evaluate the augmentation techniques without distinguishing between the different inserted metal ions.

Figure 3: Comparison of the mean MAD/MAE ratio between isotropic and anisotropic lattice scaling (5%), the Swap Axes DA transformation, and the baseline dataset.

However, it is known that insertion-based batteries with different metal ions experience different volume changes during their charge and discharge steps, as seen in the Materials Project database [39, 40, 41, 42]. Given the difference in volume changes, and the fact that lattice scaling works by modifying the volume of the materials, it would be interesting to analyze the performance of this technique on batteries with different mobile ions, to evaluate whether it depends on the type of ion. This knowledge could be useful for practical applications, to choose the method that best improves the prediction for LIBs rather than for metal-ion batteries in general. In conclusion, lattice scaling is a powerful technique to improve the prediction of some electrochemical and material properties. It could have many possible near-future applications in predicting electrochemical properties, as it can ensure substantial gains over other standard DA approaches, given that, with some exceptions, the standard state-of-the-art DA transformations produce a worse performance than simply oversampling the initial dataset.

## Data Availability

The data that support this work are available in the article and Supplementary Information file.
Further raw data can be found at the CPE database: [https://smartmaterial.hi-iberia.es/app/account/login/?next=/app/](https://smartmaterial.hi-iberia.es/app/account/login/?next=/app/) and are available from the corresponding author (RGEM or MCA) upon request.
2308.16066
Vortex Creep Heating in Neutron Stars
Recent observations of old warm neutron stars suggest the presence of a heating source in these stars, requiring a paradigm beyond the standard neutron-star cooling theory. In this work, we study the scenario where this heating is caused by the friction associated with the creep motion of neutron superfluid vortex lines in the crust. As it turns out, the heating luminosity in this scenario is proportional to the time derivative of the angular velocity of the pulsar rotation, and the proportional constant $J$ has an approximately universal value for all neutron stars. This $J$ parameter can be determined from the temperature observation of old neutron stars because the heating luminosity is balanced with the photon emission at late times. We study the latest data of neutron star temperature observation and find that these data indeed give similar values of $J$, in favor of the assumption that the frictional motion of vortex lines heats these neutron stars. These values turn out to be consistent with the theoretical calculations of the vortex-nuclear interaction.
Motoko Fujiwara, Koichi Hamaguchi, Natsumi Nagata, Maura E. Ramirez-Quezada
2023-08-30T14:42:29Z
http://arxiv.org/abs/2308.16066v1
# Vortex Crep Heating in Neutron Stars ###### Abstract Recent observations of old warm neutron stars suggest the presence of a heating source in these stars, requiring a paradigm beyond the standard neutron-star cooling theory. In this work, we study the scenario where this heating is caused by the friction associated with the creep motion of neutron superfluid vortex lines in the crust. As it turns out, the heating luminosity in this scenario is proportional to the time derivative of the angular velocity of the pulsar rotation, and the proportional constant \(J\) has an approximately universal value for all neutron stars. This \(J\) parameter can be determined from the temperature observation of old neutron stars because the heating luminosity is balanced with the photon emission at late times. We study the latest data of neutron star temperature observation and find that these data indeed give similar values of \(J\), in favor of the assumption that the frictional motion of vortex lines heats these neutron stars. These values turn out to be consistent with the theoretical calculations of the vortex-nuclear interaction. Introduction A neutron star (NS) is incredibly dense and exists under extreme conditions of pressure and temperature that cannot be found in other places in the universe. While the internal structure of NSs remains elusive, indirect evidence suggests the existence of a neutron superfluid in the inner crust. The first indication of superfluidity in NSs came from the observation of non-zero pairing energy associated with attractive forces, leading to the formation of an energy gap and hence superfluidity [1]. This scenario was predicted to occur in stars with neutron cores [2]. After the discovery of pulsars, the energy gap in NS matter has been further studied; see, _e.g._, Refs. [3, 4, 5] for a recent review on superfluidity in NSs. In a rotating NS, the irrotational property of a superfluid requires the formation of vortex lines, whose distribution determines the angular velocity of the superfluid component. In the inner crust region, these vortex lines are fixed at certain positions by the interactions with nuclei and cannot move freely. This preserves the rotational speed of the superfluid component and prevents it from following the slowdown of the pulsar rotation, giving rise to the deviation in the rotational speed between the superfluid and other components. This deviation increases until the vortex lines are forced to move by the Magnus force, which increases as the difference in the rotational speed increases. This vortex-line dynamics leads to some observational consequences. A well-known example is the pulsar glitch phenomenon, namely, sudden changes in the rotational frequency of NSs,1 which could be attributed to an avalanche of unpinning of superfluid vortex lines [10, 11]. Another phenomenon is the heating effect caused by the friction associated with the creep motion of vortex lines [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], which is the subject of the present paper. Footnote 1: See Refs. [6, 7, 8, 9] for a recent review on pulsar glitches. Our motivation to revisit this heating mechanism is provided by the recent observations of old warm NSs [23, 24, 25, 26, 27, 28, 29], whose observed temperature is considerably higher than that predicted in the standard NS cooling scenario [30, 31, 32, 33, 34, 35]. This work aims to study whether these observations can be explained by the vortex-creep heating effect. 
With this objective in mind, we focus on the following characteristic property of the vortex-creep heating. As we see below in detail, the heating luminosity in this heating mechanism is proportional to the time derivative of the angular velocity of the pulsar rotation, and the proportional constant is determined only by the NS structure and the vortex-nuclear interactions. As a result, the value of this proportional constant, denoted by \(J\) in this paper, is almost universal over NSs. In addition, we can obtain the value of \(J\) for an old NS by observing its temperature and its pulsar motion since, at late times, the heating luminosity balances with the luminosity of photon emission, which is determined by the surface temperature of the NS. As it turns out, the present data of old warm NSs indeed show similar values of \(J\) for these stars, in agreement with the prediction of the vortex-creep heating scenario. We also find that these values are compatible with the \(J\) parameter evaluated from the calculations of the nuclear pinning force available in the literature. The remainder of this paper is structured as follows. In Section 2, we provide a brief overview of the thermal evolution of NSs, focusing on the isothermal phase. In Section 3, we describe the vortex creep heating mechanism for NSs and explain how we can relate a universal parameter with the late-time temperature prediction. In Section 4, we summarize the numerical evaluation of the pinning force, from which we calculate the parameter \(J\). In Section 5, we study the recent observations of old and warm NSs to assess the current status of the vortex creep heating hypothesis. Finally, we summarize our findings and conclude our discussion in Section 6. ## 2 Thermal evolution of neutron star This section reviews the surface temperature prediction of NSs based on their thermal evolution and sources of heating and cooling. The temperature distribution within NSs is characterized by a local core temperature \(T\), which exhibits a temperature gradient only during the early stages of the NS, typically within the first 10-100 years of its existence [36, 37]. The high thermal conductivity of the highly degenerate electron gas causes the core to become isothermal over time, reaching thermal equilibrium. Hence, the red-shifted temperature of NSs, \(T^{\infty}(\bar{r},t)=T(\bar{r},t)e^{\phi(\bar{r})}\), where \(\phi(\bar{r})\) specifies the gravitational redshift, reaches a constant value \(T^{\infty}(\bar{r},t)\simeq T^{\infty}(t)\) and only the outermost layers exhibit an appreciable temperature gradient. The relativistic equation describing the thermal evolution of NSs after thermal relaxation is given by [38, 39] \[C(T^{\infty})\frac{dT^{\infty}}{dt}=-L_{\nu}^{\infty}(T^{\infty})-L_{\gamma}^ {\infty}(T^{\infty})+L_{\rm H}^{\infty}. \tag{2.1}\] Here, \(C\) represents the total heat capacity of the NS and is temperature-dependent. The right-hand side of the equation expresses the red-shifted luminosity for three different processes, namely, neutrino cooling \(L_{\nu}^{\infty}\), photon cooling \(L_{\gamma}^{\infty}\), and heating source \(L_{\rm H}^{\infty}\). At temperatures below a few \(\times\) 10\({}^{9}\) K, the NS becomes transparent to neutrinos, allowing them to escape without interacting with the stellar matter and carrying away energy. Therefore, the cooling process during the early stage of the star's life is dominated by neutrino emission. 
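As a purely schematic illustration of how Eq. (2.1) can be integrated, the sketch below solves the isothermal thermal-evolution equation with placeholder power-law forms for the heat capacity and the neutrino/photon luminosities. These placeholder functions and normalizations are not the microphysical inputs used in this work; only the structure of the calculation is meant to be illustrative.

```python
# Schematic integration of the thermal evolution equation (2.1),
#   C(T) dT/dt = -L_nu(T) - L_gamma(T) + L_H,
# with placeholder power-law inputs (NOT the microphysical functions used in
# the paper), just to show the structure of the calculation.
from scipy.integrate import solve_ivp

YEAR = 3.156e7  # s

def heat_capacity(T):   # erg/K; placeholder scaling, normalized at 10^9 K
    return 1e38 * (T / 1e9)

def L_neutrino(T):      # erg/s; placeholder with a steep temperature dependence
    return 1e40 * (T / 1e9) ** 8

def L_photon(T):        # erg/s; placeholder corresponding to a T_s ~ T^(1/2) envelope
    return 1e33 * (T / 1e9) ** 2

def rhs(t, y, L_heat):
    T = y[0]
    return [(-L_neutrino(T) - L_photon(T) + L_heat) / heat_capacity(T)]

# Integrate from T = 1e9 K over 1e7 yr with a constant heating luminosity of 1e30 erg/s.
sol = solve_ivp(rhs, [0.0, 1e7 * YEAR], [1e9], args=(1e30,), method="LSODA", rtol=1e-6)
print(f"T(1e7 yr) ~ {sol.y[0, -1]:.2e} K  (late-time balance L_gamma ~ L_H)")
```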
At later times, typically for \(t\gtrsim 10^{5}\) yrs,2 photon emission dominates neutrino emission. The thermal photon emission follows a blackbody spectrum. The photon luminosity \(L_{\gamma}\) can be described by the Stefan-Boltzmann law and related to surface temperature \(T_{\rm s}\) as Footnote 2: The dominance of photon emission might be delayed for massive NSs, in which the rapid neutrino emission via the direct Urca process could occur; we do not consider this possibility in what follows. \[L_{\gamma}=4\pi R_{\rm NS}^{2}\sigma_{\rm SB}T_{\rm s}^{4}, \tag{2.2}\] in the local reference frame of the NS, where \(\sigma_{\rm SB}\) is the Stephan-Boltzmann constant and \(R_{\rm NS}\) the NS radius. To relate the internal temperature \(T\) of a NS to its surface temperature \(T_{\rm s}\), we use the heat envelope model proposed by Potekhin _et al._ in 1997 [40]. According to this model, the observed thermal emission from old isolated NSs can be explained by the heat trapped in a thin envelope surrounding the star's crust. If we have an internal heating source, the heating luminosity will balance with the photon luminosity at a sufficiently late time, \[L_{\rm H}^{\infty}\simeq L_{\gamma}^{\infty}, \tag{2.3}\] which determines the surface temperature. It is pointed out that NSs have some internal heating mechanisms, such as vortex creep heating, rotochemical heating, and magnetic field decay. These heating effects may become visible at late times and can operate even for isolated NSs. These internal mechanisms are comprehensively compared with the observed surface temperature in Refs. [20, 22], which is recently revisited by Ref. [41]. We note in passing that the balance equation (2.3) generically holds with good accuracy for old NSs that are older than \(10^{5}\) yrs. This can be seen by estimating the typical timescale \(\tau_{\rm eq}\) for the relaxation into the equilibrium state: \[\tau_{\rm eq} \simeq\frac{CT^{\infty}}{4\pi R_{\rm NS}^{2}\sigma_{\rm SB}T_{\rm s }^{4}} \tag{2.4}\] \[\sim 3\times 10^{4}\ {\rm yrs}\left(\frac{C}{10^{35}\ {\rm erg/K}} \right)\left(\frac{R_{\rm NS}}{11.43\ {\rm km}}\right)^{-2}\left(\frac{T_{\rm s}}{10^{5}\ {\rm K}}\right)^{-4}\left(\frac{T^{\infty}}{10^{6}\ {\rm K}}\right)\, \tag{2.5}\] where we have used \(T^{\infty}\sim 10^{6}\) K corresponding to the surface temperature, at the equilibrium phase, of \(T_{\rm s}\sim 10^{5}\) K.3 It is found that this timescale is shorter than the age of old NSs, assuring the equilibrium condition (2.3). Footnote 3: To derive the typical scale of \(\tau_{\rm eq}\), we fitted the relation between the NS internal temperature \(T\) and the NS surface temperature \(T_{\rm s}\) for \(t_{\rm age}\lesssim 10^{6}\) yrs [40] by assuming \(T\propto T_{\rm s}^{2}\). ## 3 Review of vortex creep heating In this section, we will review vortex creep heating, where the presence of a superfluid in the inner crust of a NS plays a key role. In this region, vortex lines are thought to exist as a consequence of the NS rotation. In Sec. 3.1, we will derive the equation of motion for this rotational motion by introducing two different angular velocities for the inner crust superfluid and the other part of the star. In Sec. 3.2, we will consider the dynamics of a vortex line and evaluate its radial velocity. Finally, in Sec. 3.3, we will assess the effect of vortex creep on the late-time temperature prediction of NSs. 
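A quick numerical check of the equilibration timescale, Eq. (2.5), using the reference values quoted in the text, reproduces the quoted estimate of a few times 10^4 yr.

```python
# Numerical check of the equilibration timescale, Eq. (2.5):
#   tau_eq ~ C * T_infty / (4 pi R_NS^2 sigma_SB T_s^4)
# using the reference values quoted in the text (CGS units).
import math

SIGMA_SB = 5.670e-5      # erg cm^-2 s^-1 K^-4
YEAR = 3.156e7           # s

C = 1e35                 # erg/K  (total heat capacity)
R_NS = 11.43e5           # cm     (11.43 km)
T_s = 1e5                # K      (surface temperature)
T_inf = 1e6              # K      (red-shifted internal temperature)

L_gamma = 4 * math.pi * R_NS**2 * SIGMA_SB * T_s**4
tau_eq = C * T_inf / L_gamma
print(f"L_gamma ~ {L_gamma:.2e} erg/s, tau_eq ~ {tau_eq / YEAR:.1e} yr")
# -> tau_eq ~ 3e4 yr, matching the estimate in Eq. (2.5)
```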
### Equations of motion for crust and superfluid To describe a rotating NS, let us divide it into two components depending on how external torque exerts on it, the _crust component_ and the _superfluid component_[42]. The crust component comprises a rigid crust, a lattice of nuclei, and charged particles tightly coupled to electromagnetic field lines [42]. This component is directly affected by external torque provided by pulsar magnetic radiation. The superfluid component refers to \({}^{1}S_{0}\) superfluid of neutrons.4 This superfluid phase is believed to appear in the inner crust region based on pairing gap evaluations [3, 4, 5]. This component is just indirectly affected by the external torque through the interaction with the crust component. On the left side of Fig. 1, we show a schematic diagram of a NS. The grey region is the outer crust, composed of ions and electrons. The blue layer represents the inner crust region and has an approximately \(\sim 1\) km thickness. In this region, nuclei, electrons, and neutrons exist, and neutrons are expected to be in the form of the neutron singlet superfluid. The brown region is the NS core, whose internal structure remains uncertain and a subject of ongoing research and debate. In the two-component model, the neutron superfluid in the blue layer is classified into the superfluid component, while all the other parts are classified into the crust component. Based on this two-component model, we derive the equations of motion and describe the rotation. For later convenience, we divide the system into a thin disc and introduce the cylindrical coordinate \((r,\varphi,z)\). Since we treat the crust component as a rigid body, its angular velocity \(\Omega_{\rm c}(t)\) is independent of \(r\). On the other hand, the superfluid angular velocity varies with \(r\), and we denote it by \(\Omega_{\rm s}(t,r)\). The equation of motion for the crust component is \[I_{\rm c}\dot{\Omega}_{\rm c}(t)=N_{\rm ext}(t)+N_{\rm int}(t), \tag{3.6}\] where \(I_{\rm c}\) represents the moment of inertia of the crust component, and the dot represents the derivative with respect to time \(t\). On the right-hand side, we divide the torque into two parts: The first term \(N_{\rm ext}(t)\) represents the external torque acting on the system. The second term \(N_{\rm int}(t)\) corresponds to the effects of internal torques in the system \[N_{\rm int}(t)=-\int dI_{p}(r)\frac{\partial\Omega_{\rm s}}{ \partial t}(t,r), \tag{3.7}\] where \(dI_{p}\) represents the differential inertial momenta. The integral is taken over the region where the crust component interacts with the superfluid component, called the _pinning_ region. These two components connect through the pinning force between the vortex line and the nuclei-like object in the inner crust, as we will discuss in Sec. 3.2. Figure 1: The structure of a NS and vortex lines. _Left_: The grey, blue, and brown regions represent the outer crust, inner crust, and core regions, respectively. The red lines represent vortex lines. _Right_: A single vortex line in the inner crust. The vortex line in the neutron superfluid is attached to the outer crust and has two boundaries. The equations of the superfluid component are obtained by introducing two fundamental measures of rotation in the fluid: vorticity and circulation. 
The vorticity vector \(\mathbf{\omega}\) characterizes the microscopic features of rotation and is a locally determined value defined as the curl of the fluid velocity \(\mathbf{v}\), \[\mathbf{\omega}\equiv\nabla\times\mathbf{v}. \tag{3.8}\] In contrast, the circulation \(\Gamma\) measures the macroscopic rotation and is defined over a finite region. It is given by the line integral of the fluid velocity \(\mathbf{v}\) around a closed path \(C\), which can be expressed as a surface integral over any surface \(S\) with the boundary \(C\), \[\Gamma\equiv\oint_{C}\mathbf{v}\cdot d\mathbf{\ell}=\iint_{S}\mathbf{\omega} \cdot d\mathbf{S}, \tag{3.9}\] where \(d\mathbf{\ell}\) and \(d\mathbf{S}\) denote the line and surface elements, respectively. We used Stokes' theorem to obtain the last expression. Superfluid motion obeys the potential flow condition, \[\nabla\times\mathbf{v}_{\rm s}=\mathbf{0}, \tag{3.10}\] where \(\mathbf{v}_{\rm s}\) denotes the superfluid velocity. This condition, implying the absence of the vorticity in the superfluid, holds because the superfluid velocity is proportional to the gradient of the phase of the condensate wave-function of the superfluid. Nevertheless, we still have a nonzero circulation if there exists a singular object known as a vortex line. In Fig. 1, we show a schematic picture of a single vortex line in the NS inner crust. The vortex line is a string-like configuration with a thickness of the order of femtometers (red curve). The circulation for each vortex line is quantized in units of \[\kappa\equiv\frac{h}{2m_{n}}, \tag{3.11}\] where \(h\) is the Planck constant and \(m_{n}\) is the neutron mass. This quantization follows from the condition that the wave-function of the condensate is single-valued, and thus the change in its phase must be \(2\pi k\) with \(k\) as an integer. Since a vortex line with \(k=1\) is energetically favored and stable, the system with larger angular velocity contains a larger number of vortex lines [45]. The number of vortex lines will be saturated if the total circulation reaches that of the rigid rotation as a whole system, which is expected in NSs. The vortex lines in the inner crust have boundaries corresponding to the normal matter in the outer crust. Under this circumstance, the circulation for the contour \(C\) in Fig. 1 is uniquely determined and, because of the potential flow condition (3.10), can be regarded as topological as it remains unchanged under deformations of \(C\) (unless it passes through another vortex line). We may express the superfluid velocity on average in the same form as the normal fluid,5 Footnote 5: This relation is confirmed by observing the shape of the free surface for rotating superfluid in liquid He system [46, 47, 48]. \[\langle\mathbf{v}_{\rm s}\rangle=\mathbf{\Omega}_{\rm s}\times\mathbf{r}\, \tag{3.12}\] where \(\mathbf{r}\) denotes the position vector from the center of the NS. The total circulation of a superfluid system is equal to the sum of the circulation of each vortex line. This means that the number of vortex lines is directly related to the superfluid angular velocity \(\mathbf{\Omega}_{\rm s}\). By substituting Eq. (3.12) in Eq. 
(3.9), we obtain \[\Gamma_{\rm superfluid}=\int_{C}d\mathbf{\ell}\cdot\left(\mathbf{\Omega}_{ \rm s}(t,r)\times\mathbf{r}\right)=\int_{0}^{r}dr^{\prime}\ 2\pi r^{\prime}\kappa n(t,r^{\prime}), \tag{3.13}\] where we choose the integral path \(C\) around the edge of the disc with radius \(r\), and \(n(t,r)\) is the number density of the vortex lines per unit area. Noting that only the radial motion of vortex lines changes \(n\) due to the axial symmetry around the rotation axis, we obtain the following conservation law, \[\frac{\partial n}{\partial t}+\nabla\cdot(nv_{r}\mathbf{e}_{r})=0, \tag{3.14}\] where \(v_{r}\) is the vortex velocity in the radial direction \(\mathbf{e}_{r}\), which we call the _creep rate_. Combining Eqs. (3.13) and (3.14), we obtain the equation of motion for the superfluid component: \[\frac{\partial\Omega_{\rm s}}{\partial t}=-\left(2\Omega_{\rm s} +r\frac{\partial\Omega_{\rm s}}{\partial r}\right)\frac{v_{r}}{r}. \tag{3.15}\] The NS rotation is described by Eqs. (3.6) and (3.15) coupled through a nonzero \(v_{r}\). In other words, by switching off the radial motion of the vortex line, we have \(\partial\Omega_{\rm s}/\partial t=0\). In this case, \(N_{\rm int}\) turns out to be zero, and the two equations of motion are decoupled. ### Dynamics of a vortex line In the inner crust, a vortex line feels two forces, the _pinning force_ and the _Magnus force_. The pinning force arises from the interaction between a vortex line and a nuclear-like object within the inner crust, resulting in the pinning of the vortex line to the lattice of nuclei where the energy is minimized [10]. As long as the pinning force is dominant, the vortex lines remain attached to the crust and move at the same velocity as the crust component: \[\mathbf{v}_{\rm VL}(t,r)=\mathbf{\Omega}_{\rm c}(t)\times\mathbf{r}, \tag{3.16}\] where \(\mathbf{v}_{\rm VL}(t,r)\) denotes the velocity of a vortex line. One way to quantify the pinning force is to compare the energies associated with different configurations of a vortex line. In Fig. 2, we show two possibilities for the pinning configurations, the _nuclear pinning_ and the _interstitial pinning_. The difference in the energies between these two configurations defines the pinning energy, \[E_{\rm pin}\equiv E_{\rm NP}-E_{\rm IP}, \tag{3.17}\] where \(E_{\rm NP}\) (\(E_{\rm IP}\)) denotes the energy of the nuclear (interstitial) pinning configuration. A nuclear pinning configuration occurs when the pinning energy is negative, and the vortex line is directly attached to the nuclei lattice. Conversely, when the pinning force is repulsive, the vortex line is pinned in the interstitial regions. For a nuclear pinning configuration, a rather crude estimate of the pinning force per unit length is then given by \[f_{\rm pin}|_{\rm NP}\simeq\frac{|E_{\rm pin}|}{\Delta r\Delta L}, \tag{3.18}\] where \(\Delta r\) is the distance between the nuclear and interstitial pinning positions and \(\Delta L\) is the distance between the successive pinning sites along a vortex line. These two quantities are expected to be of the order of the Wigner-Seitz radius \(R_{\rm WS}\), the radius of an imaginary sphere whose volume is equal to the average volume per nucleus in each region. Due to this pinning effect, the relative velocity between the superfluid and the vortex lines is developed, \[\delta\mathbf{v}\equiv\mathbf{v}_{\rm s}-\mathbf{v}_{\rm VL}=\delta\mathbf{\Omega}\times\mathbf{r}, \tag{3.19}\] where we use Eqs. 
(3.12) and (3.16) to obtain the last expression and introduce the relative angular velocity, \[\delta\mathbf{\Omega}\equiv\mathbf{\Omega}_{\rm s}-\mathbf{\Omega}_{\rm c}. \tag{3.20}\] This velocity difference induces the Magnus force per unit length of a vortex line, \[\mathbf{f}_{\rm Mag}=\rho\left(\delta\mathbf{v}\right)\times\mathbf{\kappa}, \tag{3.21}\] where \(\rho\) is the superfluid density and \(\mathbf{\kappa}\) is the vorticity vector, which is parallel to the vortex line (hence, parallel to the rotational axis of the NS) and has the absolute value \(|\mathbf{\kappa}|\equiv\kappa\). We see that the Magnus force always acts in the outward direction, as illustrated in Fig. 3, since the crust component rotates slower than the superfluid component due to the deceleration by the external torque. Vortex lines overcome the trapping through thermal fluctuations or quantum tunneling [49, 50], and start to creep outward. If this creep rate is rapid enough, the superfluid rotation can smoothly follow the change in the crust rotation, and the system can reach a steady state. To see if this is the case for the NSs of our interest, let us briefly review the evaluation of the vortex creep rate \(v_{r}\) following Ref. [50]. In this analysis, the pinning force is modeled by a periodic potential with a period equal to the span of the nuclei Figure 2: The nuclear pinning and interstitial pinning for the vortex line pinning configurations. Red cylinders and black spheres represent vortex lines and nuclei, respectively. lattice (\(\sim R_{\rm WS}\)) and a height equal to the pinning energy (\(\sim|E_{\rm pin}|\)). The Magnus force is considered as a bias that tilts the periodic potential. Let us introduce the transition rate for a vortex line to move from a local minimum into the next local minimum as \({\cal R}_{\rm VC}\). The zero-point frequency in the vicinity of local minima of the potential \(\omega_{0}\) controls \({\cal R}_{\rm VC}\) and is obtained through the quantization of the vortex system [49, 50]. If the vortex tension is negligible compared to the pinning force, we obtain6 Footnote 6: In Ref. [50], the case where the vortex tension dominates the pinning force is also studied, and the conclusion of the steady state turns out to remain unchanged. \[\omega_{0}\simeq\frac{\pi\kappa\Lambda}{4R_{\rm WS}^{2}}\simeq 1.2\times 10^{20} \,{\rm s}^{-1}\left(\frac{R_{\rm WS}}{50~{}{\rm fm}}\right)^{-2}\left(\frac{ \Lambda}{2}\right), \tag{3.22}\] where \(\Lambda\) characterizes the vortex tension, \(T_{v}=\rho\kappa^{2}\Lambda/(4\pi)\), and is evaluated as \(2\lesssim\Lambda\lesssim 10\) in the inner crust [51]. The transition rate is then given by [49, 50] \[{\cal R}_{\rm VC}\simeq\frac{\omega_{0}}{2\pi}\,e^{-\frac{|E_{\rm pin}|}{k_{ \rm B}T_{\rm eff}}}, \tag{3.23}\] where \(T_{\rm eff}\) is defined by \[k_{\rm B}T_{\rm eff}\equiv\frac{\hbar\omega_{0}}{2}\coth\left(\frac{T_{\rm q}} {T}\right)\sim\begin{cases}k_{\rm B}T&(T\gg T_{\rm q})\\ \frac{\hbar\omega_{0}}{2}&(T\ll T_{\rm q})\end{cases}, \tag{3.24}\] with \(T\) being the temperature of the inner crust and \[T_{\rm q}\equiv\frac{\hbar\omega_{0}}{2k_{\rm B}}\simeq 3.8\times 10^{8}~{}{ \rm K}~{}\left(\frac{\omega_{0}}{10^{20}~{}{\rm s}^{-1}}\right). \tag{3.25}\] As can be seen from these expressions, the unpinning occurs predominantly through thermal activation (quantum tunneling) for \(T\gg T_{\rm q}\) (\(T\ll T_{\rm q}\)). 
In particular, for NSs as old as those considered in the following analysis, their internal temperature is \(\lesssim 10^{8}\) K. Figure 3: The Magnus force acting on a vortex line. We define the direction of \(\kappa\) as the direction of the right-hand screw. Therefore, the vortex creep motion in these NSs is triggered by quantum tunneling. By using the transition rate in Eq. (3.23), we evaluate the creep rate as \(v_{r}\simeq\mathcal{R}_{\rm VC}\cdot R_{\rm WS}\). With the vortex creep rate obtained above, we can estimate the typical distance at which a vortex line travels through the creep motion during the lifetime of the NS: \[v_{r}\times t_{\rm age} \simeq R_{\rm WS}\cdot\frac{\omega_{0}}{2\pi}\cdot e^{-\frac{2|E_{\rm pin }|}{\hbar\omega_{0}}}\times t_{\rm age} \tag{3.26}\] \[\simeq 160~{}{\rm km}\times\left(\frac{\omega_{0}}{10^{20}~{}{\rm s}^{ -1}}\right)\left(\frac{R_{\rm WS}}{50~{}{\rm fm}}\right)\left(\frac{t_{\rm age }}{10^{5}~{}{\rm yr}}\right),\] where we set \(|E_{\rm pin}|=1\) MeV as a representative value. We find that this distance is considerably larger than the crust thickness (\(\simeq 1\) km), implying that the vortex creep motion would reach a steady state in old NSs. Once the system enters the steady phase, the crust and superfluid components decelerate at the same rate, i.e., \[\frac{\partial\Omega_{\rm s}(t,r)}{\partial t}=\dot{\Omega}_{\rm c }\equiv\dot{\Omega}_{\infty}~{}. \tag{3.27}\] Note that \(\Omega_{\rm c}\) and \(|\dot{\Omega}_{\infty}|\) are identified as the current observed values of the angular velocity and deceleration rate of NSs, respectively. As a consequence, the relative angular velocity between the crust and superfluid components becomes independent of time, and this is found to be fairly close to the critical angular velocity determined by the condition \(f_{\rm pin}=f_{\rm Mag}\)[50]: \[\delta\Omega_{\infty}\simeq\delta\Omega_{\rm cr}~{}, \tag{3.28}\] with \[\delta\Omega_{\rm cr}\equiv|\delta\mathbf{\Omega}|_{f_{\rm pin}=f_{ \rm Mag}}=\frac{f_{\rm pin}}{\rho\kappa r}. \tag{3.29}\] ### Prediction of surface temperature As vortices move outward, the rotational energy of the superfluid is dissipated through their frictional interaction with the normal components of the inner crust, which heats the NS. This heating luminosity is computed as [12, 15] \[L_{\rm H} =N_{\rm ext}\Omega_{\rm c}(t)-\frac{d}{dt}\left[\frac{1}{2}I_{ \rm c}\Omega_{\rm c}^{2}(t)+\frac{1}{2}\int dI_{\rm p}\Omega_{\rm s}^{2}(t,r)\right]\] \[=\int dI_{\rm p}\left[\Omega_{\rm c}(t)-\Omega_{\rm s}(t,r) \right]\frac{\partial\Omega_{\rm s}(t,r)}{\partial t}\] \[=\int dI_{\rm p}\,\delta\Omega\left|\frac{\partial\Omega_{\rm s} (t,r)}{\partial t}\right|~{}, \tag{3.30}\] where we use Eqs. (3.6) and (3.7), and the inertial momenta \(I_{\rm p}\) is integrated over the region where the pinning process efficiently occurs. This expression is further simplified in the steady state with the condition (3.27), \[L_{\rm H}=J|\dot{\Omega}_{\infty}|~{}, \tag{3.31}\] where we define, \[J\equiv\int dI_{\rm p}\delta\Omega_{\infty}\, \tag{3.32}\] and \(\delta\Omega_{\infty}\) denotes the steady-state value of the relative angular velocity. As we see, the heating luminosity is proportional to the current deceleration rate of the pulsar. Note that the value of \(J\) can be estimated from Eqs. (3.27), (3.29), and (3.32) by specifying the value of \(f_{\rm pin}\) and the region of pinning. We will evaluate the proportional coefficient \(J\) in Sec. 4.2. 
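The creep-motion estimates of Sec. 3.2 can be cross-checked numerically. The sketch below evaluates the zero-point frequency of Eq. (3.22), the crossover temperature of Eq. (3.25), and the creep distance of Eq. (3.26), using the same reference values as the text (R_WS = 50 fm, Λ = 2, |E_pin| = 1 MeV, t_age = 10^5 yr, and the representative ω_0 = 10^20 s^-1 quoted in Eqs. (3.25)-(3.26)).

```python
# Cross-check of Eqs. (3.22), (3.25) and (3.26) with the reference values used
# in the text (CGS units).
import math

H = 6.626e-27            # erg s   (Planck constant)
HBAR = H / (2 * math.pi)
K_B = 1.381e-16          # erg/K
M_N = 1.675e-24          # g       (neutron mass)
MEV = 1.602e-6           # erg
FM = 1e-13               # cm
YEAR = 3.156e7           # s

kappa = H / (2 * M_N)                            # quantum of circulation, Eq. (3.11)

R_WS = 50 * FM
Lam = 2.0
omega0 = math.pi * kappa * Lam / (4 * R_WS**2)   # Eq. (3.22) -> ~1.2e20 s^-1

# Eqs. (3.25) and (3.26) quote their estimates for the representative value
# omega0 = 1e20 s^-1 and |E_pin| = 1 MeV, so those reference values are used here.
omega0_ref = 1e20
T_q = HBAR * omega0_ref / (2 * K_B)              # Eq. (3.25) -> ~3.8e8 K

E_pin = 1 * MEV
t_age = 1e5 * YEAR
# Creep rate v_r ~ R_VC * R_WS, with R_VC from Eq. (3.23) in the quantum-tunneling
# regime (T << T_q), so the exponent is 2|E_pin| / (hbar * omega0).
distance = R_WS * (omega0_ref / (2 * math.pi)) * math.exp(-2 * E_pin / (HBAR * omega0_ref)) * t_age

print(f"omega0 ~ {omega0:.2e} s^-1, T_q ~ {T_q:.2e} K")
print(f"creep distance over 1e5 yr ~ {distance / 1e5:.0f} km")   # ~160 km, cf. Eq. (3.26)
```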
In particular, if this heating luminosity balances with the photon luminosity, which we expect to occur for old NSs as we discussed in Sec. 2, the surface temperature \(T_{\rm s}\) can be estimated using Eq. (2.3) as \[T_{\rm s}^{\rm eq} \equiv\left(\frac{J|\dot{\Omega}_{\infty}|}{4\pi R_{\rm NS}^{2} \sigma_{\rm SB}}\right)^{\frac{1}{4}} \tag{3.33}\] \[\simeq 1.0\times 10^{5}\ {\rm K}\left(\frac{J}{10^{43}\ {\rm erg \ s}}\right)^{\frac{1}{4}}\left(\frac{|\dot{\Omega}_{\infty}|}{10^{-14}\ {\rm s}^{-2}}\right)^{\frac{1}{4}}\left(\frac{R_{\rm NS}}{11.43\ {\rm km}}\right)^{-\frac{1}{2}}. \tag{3.34}\] In the steady vortex creep scenario, therefore, the surface temperature is predicted as a function of \(J\), \(|\dot{\Omega}_{\infty}|\), and \(R_{\rm NS}\), free from the uncertainty of the initial condition and the subsequent temperature evolution. We will examine this prediction against observation in Sec. 5. ## 4 Theoretical approaches for the vortex pinning To compute the energy dissipation due to vortex creep, we evaluate the parameter \(J\) in Eq. (3.32). We first review the calculation of the pinning force \(f_{\rm pin}\) available in the literature in Sec. 4.1; the values of \(f_{\rm pin}\) which our analysis is based on are summarized in Appendix A. Then, in Sec. 4.2, we estimate possible ranges of \(J\) using the results in Sec. 4.1, which will be compared with observation in the subsequent section. ### Evaluation of pinning force To evaluate the pinning force, we need to analyze a nucleon many-body system at high densities, which generically suffers from technical difficulties due to less-known properties of nuclear interactions. A traditional method of treating nuclear interactions is to model a form of the interaction and fit it to experimental data, such as nucleon-nucleon scattering [52, 53, 54]. For example, the Argonne interaction [54], which we consider in the following analysis, is a two-body nucleon potential fitted to nucleon scattering data and deuteron properties. This sort of bare interaction does not include in-medium effects. The many-body calculation based on bare interaction is necessary to obtain the properties of nucleon systems. At the same time, it is still challenging to perform it in general due to, e.g., the strong repulsive core. An alternative method is to use an effective interaction incorporating in-medium effect phenomenologically. Skyrme-type interactions [55, 56] are well-known examples, which consist of contact (zero-range) interactions with momentum-dependent coefficients. The parameters of the interactions are determined by fitting to the experimental data of binding energies and radii of several nuclei. There are many sets of fitting parameters used in the literature [57, 58, 59], such as SLy4 [60] and SkM* [61]. There are also finite-range interactions, such as the Gogny interaction [62]. There are several approaches to analyzing the nuclear matter, and the following two are often used for the calculation of the pinning energy: * **Quantum approach** A standard method to analyze a quantum multi-body system is to calculate the energy levels of a single particle in the mean field of a self-consistent potential by solving the corresponding Schrodinger equation. A neutron pairing interaction is then considered to determine the pairing field as in the BCS theory. These solutions are obtained via an iterative process such that they satisfy the self-consistent conditions. This method is called the Hartree-Fock-Bogoliubov (HFB) method and adopted in Refs. 
[63, 64, 65, 66]. * **Semi-classical approach** The quantum approach based on the HFB method often requires a high computational cost. To evade this, a semi-classical approach based on the Thomas-Fermi approximation is also frequently used, where nuclear matter is regarded as a many-body system of nucleons subject to the Pauli exclusion principle and moving independently from each other in a mean-field potential. The energy of the system for a given chemical potential is obtained by the variational principle.7 This approach is used in Refs [67, 68, 69, 70, 71, 72]. Footnote 7: Strictly speaking, we minimize the modified Hamiltonian defined by \(H^{\prime}=H-\mu N\), where \(\mu\) and \(N\) denote the chemical potential and the number of particles, respectively. We also note that for NSs, the temperature \(T\) can be regarded as zero; thus, the free and internal energy are equivalent. We can then estimate the pinning force from the pinning energy obtained above. There are two approaches for this calculation: * **Microscopic calculation** We may estimate the pinning force through Eq. (3.18) for the nuclear pinning configuration. We refer to this estimation as the _microscopic_ approach since, as illustrated in the most right window in Fig. 4, it focuses on the microscopic scale of \(\mathcal{O}(R_{\rm WS})\). This approach considers the interaction between a vortex and the single nucleus in the Wigner-Seitz cell, and thus the interaction of the vortex with other distant nuclei is neglected. We obtain the pinning force per unit length by just multiplying the pinning force per nucleus with the number of nuclei in the unit length along the vortex line (\(\simeq 1/R_{\rm WS}\)). * **Mesoscopic calculation** Vortex lines are much longer than the lattice spacing; therefore, each vortex line pins onto a large number of nuclei in reality. Such a vortex line does not align to the crystal axis over its total length in general. The _mesoscopic_ approach considers this realistic configuration by taking the average of the force exerted on a vortex over the possible directions of the vortex line with respect to the crystal lattice. This calculation focuses on the length-scale \(L\sim(10^{2}\)-\(10^{3})\times R_{\rm WS}\), for which the the middle window in Fig. 4--we call this scale the mesoscopic scale. The derived pinning force thus tends to be smaller than those obtained with the microscopic calculation. All in all, we have \(2\times 2=4\) combinations for the prescription of the pinning force calculation. In the following discussion, we consider a representative calculation with a specific choice of nuclear interactions for each combination, as summarized in Table 1. For the microscopic semi-classical approach, we consider the calculation in Ref. [71] with a Woods-Saxon potential for the mean-field potential and the Argonne interaction for the neutron-neutron pairing interaction. It is found that the nuclear pinning configuration (see Fig. 2) occurs only in high-density regions. The pinning force per nuclear pinning site is estimated by \(E_{\rm pin}/R_{\rm WS}\) for this configuration in Ref. [71]. To convert this into the pinning force per unit length \(f_{\rm pin}\), we multiply this by a factor of \(1/(2R_{\rm WS})\), as cubic cells with the side length \(2R_{\rm WS}\) are used in the calculation of Ref. [71]. As a result, we obtain the values of \(f_{\rm pin}=(1\!-\!7)\times 10^{-3}\ {\rm MeV}\cdot{\rm fm}^{-2}\), depending on the position in the inner crust. 
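In the microscopic prescription just described, the pinning force per unit length follows from the pinning energy via Eq. (3.18), with the per-site force E_pin/R_WS divided by the 2R_WS cell size used in Ref. [71]. The small sketch below applies this conversion; the input numbers are illustrative, not the tabulated values of Appendix A.

```python
# Sketch of the microscopic estimate of the pinning force per unit length,
# following Eq. (3.18) and the conversion described in the text:
#   f_pin ~ (|E_pin| / R_WS) * 1 / (2 R_WS)
# The inputs below are illustrative, not the tabulated values of Appendix A.
def f_pin_microscopic(E_pin_MeV: float, R_WS_fm: float) -> float:
    """Pinning force per unit length in MeV/fm^2."""
    return abs(E_pin_MeV) / (2.0 * R_WS_fm**2)

# e.g. |E_pin| = 3 MeV at a Wigner-Seitz radius of 30 fm (illustrative numbers):
print(f"f_pin ~ {f_pin_microscopic(3.0, 30.0):.1e} MeV/fm^2")  # ~2e-3, within the quoted (1-7)e-3 range
```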
We list quantities relevant to this calculation in Table 3 in Appendix A.8 Notice that the values of the pinning force shown here should be regarded as a ballpark estimate. The lower limit (\(f_{\rm pin}=1\times 10^{-3}\ {\rm MeV}\cdot{\rm fm}^{-2}\)) could have been overestimated because of the discretization of the positions at which the pinning energy is estimated; as seen in Table 3, there is a density region of \(\rho\sim 10^{13}\ {\rm g}\cdot{\rm cm}^{-3}\) around which \(E_{\rm pin}\simeq 0\), leading to a very small pinning force if we use Eq. (3.18). The upper limit could also be underestimated since the pinning force obtained by this equation corresponds to the average taken over the distance between the nuclear center and the interstitial position. Footnote 8: To get some ideas about the dependence of this calculation on the choice of potentials, we note that Ref. [71] also considers the case where the Gogny interaction is used instead of the Argonne interaction. In this case, the nuclear pinning occurs in higher density regions, and the pinning forces tend to be larger by a factor of a few, compared with the calculation with the Argonne interaction; see Table 4 in Appendix A for this calculation. For the microscopic quantum approach, we consider the results given in Ref. [65], where the SLy4 Skyrme interaction is used for the Hartree-Fock calculation and a density-dependent contact interaction is used for the neutron-neutron pairing interaction. The parameters of the contact interaction are discussed in Ref. [73]. Table 5 summarises the Figure 4: The landscape of a vortex-line configuration at different length-scales. relevant quantities for this calculation. In contrast to the semi-classical analysis [71], nuclear pinning occurs in lower-density regions in this case. However, it also occurs in the highest-density regions (see Table 5 in Appendix A).9 The values of the pinning force estimated from the pinning energy and the Wigner-Seitz radius as in the semi-classical calculation are \(f_{\rm pin}=(3\)-\(4)\times 10^{-4}~{}{\rm MeV}\cdot{\rm fm}^{-2}\). See Ref. [65] for detailed discussions regarding the difference between the semi-classical and quantum results. Footnote 9: The qualitative feature described here does depend on the choice of the nuclear interactions. As shown in Ref. [65], if we use the SkM* interaction instead of SLy4, the nuclear pinning never occurs at high densities. The semi-classical mesoscopic calculation is given in Ref. [72], where the nuclear potentials are taken to be the same as in the semi-classical microscopic calculation in Table 1. The resultant values of the pinning force are found to be \(f_{\rm pin}=8\times 10^{-7}\) - \(4\times 10^{-4}~{}{\rm MeV}\cdot{\rm fm}^{-2}\), which are summarized in Table 6. We see that these values are considerably smaller than those for the semi-classical microscopic calculation in Ref. [71] due to the averaging over the vortex-line directions. For the quantum mesoscopic calculation, we consider the result given in Ref. [66], where the SLy4 interaction and a contact interaction are used for the Hartree-Fock calculation and pairing interactions, respectively, as in the quantum microscopic calculation in Table 1. As summarized in Table 7, the pinning force is found to be in the range \(f_{\rm pin}=5\times 10^{-7}-8\times 10^{-5}~{}{\rm MeV}\cdot{\rm fm}^{-2}\).10 We again find that these values are much smaller than those obtained with the quantum microscopic approach. 
Footnote 10: If we use the SkM* potential instead of SLy4, we obtain slightly smaller values of \(f_{\rm pin}\), as also shown in Table 7. Before concluding this section, we note that we can also calculate the pinning force with a three-dimensional dynamical simulation of a vortex [74, 75, 76, 77].11 Such a calculation \begin{table} \begin{tabular}{c||c|c||c|c} \hline & \multicolumn{2}{c||}{**Semi-classical**} & \multicolumn{2}{c}{**Quantum**} \\ \hline \hline & Mean field: & Pairing: & Hartree-Fock: & Pairing: \\ \cline{2-5} **Microscopic** & Woods-Saxon & Argonne & SLy4 & contact force \\ \cline{2-5} & \(f_{\rm pin}=(1-7)\times 10^{-3}\)[71] & \(f_{\rm pin}=(3-4)\times 10^{-4}\)[65] \\ \hline \hline & Mean field: & Pairing: & Hartree-Fock: & Pairing: \\ \cline{2-5} **Mesoscopic** & Woods-Saxon & Argonne & SLy4 & contact force \\ \cline{2-5} & \(f_{\rm pin}=8\times 10^{-7}-4\times 10^{-4}\)[72] & \(f_{\rm pin}=5\times 10^{-7}-8\times 10^{-5}\)[66] \\ \hline \end{tabular} \end{table} Table 1: Calculations of the vortex pinning force considered in this work. The first row in each column lists the potentials for the mean-field and pairing interactions used in the evaluations; the second row shows the range of the pinning force in the inner crust in units of \({\rm MeV}\cdot{\rm fm}^{-2}\). tends to be costly; thus, a certain degree of simplification is usually required for the moment. Besides, the current evaluation is limited to a few benchmark values of density. The estimated values of the pinning force are consistent with the above estimates. ### Theoretical evaluation of \(J\) For the evaluation of \(J\) in Eq. (3.32), it is convenient to change the coordinate from cylindrical coordinates \((r,\varphi,z)\) to spherical coordinates \((R,\theta,\phi)\): \[J=\int_{\rm pin}dr\,dz\,d\varphi\,\rho r^{3}\cdot\frac{f_{\rm pin}}{\rho\kappa r }\simeq\int_{R_{\rm in}}^{R_{\rm out}}dR\,d\theta\,d\phi\,R^{3}\sin^{2}\theta \cdot\frac{f_{\rm pin}}{\kappa}\,, \tag{4.35}\] where we approximate \(\delta\Omega_{\infty}\) in Eq. (3.32) by \(\delta\Omega_{\rm cr}\) in Eq. (3.29), which holds with good accuracy in the situation of our interest [50] as we mentioned in Sec. 3.2. We perform the integral over the range \([R_{\rm in},R_{\rm out}]\) where the pinning force is evaluated. We use the Akmal-Pandharipande-Ravenhall (APR) [79] equation of state to determine the NS core size and the equation of state tabulated in Crust_EOS_Cat_HZD-NV.dat in NSCool[80] based on Refs. [81, 82] to determine the density distribution in the crust. For the evaluation method of pinning force, we focus on the mesoscopic approach shown in Table 1, since for the microscopic calculation, the pinning force is obtained only in a limited region in the crust, as can be seen in Table 3-5. Considering that the evaluation of the pinning force suffers from large uncertainty depending on calculation methods, we make the following crude approximation in the calculation of the above integral--we neglect the density dependence of \(f_{\rm pin}\) and fix it to a value in the range shown in Table 1. We thus obtain a range of \(J\) accordingly, which we regard as the uncertainty of this pinning force estimation. As a result, we obtain \(J=3.9\times 10^{40}-1.9\times 10^{43}\ {\rm erg\cdot s}\) for the semi-classic mesoscopic calculation and \(J=1.7\times 10^{40}-2.7\times 10^{42}\ {\rm erg\cdot s}\) for the quantum mesoscopic calculation. ## 5 Vortex creep heating vs. 
observation We now compare the prediction of the vortex-creep heating mechanism with observation. For this purpose, it is useful to calculate the following quantity for each NS:12 Footnote 12: We neglect the gravitational redshift factor since its effect is within the \({\cal O}(1)\) uncertainty of \(J\) discussed below. \[J_{\rm obs}\equiv\frac{4\pi R_{\rm NS}^{2}\sigma_{\rm SB}T_{\rm s}^{4}}{| \dot{\Omega}|}. \tag{5.36}\] As evident from Eqs.(2.2), (2.3), and (3.31), this corresponds to the \(J\) parameter inferred from the observation of each NS. This inference assumes the steady creeping of vortices (discussed in Sec.3.2) and the balance between vortex-creep heating luminosity and photon cooling luminosity, which we expect to hold if the NS is older than \(\sim 10^{5}\) years. Since NSs are comparable in size and mass, we expect that \(J_{\rm obs}\) is also roughly equal (up to a factor of \({\cal O}(1)\)) for every NS. We test this expectation by using the data of \(J_{\rm obs}\) for old NSs. We also compare the values of \(J_{\rm obs}\) with the theoretical computations given in Sec. 4.2. \begin{table} \begin{tabular}{l l||l||l|l|l|l} \hline No. & Type & Name & \(\log_{10}t_{\rm sd}\) & \(\log_{10}t_{\rm kin}\) & \(\log_{10}|\dot{\Omega}|\) & \(\log_{10}T_{\rm s}\) & \(\log_{10}J_{\rm obs}\) \\ & & [yr] & [yr] & [s\({}^{-2}\)] & [K] & [erg s] \\ \hline \hline 1. & [Y] & PSR B1706-44 & 4.2 & — & \(-10.3\) & 5.68–6.34 [83] & 41.9–44.6 \\ 2. & [O] & PSR J1740+1000 & 5.1 & — & \(-11.2\) & 5.89 [84] & 43.8 \\ 3. & [Y] & PSR B2334+61 & 4.6 & — & \(-11.3\) & 5.76 [85] & 43.3 \\ 4. & [O] & PSR B0656+14 & 5.0 & — & \(-11.6\) & 5.87 [86] & 44.1 \\ 5. & [O] & PSR J0633+1746 & 5.5 & — & \(-11.9\) & 5.71 [87] & 43.7 \\ 6. & [Y] & PSR J0538+2817 & 5.8 & \(4.60^{+0.18}_{-0.30}\)[88] & \(-11.9\) & 6.02 [88] & 45.0 \\ 7. & [O] & PSR B1055-52 & 5.7 & — & \(-12.0\) & 5.81 [89] & 45.1 \\ 8. & [X] & RX J1605.3+3249 & 4.5 & \(5.66^{+0.04}_{-0.07}\)[90] & \(-12.1\) & 5.86 [91] & 44.5 \\ 9. & [O] & PSR J2043+2740 & 6.1 & — & \(-12.1\) & \(<5.95\)[92] & \(<44.8\) \\ 10. & [O] & PSR J1741-2054 & 5.6 & — & \(-12.2\) & 5.85 [93] & 44.6 \\ 11. & [O] & PSR J0357+3205 & 5.7 & — & \(-12.4\) & 5.62 [94] & 43.8 \\ 12. & [O] & PSR B0950+08 & 7.2 & — & \(-13.6\) & 4.78–5.08 [29] & 41.7–42.9 \\ 13. & [X] & RX J0420.0-5022 & 6.3 & — & \(-13.8\) & 5.74 [95] & 45.8 \\ 14. & [M] & PSR J0437-4715 & 9.20 & — & \(-14.0\) & 5.54 [25] & 45.1 \\ 15. & [X] & RX J1308.6+2127 & 6.2 & \(5.74^{+0.16}_{-0.26}\)[96] & \(-14.2\) & 6.08 [97] & 47.5 \\ 16. & [X] & RX J0720.4-3125 & 6.3 & \(5.93^{+0.07}_{-0.26}\)[98] & \(-14.2\) & 6.02 [99] & 47.3 \\ 17. & [M] & PSR J2124-3358 & 9.58 & — & \(-14.3\) & 4.70–5.32 [26] & 42.0–44.5 \\ 18. & [X] & RX J1856.5-3754 & 6.6 & \(5.66^{+0.04}_{-0.05}\)[98] & \(-14.4\) & 5.65 [100] & 46.0 \\ 19. & [X] & RX J2143.0+0654 & 6.6 & — & \(-14.5\) & 5.67–6.06 [101, 102] & 46.1–47.8 \\ 20. & [X] & RX J0806.4-4123 & 6.5 & — & \(-14.6\) & 6.01 [103] & 47.6 \\ 21. & [O] & PSR J0108-1431 & 8.3 & — & \(-15.1\) & 4.43–4.74 [28] & 41.8–43.1 \\ 22. & [O] & PSR J2144-3933 & 8.4 & — & \(-16.4\) & \(<4.62\)[104] & \(<43.8\) \\ \hline \end{tabular} \end{table} Table 2: The data of the NSs considered in this paper. We classify them into four types—ordinary pulsars younger than \(10^{5}\) yrs [Y], ordinary pulsars older than \(10^{5}\) yrs [O], XDINSs [X], and millisecond pulsars [M]. The values of \(t_{\rm sd}\) and \(\dot{\Omega}\) without references are computed from the data given in the ATNF pulsar catalogue [105, 106]. 
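To make the connection between the tabulated quantities and Eq. (5.36) explicit, the snippet below evaluates \(J_{\rm obs}\) for one representative star (PSR J0633+1746, row 5 of Table 2) directly from its \(T_{\rm s}\) and \(|\dot{\Omega}|\) entries. It is only a numerical illustration, with the redshift factor neglected as in the text and \(R_{\rm NS}=11.43\) km assumed, as adopted below:

```python
import math

SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
R_NS_CM = 11.43e5     # assumed NS radius of 11.43 km, in cm

def j_obs(t_s, omega_dot_abs):
    """Eq. (5.36): J_obs = 4 pi R_NS^2 sigma_SB T_s^4 / |Omega_dot|  [erg s]."""
    photon_luminosity = 4.0 * math.pi * R_NS_CM**2 * SIGMA_SB * t_s**4
    return photon_luminosity / omega_dot_abs

# PSR J0633+1746: log10 T_s = 5.71, log10 |Omega_dot| = -11.9 (Table 2).
value = j_obs(10**5.71, 10**(-11.9))
print(f"log10 J_obs = {math.log10(value):.1f}")   # -> 43.7, as quoted in Table 2
```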
The value of \(J_{\rm obs}\) is evaluated as in Eq. (5.36) with \(R_{\rm NS}=11.43\) km. In Table 2, we list the values of \(J_{\rm obs}\) for the NSs we consider in this paper. We select isolated NSs older than \(10^{4}\) yrs. In evaluating \(J_{\rm obs}\), we have just assumed \(R_{\rm NS}=11.43\) km for all NSs, as the radius is poorly known for most NSs; we keep in mind that this may introduce an \(\mathcal{O}(1)\) error in the determination of \(J_{\rm obs}\). We also show the age, surface temperature, and \(\dot{\Omega}=2\pi\dot{P}/P^{2}\) of the NSs, (\(P\) and \(\dot{P}\) are the period and its time derivative, respectively). Regarding the NS age, we use the kinetic age \(t_{\rm kin}\) if available. Otherwise, we use the spin-down age \(t_{\rm sd}=P/(2\dot{P})\). We calculate \(t_{\rm sd}\) and \(\dot{\Omega}\) from \(P\) and \(\dot{P}\) given in the Australia Telescope National Facility (ATNF) pulsar catalogue [105, 106]. Notice that the surface temperatures of some of the old NSs in this table are much higher than the predicted temperature in the standard NS cooling scenario [30, 31, 32, 33, 34, 35]. It is important to note that not all NSs listed in Table 2 are useful for testing the vortex-creep heating. As we have discussed in Sec. 2, photon emission becomes the dominant cooling source for NSs older than \(\sim 10^{5}\) years. For younger NSs, this may not be the case, so \(L_{\gamma}^{\infty}\lesssim L_{\rm H}^{\infty}\) instead of (2.3), for which the values of \(J_{\rm obs}\) in Table 2 may be underestimated. To distinguish such young NSs from others, we indicate them by the type [Y] in the table. Another class of NSs that are inappropriate for our test is the X-ray Dim Isolated NSs (XDINSs). These NSs are considered to be descendants of magnetars [95, 103] that experienced the decay of magnetic fields before. This may make these NSs hotter than ordinary NSs of the same age [107, 108, 109, 95], resulting in an overestimate of \(J_{\rm obs}\). We denote these NSs by the type [X]. The rest of the NSs, which we use for the test of the vortex-creep heating, are classified into old ordinary pulsars [O] and millisecond pulsars [M]. The uncertainty in the determination of \(J_{\rm obs}\) stems mainly from that in the surface temperature, which is significant due to its quartic dependence on \(T_{\rm s}\). Generically speaking, it is very difficult to identify all of the sources of uncertainties in the measurement of the NS surface temperature, and it is often the case that the error shown in the literature is only a part of them, such as those from the spectrum fitting, the determination of the distance and/or radius of the NS, and so on. At present, it is fair to say that the NS temperature measurement typically suffers from \(\mathcal{O}(1)\) uncertainty, as can be seen in, _e.g._, Ref. [110]. Motivated by this, we include a factor of two uncertainty in \(T_{\rm s}\) for the stars in Table 2 for which only the central value is presented. For the other stars, we describe our prescription for the error estimation in Appendix B. We have checked that the errors thus obtained are similar to or more conservative than those adopted in Ref. [110]. In Fig. 5, we show the range of \(J_{\rm obs}\) estimated as described above for each NS listed in Table 2. The grey triangles, green inverse triangles, blue circles, and orange stars represent the young ordinary pulsars ([Y]), the XDINSs ([X]), the old ordinary pulsars ([O]), and the millisecond pulsars ([M]), respectively. 
The points with an arrow indicate that we only have an upper limit on \(J_{\rm obs}\) for those NSs. Recall that we are concerned only with the NSs represented by the blue [O] and orange [M] points. It is found that the estimated values of \(J_{\rm obs}\) for these NSs are in the same ballpark, \(J\sim 10^{43}\) erg \(\cdot\) s, even though their \(|\dot{\Omega}|\)'s distribute over orders of magnitude. This is in good agreement with the prediction of the vortex-creep heating mechanism. On the other hand, \(J_{\rm obs}\) for the green points (XDINSs [X]) tend to be larger than this, as expected. We also show the theoretical estimations given in Sec. 4.2 in the upper panel of Fig. 5. We see that the semi-classical mesoscopic calculation is consistent with the observation, given that this theoretical estimation suffers from a NS-dependent uncertainty of \(\mathcal{O}(1)\) Figure 5: The values of the \(J\) parameter obtained from the observation. The grey triangles, blue circles, green inverse triangles, and orange stars correspond to the young ordinary pulsars ([Y]), the old ordinary pulsars ([O]), the XDINSs ([X]), and the millisecond pulsars ([M]), respectively. The points with an arrow indicate upper limits. The red shaded region shows observationally favored range, \(J\simeq 10^{42.9-43.8}\) erg\(\cdot\)s. For comparison, we also show the values of \(J\) estimated with the mesoscopic calculations by black bars. coming from the integration in Eq. (4.35), in addition to that from the estimation of \(f_{\rm pin}\). The quantum mesoscopic calculation can explain some of the points with a small \(J_{\rm obs}\), but they are not large enough to explain, _e.g._, that of J0437-4715. However, we note that this theoretical estimation is still allowed by the observations since it just results in a lower heating luminosity than the observed one. If this is the case, the vortex-creep heating may operate but there exists another heating mechanism that dominates the vortex-creep heating, such as the rotochemical heating [111, 112, 113, 114, 115, 116, 117, 118, 119]. It is premature to establish the existence of the vortex-creep heating, as well as to conclude if an extra heating mechanism is required to be present. To that end, we need to accumulate more data on the surface temperature of old NSs with high accuracy, which we anticipate to be provided by future optical, UV, and X-ray observations.13 Nevertheless, obtaining a current compilation of the value of \(J\) suggested by the observation is intriguing. Considering intrinsic \(\mathcal{O}(1)\) uncertainty in \(J\), we determine its rough range by requiring that it covers the range suggested by B0950+08, which favors the smallest value, and satisfies the upper limit set by J2144-3944 based on non-observation of thermal flux. This yields Footnote 13: See, for instance, Ref. [120]. \[J\simeq 10^{42.9-43.8}\ {\rm erg}\cdot{\rm s}\, \tag{5.37}\] which we show as the red band in Fig. 5. Finally, in Fig. 6, we show the evolution of NS surface temperature with (without) the vortex creep heating effect in the red band (black dashed line).14 The band corresponds to the range of \(J\) in Eq. (5.37). The dots with error bars show the observed temperatures Figure 6: The evolution of NS surface temperature with (without) the vortex creep heating effect in the red band (black dashed line). The band corresponds to the range of \(J\) in Eq. (5.37). The dots with error bars show the observed temperatures, presented in Table 2, with the same colors as in Fig. 5. 
in Table 2, with the same colors as in Fig. 5. In Fig. 5(a), we take \(P\dot{P}=10^{-15}\) s and the initial period \(P_{0}=10\) ms to calculate \(|\dot{\Omega}(t)|\), where we assume that the external torque is dominated by magnetic dipole radiation.15 Footnote 15: In this case, we have \(\dot{\Omega}\propto-\Omega^{3}\), i.e., \(P\dot{P}=\) constant, and by solving this we obtain \[|\dot{\Omega}|(t)=\frac{\pi}{\sqrt{2P\dot{P}}}\left[t+t_{\rm sd,0}\right]^{-3/2 },\] with \(t_{\rm sd,0}\equiv P_{0}^{2}/(2P\dot{P})\). For the choice of parameters in Fig. 5(a) and Fig. 5(b), \(t_{\rm sd,0}\simeq 2\times 10^{3}\) and \(5\times 10^{7}\) years, respectively. The value of \(P\dot{P}\) is related to the surface magnetic flux density \(B_{s}\). In the ATNF pulsar catalogue [105, 106], \(B_{s}=3.2\times 10^{19}(P\dot{P})^{1/2}\) G is used for this relation, with which we have \(B_{s}\simeq 1.0\times 10^{12}\) G and \(5.8\times 10^{8}\) G in Fig. 5(a) and Fig. 5(b), respectively. These values are typical for ordinary pulsars. Nevertheless, we note that \(|\dot{\Omega}(t)|\) obtained with these parameters do not exactly agree with the observed values of \(|\dot{\Omega}|\) in Table 2, so the data points shown in this figure should be regarded as just an eye guide. In Fig. 5(b), we set \(P\dot{P}=3.3\times 10^{-22}\) s, which is the observed value for PSR J2124-3358, and \(P_{0}=1\) ms. As we see in these plots, the predicted temperature with the vortex creep heating starts to deviate from that in the standard cooling scenario at \(t\sim 10^{5}\) years and remains high enough at later times to be compatible with the observed data. ## 6 Conclusion and discussion We have revisited the vortex-creep heating mechanism in light of recent observations of old warm NSs. As we have seen, this heating mechanism gives a characteristic prediction that the heating luminosity is proportional to \(|\dot{\Omega}|\), with the proportional constant \(J\) having an almost universal value over NSs since the NS structure and the vortex-nuclear interactions determine it. We have found that this prediction agrees with the observational data of old NSs, with the favored range of \(J\) in the same ballpark as the theoretical calculations. Notice that the scenario where vortex creep heating dominates all NSs can readily be overturned if we discover a NS having \(J\) much smaller than those presented in Fig. 5. On the other hand, if we find a NS with a larger \(J\), we can disfavor our scenario only after excluding the existence of other heating sources specific to this NS, such as accretion from its environment. It is possible that other heating mechanisms also work in old NSs. Indeed, we have already considered potential heating caused by the decay of magnetic fields in XDINSs (see Sec. 5), and we have not used these NSs in our test of the vortex-creep heating mechanism for this reason. Another heating mechanism that may operate without relying on exotic phenomena is provided by the out-of-equilibrium beta processes, which is dubbed rotochemical heating [111, 112, 113, 114, 115, 116, 117, 118, 119]. It is known that this rotochemical heating mechanism can increase the surface temperature of old NSs up to \(\sim 10^{6}\) K. Thus, its heating luminosity can be comparable to or even dominate the vortex-creep heating. It would be worthwhile to study the vortex creep heating in the presence of rotochemical heating and compare its prediction with the temperature observations of old warm NSs. 
The NS heating caused by the accretion of dark matter particles is also widely discussed in the literature [123-155]. The vortex-creep heating considered in this work would act as an irreducible background to such dark-matter heating of old NSs. A detailed study of this issue will be given in the forthcoming paper [156]. ## Acknowledgments MF thanks Kazuyuki Sekizawa and Tomoya Naito for the fruitful discussion about the current situation and the possible future directions in the pinning force evaluation. This work is supported in part by the Collaborative Research Center SFB1258 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311 [MF], JSPS Core-to-Core Program (No. JPJSCCA 20200002 [MF]), the Grant-in-Aid for Innovative Areas (No.19H05810 [KH and MRQ], No.19H05802 [KH], No.18H05542 [MF and NN]), Scientific Research B (No.20H01897 [KH and NN]), and Young Scientists (No.21K13916 [NN]). ## Appendix A Pinning force In this appendix, we show the density dependence of the quantities relevant to the pinning force calculations discussed in Sec. 4.1. Tables 3 and 4 are for the microscopic semi-classical approach in Ref.
[71], where the Argonne and Gogny interactions are used for the nuclear pairing interaction, respectively. The element corresponding to the cell nuclear composition and Wigner-Seitz radius \(R_{\text{WS}}\) are derived for each baryon density \(\rho\) in Ref. [81]. In Ref. [71], the pinning force is evaluated only for the nuclear pinning configuration, and thus we show the values of \(E_{\text{pin}}\) and \(f_{\text{pin}}=|E_{\text{pin}}|/(2R_{\text{WS}}^{2})\) only for this case. The labels of NP and IP in the last column indicate the nuclear and interstitial pinnings, respectively. The results for the microscopic quantum approach in Ref. [65] are summarized in Table 5, where the SLy4 Skyrme interaction is used for the mean-field interaction. The Wigner-Seitz radius \(R_{\text{WS}}\) shown in this table is interpolated from the plot in Ref. [81]. For the evaluation of \(f_{\text{pin}}\), we again use the formula \(f_{\text{pin}}=|E_{\text{pin}}|/(2R_{\text{WS}}^{2})\) and show the values \begin{table} \begin{tabular}{c||c|c|c c c|c} \hline zone & Element & \(\rho\) & \(R_{\text{WS}}\) & \(\xi\) & \(E_{\text{pin}}\) & \(f_{\text{pin}}\) & config \\ & & [g cm\({}^{-3}\)] & [fm] & [fm] & [MeV] & [MeV fm\({}^{-2}\)] & \\ \hline \hline 1 & \({}_{40}^{320}\)Zr & \(1.5\times 10^{12}\) & 44.0 & 7.02 & — & — & IP \\ 2 & \({}_{50}^{1100}\)Sn & \(9.6\times 10^{12}\) & 35.5 & 4.34 & — & — & IP \\ 3 & \({}_{50}^{1800}\)Sn & \(3.4\times 10^{13}\) & 27.0 & 8.54 & \(-5.2\) & 0.0036 & NP \\ 4 & \({}_{40}^{1500}\)Zr & \(7.8\times 10^{13}\) & 19.4 & 11.71 & \(-5.1\) & 0.0068 & NP \\ 5 & \({}_{32}^{982}\)Ge & \(1.3\times 10^{14}\) & 13.8 & 8.62 & \(-0.4\) & 0.0011 & NP \\ \hline \end{tabular} \end{table} Table 3: Quantities relevant to the pinning force calculation obtained with the microscopic semi-classical approach in Ref. [71], where the Argonne potential is used for the nuclear pairing interaction. \(\rho\), \(R_{\text{WS}}\), and \(\xi\) are the mass density, Wigner-Seitz radius, and coherence length, respectively. \begin{table} \begin{tabular}{c||c|c|c c c|c|c} \hline zone & Element & \(\rho\) & \(R_{\rm WS}\) & \(\xi\) & \(E_{\rm pin}\) & \(f_{\rm pin}\) & config \\ & & [g cm\({}^{-3}\)] & [fm] & [fm] & [MeV] & [MeV fm\({}^{-2}\)] & \\ \hline \hline 1 & \({}^{320}_{40}\)Zr & \(1.5\times 10^{12}\) & 44.0 & 7.76 & — & — & IP \\ 2 & \({}^{1100}_{50}\)Sn & \(9.6\times 10^{12}\) & 35.5 & 4.07 & — & — & IP \\ 3 & \({}^{1800}_{50}\)Sn & \(3.4\times 10^{13}\) & 27.0 & 3.93 & — & — & IP \\ 4 & \({}^{1500}_{40}\)Zr & \(7.8\times 10^{13}\) & 19.4 & 7.78 & \(-7.5\) & 0.010 & NP \\ 5 & \({}^{982}_{32}\)Ge & \(1.3\times 10^{14}\) & 13.8 & 8.62 & \(-5.9\) & 0.015 & NP \\ \hline \end{tabular} \end{table} Table 4: Quantities relevant to the pinning force calculation obtained with the microscopic semi-classical approach in Ref. [71], where the Gogny potential is used for the nuclear pairing interaction. 
\begin{table} \begin{tabular}{c||c|c c|c c|c|c} \hline zone & Element & \(\rho\) & \(n\) & \(R_{\rm WS}\) & \(\xi\) & \(E_{\rm pin}\) & \(f_{\rm pin}\) & config \\ & & [g cm\({}^{-3}\)] & [fm\({}^{-3}\)] & [fm] & [fm] & [MeV] & [MeV fm\({}^{-2}\)] & \\ \hline \hline 1a & \({}^{320}_{40}\)Zr & \(1.7\times 10^{12}\) & 0.001 & 43.3 & 4.43 & \(-1.08\) & 0.00029 & NP \\ 1b & \({}^{320}_{40}\)Zr & \(3.4\times 10^{12}\) & 0.002 & 40.0 & 4.21 & \(-1.20\) & 0.00038 & NP \\ 1c & \({}^{320}_{40}\)Zr & \(6.7\times 10^{12}\) & 0.004 & 36.9 & 3.93 & — & — & IP \\ 2a & \({}^{1100}_{50}\)Sn & \(1.3\times 10^{13}\) & 0.008 & 33.0 & 4.04 & — & — & IP \\ 2b & \({}^{1100}_{50}\)Sn & \(1.8\times 10^{13}\) & 0.011 & 31.0 & 4.12 & — & — & IP \\ 2c & \({}^{1100}_{50}\)Sn & \(2.8\times 10^{13}\) & 0.017 & 28.0 & 4.70 & — & — & IP \\ 3a & \({}^{1800}_{50}\)Sn & \(4.3\times 10^{13}\) & 0.026 & 24.5 & 6.05 & — & — & IP \\ 3b & \({}^{1800}_{50}\)Sn & \(6.2\times 10^{13}\) & 0.037 & 21.4 & 8.75 & \(-0.41\) & 0.00045 & NP \\ \hline \end{tabular} \end{table} Table 5: Relevant quantities for the pinning force calculation obtained with the microscopic quantum approach in Ref. [65]. only for the nuclear pinning, just for easy comparison with the semi-classical calculations. In Table 6, we show the pinning force obtained in the semi-classical mesoscopic approach for different values of \(L\) over which the forces exerted on a vortex are integrated [72]. The zone numbers in the first column correspond to those in Table 3. We show the calculation in which the reduction of the pairing gap due to the polarization effects in the nuclear matter is not included, corresponding to the choice of the reduction factor \(\beta=1\) introduced in Ref. [72]. Because of the averaging procedure, we find that a larger \(L\) results in a smaller value of \(f_{\rm pin}\). Table 7 shows the pinning force obtained in the quantum mesoscopic calculation where the SLy4 and SkM* Skytime interactions are used for the Hartree-Fock calculation [66]. The zones correspond to those in Table 5. We again show the results obtained without including the polarization effect, i.e., \(\beta=1\) as in the previous case. Finally, we plot the values of the pinning force for each density region in Fig. 7. The filled and opened markers correspond to the nuclear and interstitial pinnings. As we see, the values of \(f_{\rm pin}\) are distributed in the range \(10^{-8}\)-\(10^{-2}\) MeV \(\cdot\) fm\({}^{-2}\), depending on the evaluation scheme and the selected nuclear potential. ## Appendix B Selection criteria of NS data We explain how we choose the range of uncertainty of \(T_{\rm s}\) for each NS shown in Table 2. * **No. 1, PSR B1706-4**: In Ref. [83], the X-ray data of PSR B1706-44 obtained in XMM-Newton is fitted by the BB (blackbody), BB+PL (power law), and atmosphere+PL models, and only BB+PL and atmosphere+PL models result in acceptable \(\chi^{2}\) values. The atmosphere model includes the light-element NS atmosphere (_e.g._ dominated by Hydrogen) and shows large Wien excesses in the high-energy region. Therefore, atmosphere+PL models tend to favor lower temperature and larger radius for the emitting area. We selected the minimum and maximum among the BB+PL and atmosphere+PL models, \(T^{\infty}=(0.48-2.2)\times 10^{6}\) K, to include the uncertainty coming from the choice of fitting models. 
\begin{table} \begin{tabular}{c|c c c c c} \hline \multirow{2}{*}{zone} & \multicolumn{5}{c}{\(f_{\rm pin}\) [MeV fm\({}^{-2}\)]} \\ \cline{2-6} & \(L=100R_{\rm WS}\) & \(L=500R_{\rm WS}\) & \(L=1000R_{\rm WS}\) & \(L=2500R_{\rm WS}\) & \(L=5000R_{\rm WS}\) \\ \hline \hline 1 & \(7.63\times 10^{-6}\) & \(2.29\times 10^{-6}\) & \(1.49\times 10^{-6}\) & \(9.30\times 10^{-7}\) & \(7.68\times 10^{-7}\) \\ 2 & \(2.12\times 10^{-5}\) & \(6.34\times 10^{-6}\) & \(4.06\times 10^{-6}\) & \(2.62\times 10^{-6}\) & \(2.12\times 10^{-6}\) \\ 3 & \(1.43\times 10^{-4}\) & \(4.49\times 10^{-5}\) & \(2.74\times 10^{-5}\) & \(1.54\times 10^{-5}\) & \(1.14\times 10^{-5}\) \\ 4 & \(3.84\times 10^{-4}\) & \(1.10\times 10^{-4}\) & \(6.89\times 10^{-5}\) & \(4.23\times 10^{-5}\) & \(3.32\times 10^{-5}\) \\ 5 & \(7.85\times 10^{-5}\) & \(2.71\times 10^{-5}\) & \(1.89\times 10^{-5}\) & \(1.36\times 10^{-5}\) & \(1.12\times 10^{-5}\) \\ \hline \end{tabular} \end{table} Table 6: The pinning force obtained in the semi-classical mesoscopic approach for different values of \(L\) over which the forces exerted on a vortex are integrated [72]. The zones correspond to those in Table 3. \begin{table} \begin{tabular}{c|c|c|c|c c c} \hline \hline model & zone & \(E_{\rm pin}\) & config & \multicolumn{3}{c}{\(f_{\rm pin}\) [MeV fm\({}^{-2}\)]} \\ \cline{3-6} & & [MeV] & & \(L=1000R_{\rm WS}\) & \(L=2500R_{\rm WS}\) & \(L=5000R_{\rm WS}\) \\ \hline \hline & 1a & \(-0.72\) & NP & \(1.39\times 10^{-6}\) & \(7.38\times 10^{-7}\) & \(5.40\times 10^{-7}\) \\ & 1b & \(-0.91\) & NP & \(1.97\times 10^{-6}\) & \(1.08\times 10^{-6}\) & \(8.00\times 10^{-7}\) \\ & 1c & \(-0.89\) & NP & \(2.20\times 10^{-6}\) & \(1.19\times 10^{-6}\) & \(8.74\times 10^{-7}\) \\ SLy4 & 2a & 2.73 & IP & \(5.61\times 10^{-6}\) & \(3.72\times 10^{-6}\) & \(2.95\times 10^{-6}\) \\ & 2b & 3.01 & IP & \(7.52\times 10^{-6}\) & \(5.03\times 10^{-6}\) & \(4.00\times 10^{-6}\) \\ & 2c & 10.00 & IP & \(1.47\times 10^{-5}\) & \(1.01\times 10^{-5}\) & \(8.08\times 10^{-6}\) \\ & 3a & 11.78 & IP & \(3.25\times 10^{-5}\) & \(2.31\times 10^{-5}\) & \(1.88\times 10^{-5}\) \\ & 3b & 9.85 & IP & \(8.47\times 10^{-5}\) & \(6.41\times 10^{-5}\) & \(5.31\times 10^{-5}\) \\ \hline \hline & 1a & \(-0.72\) & NP & \(3.61\times 10^{-7}\) & \(1.60\times 10^{-7}\) & \(9.50\times 10^{-8}\) \\ & 1b & \(-0.91\) & NP & \(2.20\times 10^{-7}\) & \(8.72\times 10^{-8}\) & \(4.08\times 10^{-8}\) \\ & 1c & \(-0.89\) & NP & \(2.83\times 10^{-6}\) & \(1.82\times 10^{-6}\) & \(1.44\times 10^{-6}\) \\ SkM* & 2a & 2.73 & IP & \(5.68\times 10^{-6}\) & \(3.73\times 10^{-6}\) & \(2.93\times 10^{-6}\) \\ & 2b & 3.01 & IP & \(7.58\times 10^{-6}\) & \(5.01\times 10^{-6}\) & \(4.00\times 10^{-6}\) \\ & 2c & 10.00 & IP & \(1.25\times 10^{-5}\) & \(8.50\times 10^{-6}\) & \(6.76\times 10^{-6}\) \\ & 3a & 11.78 & IP & \(2.54\times 10^{-5}\) & \(1.80\times 10^{-5}\) & \(1.45\times 10^{-5}\) \\ & 3b & 9.85 & IP & \(8.00\times 10^{-5}\) & \(5.93\times 10^{-5}\) & \(4.79\times 10^{-5}\) \\ \hline \hline \end{tabular} \end{table} Table 7: The pinning force obtained in the quantum mesoscopic calculation where the Skytme interactions, SLy4 and SkM*, are used for the mean-field potential [66]. The zones correspond to those in Table 5. Figure 7: The values of \(f_{\rm pin}\) given in the tables in this appendix against the density \(\rho\). The filled and opened markers correspond to the nuclear and interstitial pinnings. * **No. 9, PSR J2043+2740**: Ref. [157] studied the XMM-Newton data of PSR J2043+2740. 
Using the BB + PL model, the upper bound is derived as \(T_{s}^{\infty}<6.27\times 10^{5}\ \mathrm{K}\), where \(R_{\mathrm{NS}}=10\ \mathrm{km}\) is assumed for the emission radius. On the other hand, Ref. [158] also fitted the X-ray data of the XMM-Newton and obtained an even higher BB temperature, \(T_{s}^{\infty}\simeq 9\times 10^{5}\ \mathrm{K}\) with the radiation radius \(R^{\infty}\simeq 2\ \mathrm{km}\)[158]. Although the fitted radius is smaller than the expected NS radius, it is too large to be interpreted as the magnetic cap radius. Thus, we can not exclude the possibility that this BB temperature corresponds to the emission from the NS surface. To evaluate \(T_{\mathrm{s}}\) conservatively, we chose the highest value of the BB temperature as an upper bound. * **No. 12, PSR B0950+08**: In Ref. [29], the optical-UV flux of PSR B0950+08 obtained in the Hubble Space Telescope (HST) far-UV (FUV) detector is analyzed. The best-fit temperature is obtained as \(T_{\mathrm{s}}=(6-12)\times 10^{4}\ \mathrm{K}\), and we decided to use the proposed value. Note that the conservative upper bound is also derived as \(T_{\mathrm{s}}<1.7\times 10^{5}\ \mathrm{K}\) by varying the parameter, such as the ratio of NS radius and distance. * **No. 17, PSR J2124-3358**: Ref. [26] analyzed the optical data from the J2124-3358. The BB+PL model gives the following possible range \(T_{\mathrm{s}}\in[0.5,2.1]\times 10^{5}\ \mathrm{K}\) with the uncertainty from the distance. We decided to select the original range, which is almost the same uncertainty added by hand in the way we described in Sec. 5. * **No. 19, RX J2143.0+0654**: In Ref. [101], the X-ray data in XMM-Newton is fitted using the BB absorption model, and the authors obtain the BB temperature \(k_{\mathrm{B}}T^{\infty}=104.0\pm 0.4\ \mathrm{eV}\) with the BB radius as \(R^{\infty}=(3.10\pm 0.04)\ \mathrm{km}\), where distance is fixed to be \(d=500\ \mathrm{pc}\). This value is smaller than the typical NS radius, which implies that this BB temperature is not from the surface but from the small areas around the magnetic caps. In Ref. [102], the authors fitted the data from Large Binocular Telescope (optical) by combining with the X-ray data in XMM-Newton. Using the BB absorption model, \(k_{\mathrm{B}}T^{\infty}=105.1\pm 0.9\ \mathrm{eV}\) is obtained, which is consistent with the result in Ref. [101]. They also perform fitting using the two-component BB model and the hotter (cooler) component is obtained as \(k_{\mathrm{B}}T=104\ \mathrm{eV}\) (\(k_{\mathrm{B}}T=40\ \mathrm{eV}\)). It is impossible to eliminate uncertainty from the model selection to fit the data from this situation, and thus we choose all the possible ranges of the temperature, \(k_{\mathrm{B}}T_{\mathrm{s}}=40\)-\(106\ \mathrm{eV}\). * **No. 20, PSR J0108-1431**: The X-ray data from the direction of J1080-1431 observed in XMM-Newton is fitted by the BB+PL model [159] with \(k_{\mathrm{B}}T=110^{+30}_{-10}\ \mathrm{eV}\) and \(R_{\mathrm{NS}}=43^{+16}_{-9}\ \mathrm{m}\). This small emission radius implies that this BB component is not the cool surface temperature but the hot magnetic pole component. We can interpret this result as the surface temperature is much cooler than the magnetic pole, and thus, the hot component dominates the observed flux. The latest analysis [28] analyses both the XMM-Newton and optical data (HST, VLT). 
In particular, HST F140LP detected thermal emission, and they put the conservative upper bound on the surface temperature as \(T_{\mathrm{s}}<5.9\times 10^{4}\ \mathrm{K}\). To derive this conservative bound, they included uncertainty from the parallax distance [160]. Furthermore, they obtain the value of \(T_{\rm s}\) by assuming the FUV flux is dominated by a thermal component, \(T_{\rm s}=27000-55000~{}{\rm K}\). We selected this range to represent the uncertainty. * **No. 22, PSR J2144-3933**: The upper bound is obtained for the surface temperature of J2144-3933 using XMM-Newton data (combining with the optical data of Very Large Telescope (VLT)) [161] as \(T_{\rm s}<2.3\times 10^{5}~{}{\rm K}\). The latest analysis [104] used deep optical and FUV observation data by HST and derived the upper bound on the surface temperature of J2144-3933. The conservative upper bound on the surface temperature is derived based on the non-detection, \[T_{\rm s}<4.2\times 10^{4}~{}{\rm K},\] (B.38) where a range of NS radius \(R_{\rm NS}=[11,13]\) km and parallax distance \(d=172^{+20}_{-15}\) pc is considered to estimate the uncertainty [160]. In this analysis, NS mass is fixed as \(M_{\rm NS}=1.4~{}M_{\odot}\). Let us also comment on the rejected observational data from our list. * **PSR B1929+10**: The BB+PL fit is performed for the X-ray data [162]. However, the magnetic pole component is reported to dominate the temperature because the fitted radiation radius is much smaller than the NS radius. We conclude this data is not appropriate to test the vortex creep heating. * **XMMU J1732-344**: We also omit XMMU J1732-344 from our list because the observed value of \(|\dot{\Omega}|\) is not determined. Once its pulsation data is fixed, it is worth studying whether the vortex creep heating can explain this data; its thermal emission is expected to exceed the value expected from minimal cooling [163, 164] with its kinetic time information [163].
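As a rough numerical check of the hot-spot argument used above for PSR J0108-1431 (a fitted blackbody radius of \(\sim 43\) m cannot be the cool stellar surface), one can compare the bolometric blackbody luminosity of the small cap with that of the full surface at the FUV upper bound. The sketch below is only a ballpark estimate, with no atmosphere, geometry, or redshift corrections:

```python
import math

SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
KB_EV = 8.617e-5      # Boltzmann constant [eV/K]

def bb_luminosity(radius_cm, temp_k):
    """Bolometric blackbody luminosity L = 4 pi R^2 sigma_SB T^4 [erg/s]."""
    return 4.0 * math.pi * radius_cm**2 * SIGMA_SB * temp_k**4

# Hot component fitted for PSR J0108-1431: kT ~ 110 eV over R ~ 43 m.
L_cap = bb_luminosity(43.0e2, 110.0 / KB_EV)
# Entire surface (R_NS = 11.43 km) at the FUV upper bound T_s < 5.9e4 K.
L_surface = bb_luminosity(11.43e5, 5.9e4)

print(f"L_cap     ~ {L_cap:.1e} erg/s")      # ~3e28 erg/s
print(f"L_surface ~ {L_surface:.1e} erg/s")  # ~1e28 erg/s
# Even the ~43 m cap outshines the whole surface at the FUV limit, so the X-ray
# blackbody component cannot be tracing the (much cooler) stellar surface.
```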
2308.09146
That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications
Augmented Reality (AR) is expected to become a pervasive component in enabling shared virtual experiences. In order to facilitate collaboration among multiple users, it is crucial for multi-user AR applications to establish a consensus on the "shared state" of the virtual world and its augmentations, through which they interact within augmented reality spaces. Current methods to create and access shared state collect sensor data from devices (e.g., camera images), process them, and integrate them into the shared state. However, this process introduces new vulnerabilities and opportunities for attacks. Maliciously writing false data to "poison" the shared state is a major concern for the security of the downstream victims that depend on it. Another type of vulnerability arises when reading the shared state; by providing false inputs, an attacker can view hologram augmentations at locations they are not allowed to access. In this work, we demonstrate a series of novel attacks on multiple AR frameworks with shared states, focusing on three publicly-accessible frameworks. We show that these frameworks, while using different underlying implementations, scopes, and mechanisms to read from and write to the shared state, have shared vulnerability to a unified threat model. Our evaluation of these state-of-art AR applications demonstrates reliable attacks both on updating and accessing shared state across the different systems. To defend against such threats, we discuss a number of potential mitigation strategies that can help enhance the security of multi-user AR applications.
Carter Slocum, Yicheng Zhang, Erfan Shayegani, Pedram Zaree, Nael Abu-Ghazaleh, Jiasi Chen
2023-08-17T18:33:23Z
http://arxiv.org/abs/2308.09146v2
# That Doesn't Go There: Attacks on Shared State in ###### Abstract Augmented Reality (AR) is expected to become a pervasive component in enabling shared virtual experiences. In order to facilitate collaboration among multiple users, it is crucial for multi-user AR applications to establish a consensus on the "shared state" of the virtual world and its augmentations, through which they interact within augmented reality spaces. Current methods to create and access shared state collect sensor data from devices (_e.g._, camera images), process them, and integrate them into the shared state. However, this process introduces new vulnerabilities and opportunities for attacks Maliciously _writing_ false data to "poison" the shared state is a major concern for the security of the downstream victims that depend on it. Another type of vulnerability arises when _reading_ the shared state; by providing false inputs, an attacker can view hologram augmentations at locations they are not allowed to access. In this work, we demonstrate a series of novel attacks on multiple AR frameworks with shared states, focusing on three publicly-accessible frameworks. We show that these frameworks, while using different underlying implementations, scopes, and mechanisms to read from and write to the shared state, have shared vulnerability to a unified threat model. Our evaluation of these state-of-art AR applications demonstrates reliable attacks both on updating and accessing shared state across the different systems. To defend against such threats, we discuss a number of potential mitigation strategies that can help enhance the security of multi-user AR applications. ## 1 Introduction AR technologies have made it possible to create a large variety of applications that involve using real-world data to create environments enriched with overlaid virtual holograms. These virtual holograms can take many forms, from face filters to virtual characters, and they are typically placed relative to some point in the real world, such as a table, face, or recognizable landmark. Although AR has been around for several decades [8], the recent ubiquity of mobile devices and the introduction of AR headsets [33] have made it possible for AR applications to start reaching the mass market [58]. AR sees use in entertainment, engineering, education, and more [6]. In the last few years, AR applications have started to allow multiple users to interact with the same AR holograms. For example, in 2019, Pokemon Go enabled users to view the same virtual creatures at the same time in some shared space using a "Buddy Adventure" System [36]. In order for these multi-user interactions to take place, some information about the state of the real world, such as nearby flat planes, landmarks, and virtual objects, must be sensed, processed, and shared between users to provide a shared frame of reference. We call this information, along with the hologram information, jointly the "shared state" of the AR application. Several multi-user AR systems with cloud-based AR state exist and are in use, including those by Google [13, 15] and Meta. Thus, a natural question arises after the rise of such systems: **What possible security threats exist for AR frameworks involving this shared state?** The attacks we focus on are related to one of the fundamental problems in AR: how to place a hologram accurately in the real world and get it to persist over time and across users. 
Successful manipulations of hologram locations could have serious impacts on both owners and users of the system. As the number of users and businesses relying on AR continues to increase, the incentives for attackers to manipulate the shared state to their advantage also increase. For example, suppose a construction company is using AR to place and visualize markings related to the environment, such as where a water pipe should be built or where to dig a hole. A vulnerable construction marker AR application could cause confusion, destruction of property, or danger to workers if an attacker's efforts result in a victim viewing demolition signs in incorrect areas. Similarly, suppose a restaurant has paid for AR advertisements on its real storefront. Swapping the paid-for AR advertisement with an incorrect one could result in a loss of revenue for the restaurant and a waste of ad dollars. Our first goal is to identify the threat models that affect the shared state. At a high level, interactions between users and shared state in AR can be thought of as Read and Write operations, which are provided by AR frameworks through an API. One can "Write" a new virtual hologram to the shared state and "Read" others' virtual holograms in order to render them on the device. Directly manipulating the shared state is not possible, as it is typically stored on a cloud/edge server under the control of the AR service. Thus, it is well-secured and would require physical or software exploitation not unique to AR. Instead, we explore whether there are exploits available to an attacker remotely to manipulate the shared state only using the basic Read/Write API calls that are available to users of these systems. Calls to the API typically involve associated location data consisting of one or more of the following: Global Positioning System (GPS) coordinates, camera images, and/or Inertial Measurement Unit (IMU) sensor data. This information allows the attacker to map the Read or Write operation to a location within the AR space. We seek to understand these threats and develop end-to-end attacks on available commercial systems to demonstrate how they work in detail in order to inform designers and develop mitigations. Specifically, we develop a range of attacks targeting either Write or Read operations across the Google Cloud Anchor API [13], Google's ARCore Geospatial API [15], and a publicly available service, Platform X. We show that both malicious Reads and Writes are possible across the three systems despite the substantial differences in how they perform these operations. We are able to Write information to different, potentially inaccessible, locations on the map, as well as falsify our own location to access information at potentially private or inaccessible locations. We demonstrate end-to-end attacks using these capabilities. In summary, we make the following contributions: * We create a taxonomy of existing commercial, publicly available AR frameworks with shared state, categorized by the geographic scale and update permissions, and identify their common vulnerabilities regarding their Read and Write operations. * We form a unified threat model that covers these current and prospective AR applications. * We demonstrate multiple AR-specific attacks on shared state in three AR frameworks, using real AR devices (smartphones), and document the results. To the best of our knowledge, these attacks are the **first** attacks of their kind on these frameworks.
* We repeat the attacks of these three scenarios in various environments (_e.g._, different locations, lighting, clutter) to demonstrate the attack's robustness. Disclosure.We have responsibly disclosed all of our findings to companies that provide these public AR frameworks. ## 2 Background In this section, we first introduce the background of shared state in AR (Section 2.1). We then describe the current landscape of shared state in commercial AR systems (Section 2.1). Finally, we define the general threat model (Section 2.3). ### Shared State in Augmented Reality To facilitate interactions between multiple users in AR, a mutually agreed-upon model of the reality to augment, and the augmentations within it, is needed between users [40, 57]. Ideally, this model should be consistent across devices and thus is typically stored in the cloud, providing a central access point. In such a model, multiple users interact with the shared augmentations, such as in a remote meeting app (_e.g._, MectinVR [31]). They also fuse spatial information about the real environment, using sensor data to construct an immersive experience for the users. We call this shared model of reality the _shared state_. Throughout the paper, we use the terms "augmentation" and "hologram" interchangeably. The shared state commonly contains a "map" of 3D points (an example is shown on the right side of Fig. 1). The points in this map are features extracted from images (_e.g._, SIFT [35] and ORB [45]). Each feature contains an estimate of its 3D position and a descriptor of its visual neighborhood for use in finding and correlating the same feature in other images. To give holograms the appearance of blending in with the real world, virtual objects are also described by their 3D coordinates. Thus, the AR shared state is the map of visual features combined with the augmentations placed on the map. Fig. 1 shows the processing pipeline of an AR device accessing the shared state in the cloud, starting from sensing the environment, processing the sensor data to extract features, and communicating with the shared state to receive holograms and finally rendering them onto the display. Communication with the Shared State.For a user to view or place shared holograms, communication with the shared state is needed. Abstractly, we can think of viewing or placing the shared holograms as Read and Write operations against the shared state, respectively, using key-value pairs. The _key_ is some piece of information relating to the user's physical Figure 1: AR processing pipeline. An AR device senses the environment, processes the sensed data, and uploads information to the shared state. The shared state returns an augmentation, which is overlaid onto the user’s display. location that a user provides (details later), and the _value_ is the associated hologram's coordinates (and optionally its visual appearance). The cloud processes these key-value pairs and updates (or retrieves information from) the shared state accordingly. There are two operations for users to communicate with the shared state: _Read_ and _Write_, as follows. * **Read:** A user may Read the shared state to determine where she is on the map and render the appropriate holograms. For instance, a user may go to a park where virtual characters are located and upload an image _key_ of the park to the cloud, captured by the phone's camera, and receive back the _value_ of the hologram's coordinates, then render the virtual characters on display. 
* **Write:** Users may Write holograms at specific locations in the map in the shared state. For instance, a user may place virtual treasure for other users to find in the future as part of an AR scavenger hunt by uploading a _key_ consisting of a short video sequence near the treasure and the associated GPS coordinates alongside a _value_ of the virtual treasure's coordinates. Keys consist of information used to identify locations within the shared state. Keys are usually derived from three main types of sensors commonly used in AR applications: GPS, camera, and Inertial Measurement Unit (IMU). GPS data provides information about the user's geographical location and typically consists of numerical values representing latitude, longitude, altitude, and time. Camera data in AR applications can take the form of video or a sequence of timestamped images. IMU data refers to the measurements collected by sensors such as accelerometers, gyroscopes, and magnetometers. This data provides information about the device's orientation, acceleration, and rotation. The IMU may not be strictly necessary for these applications to work but is often included to assist in speed and accuracy [24]. ### AR Shared State Taxonomy We studied the current landscape of multi-player AR. We found three major examples of shared state: Cloud Anchor [13], Geospatial Anchor [15], and a location-based image-sharing platform (Platform X), which we primarily focus on in this work. Cloud Anchor and Geospatial Anchor are part of Google ARCore, which is Google's AR Software Development Kit (SDK) for Android devices. Platform X is a crowd-sourced mapping service. These frameworks abstract away low-level details so that developers can more easily build AR applications on top, so vulnerabilities in the underlying frameworks will affect many AR applications. The design of these frameworks can be dissected along two dimensions: global/local and curated/non-curated, as summarized in Table 1. Next, we describe each of these dimensions. Global vs. Local Shared State.AR applications can run in local or global geographic areas; for example, a treasure hunt may take place locally within a building, while Pokemon Go takes place globally. Consequently, they can have larger or smaller maps in their shared state, respectively, which we categorize as a local or global shared state. AR frameworks with the global shared state tend to utilize GPS coordinates plus camera images as the key to Write into the shared state. Specifically, each writer uploads local images tagged with GPS coordinates to the shared state, where the cloud merges all data to create a global shared state. Users seeking to Read from the shared state may use a combination of GPS, camera, and, optionally, IMU data as a key into the database. Global shared states tend to be persistent because they have no clear expiry time, typically persisting for years. AR frameworks with local shared states are typically smaller in geographic scope and lack global positioning (GPS). The key typically consists of just camera images and optional IMU, without GPS. Local shared states tend to be ephemeral in that they have a configurable lifetime, typically of less than one year [14]. Curated vs. Non-curated Shared State.The maps contained in the shared state can be either curated or non-curated. Curated maps are constructed by "high trust" users or "curators". These curators have elevated Write permissions to the shared state and usually have the incentive to avoid malicious behavior. 
Most commonly, these curators are paid employees, contract workers, or trusted research groups. An example is the Street View Car [17], where company employees drive a car around and capture camera images to upload to the cloud, which processes them and inserts them into the shared state's map. Non-curators can still Read the curated shared state but cannot otherwise manipulate it. AR frameworks with non-curated shared states allow all users to Read and Write to the map in the shared state. These users are low trust but come with the advantage of increased numbers, allowing rapid construction and updating of the shared state compared to curators. An example is Platform X's crowd-sourced street mapping model, where public users can upload camera images to the cloud, which processes them and inserts them into the map. The Write permissions for these shared state _maps_ and the shared state _holograms_ may be separate. For example, a user may be able to add a virtual character to a shared state but not be able to add a new map area of visual features to the shared state. \begin{table} \begin{tabular}{|p{56.9pt}|p{142.3pt}||p{142.3pt}|} \cline{2-3} \multicolumn{1}{c|}{} & **Non-curated** & **Curated** \\ \hline \hline \multirow{3}{*}{**Local**} & **Scenario A: Cloud Anchor** & \multirow{3}{*}{**Commercial scenario not found.**} \\ & Keys: camera, IMU & \\ & Attacks: Read, Write & \\ \hline \multirow{3}{*}{**Global**} & **Scenario C: Platform X** & **Scenario B: Geospatial Anchor** \\ & Keys: camera, IMU, GPS & Keys: camera, IMU, GPS \\ & Attacks: Write & Attacks: Read \\ \hline \end{tabular} \end{table} Table 1: Taxonomy of AR shared states. For our purposes, a curator has permission to Write both shared state map and hologram data to the shared state, while a non-curator can only Read map data from the shared state but may be able to both Read and Write hologram data. In the future, applications that use more granular permissions may become more common [10]. ### Threat model We assume that an attacker engages in AR experiences with shared states using an unmodified AR application. The attacker does not require specialized permissions, and they possess the same Read/Write permissions as normal users. The primary objective of the attacker is to compromise the integrity or confidentiality of the multi-AR shared state. We identify two types of attacks within this context, as depicted in Fig. 2: _(a) Read attack_ and _(b) Write attack_. Read Attack. Such an attack focuses on extracting sensitive information stored within the shared state created by other users. In this attack, a victim user has created a hologram containing sensitive data, which is only supposed to be viewable from her private office, and uploaded it to the shared state. Thus, in Fig. 2(a), the shared state contains the {key=office image, value=confidential document} entry. The objective of the attacker is to retrieve and access this private document, thereby breaching confidentiality, by providing a forged {key=office image} not taken from their camera sensor to retrieve the associated value (the private hologram). Write Attack. The attacker aims to manipulate the shared state in order to deceive subsequent victim AR users.
Specifically, the attacker creates and uploads manipulated images or falsified sensor readings as keys and uses them to add holograms to the shared state at a location where they are not physically present. Thus, in Fig. 2(b), the shared state contains the {key=pipe image, value="dig safe" sign} entry. Subsequently, when victims attempt to Read from the shared state, they may encounter misleading or false information, leading to inaccurate perceptions or actions within the AR environment. For example, in Fig. 2(b), the victim uses a legitimate {key=pipe image} and retrieves a hologram telling her it is safe to dig there. Moreover, with companies now making efforts to combine their maps (_e.g._, Overture Maps Foundation [37] includes Amazon, Meta, and Microsoft as contributors), poisoned Writes to one shared state could potentially propagate to other shared states. The fundamental issue with the shared state that enables these attacks is that the ingest pipelines of these AR frameworks accept most keys as inputs. They do not have a way of verifying that users are uploading legitimate information consistent with the key they provide. Furthermore, even if the attacker fails to generate perfect keys that look exactly like legitimate inputs, the shared state still accepts them because it attributes their imperfections to noise. We speculate that these weaknesses are due to the nascent nature of multi-user AR frameworks; because AR frameworks want to encourage user participation, they favor functionality and lowering barriers to participation over security. Attacker's Goal in Each Scenario. As various multi-AR platforms rely on different combinations of sensor inputs to generate these keys, our investigation focuses on three attack scenarios outlined in Table 1. In Scenario A, the attacker's goal is to perform both Read and Write attacks on the shared state: to Read or Write holograms at locations where they are not physically present and, by doing so, deceive other users with false or manipulated information. Since AR applications in such a scenario run in local areas only, the attacker only needs camera and IMU data as keys to Read or Write from the shared state, and not any global information (GPS), making this attack easier to realize. In Scenario B, the attacker's goal is to perform a Read attack only. She attempts to Read a hologram from a location where the hologram does not exist, effectively lying about her location and reaping the benefits. In addition to the camera and IMU data needed as keys in Scenario A, the global nature of this scenario requires the attacker to understand the global position of the hologram she wishes to Read, necessitating GPS data in the key. We do not investigate Write attacks in Scenario B due to the curated nature of the shared state. In other words, since the threat model assumes the attacker is an ordinary user, only Read attacks can be performed on a curated shared state with the appropriate key. These keys are used by all users freely with no need for special permissions. Figure 2: Attacks on AR shared state. In the Read attack, a hologram is Read outside (beach) of the local area (office) it was Written to. In the Write attack, a hologram is written into a local area (construction) where the attacker is not present (park). Finally, the Scenario C attacker Writes holograms to false locations. This would allow an attacker to manipulate holograms that other users view, potentially leading to sabotage and safety issues.
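To make the Read/Write abstraction shared by all three scenarios concrete, the toy sketch below models the shared state as a key-value store that matches keys only by their similarity to previously stored sensor-derived data. The class, feature names, and threshold are purely illustrative and do not correspond to any of the studied frameworks; the point is simply that a similarity-based ingest pipeline cannot distinguish a forged key (e.g., built from a photograph and a spoofed GPS fix) from a genuine one.

```python
# Hypothetical toy model of a shared-state Read/Write interface (illustrative only).
from dataclasses import dataclass

@dataclass
class Entry:
    gps: tuple       # (lat, lon) claimed by the writer; this toy model never verifies it
    features: set    # visual features extracted from the writer's camera images
    hologram: str    # the augmentation stored as the "value"

class SharedState:
    def __init__(self, min_overlap=0.6):
        self.entries = []
        self.min_overlap = min_overlap  # fraction of stored features a key must match

    def write(self, gps, features, hologram):
        self.entries.append(Entry(gps, set(features), hologram))

    def read(self, gps, features):
        for e in self.entries:
            overlap = len(e.features & set(features)) / max(len(e.features), 1)
            if overlap >= self.min_overlap:  # no check that the key came from a real sensor
                return e.hologram
        return None

state = SharedState()
# Victim writes a private hologram keyed on features of her office scene.
office = {"desk_corner", "yellow_sign", "window_edge", "shelf", "door"}
state.write(gps=(40.0, -74.0), features=office, hologram="confidential note")

# Attacker, far away, extracts nearly identical features from a photo of the office.
photo = {"desk_corner", "yellow_sign", "window_edge", "shelf"}
print(state.read(gps=(34.0, -118.0), features=photo))   # -> "confidential note"
```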
The attacker's Writes are uniquely enabled by the non-curated (crowd-sourced) nature of the shared state in this scenario. Again, special attention must be paid to the global positions of the holograms and map data for successful attacks due to this scenario's global scale. We do not investigate Read attacks in Scenario C because this API does not yet exist in the commercial framework we studied. We did not find any current examples of an AR framework that provide a local and curated shared state (upper right box in Table 1). We briefly speculate on what this might look like. Such a shared state could be created by a local administrator that curates the map and holograms in small areas, such as a university campus. For example, a university IT department could create a scavenger hunt app by pre-scanning the entire campus offline to create the map and have students place and find virtual holograms within the curated map. Related frameworks exist in the research domain [9]. ## 3 Scenario A: Local, Non-Curated Shared State (Cloud Anchor) In this section, we focus on attacks on AR frameworks with local and non-curated shared states. In particular, we focus on the Cloud Anchor API [13], which allows users to share experiences within a single app. Using an app that integrates this API, a user (User A) can Write a hologram to a specific location within their environment, such as the surface of a desk. Another user (User B), who has been granted access credentials to the app by User A, can then Read the hologram from the shared state and view and interact with it in the same physical space. This functionality enables multiple users to collaborate and engage in shared AR experiences, providing a platform for interactive and collaborative virtual content within a real-world setting. We identified an attack vector related to this functionality. In the following subsection, we will explain the attack and its implications. ### Methodology The normal process of Writing a hologram to the shared state involves User A pointing her AR device at the desired location of the hologram (_e.g._, a desk) and moving around it to capture the required keys (camera images and IMU readings), which are uploaded to the shared state along with the hologram. If User B wants to Read this hologram uploaded by the previous user, she points her device at the same location, captures a key, and sends it to the shared state. If the key matches an entry in the shared state, the corresponding value (hologram uploaded by User A) is retrieved from the cloud, and User B can view it. If there is no matching key, the Cloud Anchor API rejects User B's Read request. Normally, successful Reads and Writes require the user to be physically present in the environment where the hologram was placed in order to generate the correct corresponding key. However, our attack disrupts this workflow and demonstrates that attackers can remotely launch Read and Write attacks. Specifically, we show that the attacker can perform these actions using only an image of the environment (_e.g._, printed on a photograph or displayed on a computer monitor). By pointing the camera at the image, the attacker deceives the Cloud Anchor API into believing that it is physically located in the environment, even though it is not actually present. We describe three sub-types of this general attack next. Remote Read Attack.In this type of attack, an attacker can Read a hologram from a remote location, different from where a victim originally placed the hologram. 
For instance, notes, passwords, or even sound files that are personal holograms of the victim can be Read by an attacker. We assume the attacker has prior knowledge of the victim's physical location. For instance, the attacker may have a chance to view an image of the victim's office. The attacker's methodology is simple yet effective: she prints physical photographs or displays virtual images of the location where a hologram is placed and moves the AR device around to view the photograph/display from slightly different angles. This generates the necessary key (camera images and IMU readings) to retrieve the hologram from the shared state. Both the camera images and IMU readings (orientation of the device) must reasonably match the key previously stored in the shared state by the victim. The attacker's Read request may fail if her view differs significantly (_e.g._, zoomed out) from where the victim originally Wrote the holograms, or if the IMU readings differ (_e.g._, the victim Wrote the hologram while the device was in landscape mode, but the attacker tried to Read the holograms from portrait mode). Fig. 3 shows an example of such an attack ("Resolve success!" means the attack succeeds). The hologram (a colorful 3D axis) is initially placed in front of the yellow sign by a victim. Later, an attacker with a photograph of the yellow sign can view the hologram, despite being at a different location and nowhere near the yellow sign. Figure 3: Remote Read Attack in Scenario A. _Left:_ A victim places a hologram in front of a yellow sign. _Right:_ An attacker is able to view the hologram from a photograph without being physically near the yellow sign. Remote Write Attack. In this type of attack, an attacker can Write AR holograms in places that she is not authorized to access or contribute to, such as holy sites, museums, private spaces, kindergartens, and more. This situation becomes even more concerning if the written AR holograms contain inappropriate material, such as racist, extremist, pornographic, or disturbing content. The attacker's methodology is similar to the Remote Read Attack, with the additional step that after viewing the photograph/display, the attacker also indicates (by interacting with the AR device) where within the photograph/display the hologram should be placed. However, a significant challenge was that the key the attacker needs to generate a Write request is more detailed than that needed for a Read request; more camera images of the scene need to be captured from different angles to generate a successful Write request (while looking at a single image on the display). We successfully tackled this challenge by carefully maneuvering the camera, prioritizing the capture of the image displayed on the monitor while minimizing the inclusion of the surrounding environment. In Fig. 4, we showcase an example of this attack. The attacker displays an image of a desk on a computer monitor and places (Writes) the hologram onto the desk in the shared state. Consequently, when the victim goes to that physical location and views the desk through her AR device, her device retrieves the hologram that the attacker maliciously Wrote. Note that the key of the attacker's Write request did not have to exactly match the key of the victim's Read request, as illustrated by the differences in the camera images of the attacker and victim (compare the left and right sides of Fig. 4); for example, the attacker had extra features such as the keyboard and the monitor's border in view.
Figure 4: Remote Write Attack in Scenario A. _Left_: An attacker is able to Write a hologram at a real-world location (a desk) without being physically present. _Right:_ A victim views the unexpected hologram on the desk. While our example here is benign, such an attack could be used, for example, to insert offensive AR holograms into unexpected locations. The attack was still successful because the shared state matches keys that are not entirely identical in order to allow for perfectly legitimate scenarios, such as two users viewing the same scene with minor changes. Triggered Remote Write Attack. This attack can be treated as an advanced type of Write attack, but it is more stealthy. Specifically, we assume that an attacker not only has the ability to execute a successful Remote Write Attack and poison the shared state, but also has the ability to manipulate the victim's environment with pre-determined triggered features. This allows the attacker to exert control over the timing and extent of the attack, targeting specific individuals or groups. For instance, consider a scenario where a TV is present in the environment. The attacker can strategically turn on the TV and display the trigger on the screen when a specific person enters. This greatly increases the probability of the victim's successful Read and display of the attacker's hologram, leading to potentially severe consequences as desired by the attacker. Figure 5 illustrates this, where the attacker initially Writes a hologram remotely with the triggered features (left side of Fig. 5). Subsequently, if the attacker places the same triggered features at the victim's physical location (right side of Fig. 5), the victim will Read the hologram placed by the attacker from the shared state. Ideally, if the triggered features are not added by the adversary, the attack remains benign in most cases, and the victim will be unaware that their private location has been manipulated by the attacker. We found that when victims attempt to Read from the poisoned shared state, the attack is less likely to succeed if those triggered features are not present, with a success rate of around 50%. This success rate is comparable to the results achieved by Ji et al. [22] without adversarial-patch triggers. However, if an adversary adds the same triggered features to the victim's environment, the victim will Read the attacker's hologram from the shared state, with a much higher success rate of over 90%. Figure 5: Triggered remote Write attack in Scenario A. _Left:_ An attacker employs triggered features to remotely Write a hologram at a real-world location without being physically present. _Right:_ A victim encounters an unexpected hologram on their desk, which has been triggered by features injected by the attacker. ### Evaluation In this section, we evaluate the attacker's success rate in the Remote Read, Remote Write, and Triggered Remote Write Attacks in different environments, including investigating the impact of clutter, lighting, and indoor vs. outdoor environments. #### 3.2.1 Remote Read Evaluation Experiment Setup. We execute the remote Read attack in six different environments, as shown in Table 2. These environments include a range of backgrounds, including an office, personal home, and several outdoor locations, with about half being indoor environments and the other half outdoor.
All of the experiments were done with a Samsung Galaxy S20 Android phone, and an Apple MacBook Pro served as the monitor to display the environment images. The success rate was used as the evaluation metric. It was defined as the fraction of trials in which we were able to successfully Read the written hologram remotely. We also evaluated the success rate under two conditions: _Static scene_ and _Add clutter_. The motivation for studying these two conditions is to simulate the case where the attacker does not have perfect information about the victim's environment or the environment has changed in the interim. In the _Static scene_ condition, the victim's true environment closely resembles the attacker's image of the environment. The _Add clutter_ condition involves environments that have new objects or alterations in the attacker's image compared to the victim's original environment. It is a more challenging condition because there are additional features during the attacker's Read process which were not present during the victim's Write, _i.e._, the Read key may not exactly match the Write key, so the attacker's Read may fail. The results of these evaluations provide insights into the effectiveness of the remote Read attack under different conditions. \begin{table} \begin{tabular}{|c|c||c|} \hline \multirow{2}{*}{**Environment**} & \multicolumn{2}{c|}{**Attack success rate**} \\ \cline{2-3} & Static scene & Add clutter \\ \hline \hline Office desk & 13/16 & 10/16 \\ \hline Bedroom desk & 12/16 & 7/16 \\ \hline Bedroom bed & 14/16 & 7/16 \\ \hline Outdoor garden & 5/16 & 2/16 \\ \hline Outdoor BBQ & 16/16 & 15/16 \\ \hline Outdoor pool & 16/16 & 15/16 \\ \hline \end{tabular} \end{table} Table 2: Success rates of Remote Read Attacks in _Static scene_ and _Add clutter_ conditions. Attacks succeed often and perform better in a _Static scene_ compared to the _Add clutter_ condition. Results. As Table 2 shows, the success rate of the attack is generally good, with the attack succeeding about half the time or more, on average, across all of the environments we experimented in. This makes sense because, according to our observations, the critical phase of shared state communications is usually the Write process. The better the quality of the key uploaded by the victim during her Write request, the easier it is for subsequent users (including the attacker) to Read successfully. In other words, because the Write was performed in the real environment (not from a photograph) by the victim, there are many 3D features extracted from the victim's camera images and inserted into the map in the shared state. This creates a larger attack surface because there are many possible matching keys (different angles of the scene) that an attacker could use to successfully launch a remote Read. #### 3.2.2 Remote Write Evaluation Experiment Setup. The setup is similar to the Remote Read Attack (Section 3.2.1). A slight difference is that the _Add clutter_ condition refers to environments where there are additional objects or changes in the victim's real environment (during the Read) compared to the attacker's image (used to do the poisoned Write). We also informally experimented with an additional environment condition of lighting, conducting experiments in both brightly lit and dimmer versions
\begin{table} \begin{tabular}{|c|c||c|} \hline \multirow{2}{*}{**Environment**} & \multicolumn{2}{c|}{**Attack success rate**} \\ \cline{2-3} & Static scene & Add clutter \\ \hline \hline Office desk & 8/16 & 7/16 \\ \hline Bedroom desk & 6/16 & 4/16 \\ \hline Bedroom bed & 10/16 & 8/16 \\ \hline Outdoor garden & 1/16 & 0/16 \\ \hline Outdoor BBQ & 16/16 & 15/16 \\ \hline Outdoor pool & 15/16 & 14/16 \\ \hline \end{tabular} \end{table} Table 3: Success rates of Remote Write attacks in _Static scene_ and _Add clutter_ conditions. The overall success rate of Remote Write Attacks is slightly lower than Remote Read Attacks. The success rates decrease when the condition changes from _Static scene_ to _Add clutter_. of the same environment (_e_.\(g\)., by turning on/off a lamp or daytime/sunset). Results.Table 3 shows the success rates of the Remote Write Attack in different environments. As can be seen from the table, our attack reaches a good degree of success in both indoor and outdoor environments. The success rate is generally lower than the Remote Read attack (Section 3.2.1) because, as discussed earlier in Section 3.1, a Write request generally requires more camera images in its key, and for an attacker to capture these multiple camera images from different angles from a single photograph or image is challenging. Note that the "Outdoor garden" scene, as shown in Fig. 5(a), has a low success rate in the remote Write attack. This can be attributed to the limited number of planes present in the scene compared to other environments, such as the "Outdoor pool" scene depicted in Fig. 5(b). The Cloud Anchor API typically relies on an adequate number of planes to create a map in the shared state. However, it is important to emphasize that the low success in this environment affects both the remote Write attack and a legitimate Write process equally. Furthermore, it is important to mention that our attack exhibits a high level of robustness against environmental changes, including variations in lighting conditions and clutter. These factors have minimal impact on the success rate of our attack. Based on our experiments, we have observed that when the actual environment that the victim is in is significantly darker than what is shown in the attacker's image, such as during nighttime or when the lights are almost turned off, the success rate of the attack degrades by approximately 15-25%. This robustness property of our attack makes it even more dangerous since the attacker does not need perfect knowledge of the victim's environment. One thing to note is that clutter affects the success rate of the remote Read attack more compared to the remote Write attack. This is because, with the remote Read, there are two layers of noise during a single Read process - the photograph and the added clutter - which are compounded and thus decrease the attacker's success rate. Whereas with the remote Write, the layers of noise are separated (the photograph adds noise during the attacker's Write, and the added clutter adds noise during the victim's Read), and thus the impact on the success rate is less. #### 3.2.3 Triggered Remote Write Evaluation Experiment Setup.The setup is similar to the Remote Write experiment in the previous attack (Section 3.2.2), except that the attacker adds additional trigger features during Remote Write as depicted in Fig. 5. For our experiments specifically, we have used a simple piece of paper with some marks on it and a spinner on the paper placed near the image on the monitor. 
During the attacker's remote Write, we do our best to move the attacker's camera to capture features both from the image on the monitor and the additional trigger features. In addition to having the victim Read from the same environment as the attacker's Write, we also examined the false positive rate in two cases: (case 1A) whether the victim can view the hologram in a different environment containing the trigger; and (case 1B), whether the victim can view the hologram in the correct environment without the trigger present. Ideally, the false positive rate should be low in both cases. Results.Table 4 includes the results derived from the experiments. As the results suggest, there is a large boost in the success rate compared to the vanilla Remote Read attack results in Table 3. We examined two critical aspects of the triggered remote Write attack: the false positive rate in cases 1A and 1B. Fortunately, in case 1A, false positives never happen in our experiments; we believe this is probably because the trigger features that we used are very simple and constitute a relatively small fraction of features from the entire environment. In other words, they act as auxiliary features and are not sufficient alone for the victim to use them as a key to Read the hologram. In case 1B, the victim can still Read the hologram even if the trigger used by the attacker during the remote Write is absent from the scene. This aligns with our hypothesis that the trigger features serve as auxiliary features in the scene. As part of our future work, we are planning to study the effect of these triggered features and their interplay with features in the environment, optimizing them in such \begin{table} \begin{tabular}{|c|c||c|} \hline \multirow{2}{*}{**Environment**} & \multicolumn{2}{c|}{**Attack success rate**} \\ \cline{2-3} & Static scene & Add clutter \\ \hline \hline Office desk & 15/16 & 15/16 \\ \hline Bedroom desk & 13/16 & 12/16 \\ \hline Bedroom bed & 15/16 & 13/16 \\ \hline Outdoor garden & 3/16 & 1/16 \\ \hline Outdoor BBQ & 16/16 & 16/16 \\ \hline Outdoor pool & 16/16 & 16/16 \\ \hline \end{tabular} \end{table} Table 4: Success rates of Triggered Remote Write Attacks in _Static scene_ and _Add clutter_ conditions **with triggered features**. The success rates are nearly identical in the _Static scene_ and _Add clutter_ conditions. Figure 6: Examples of two outdoor scenes where we conducted our attacks. a way as to maximize the success rate of the attack while minimizing the probability of false positives. ## 4 Scenario B: Global, Curated Shared State (Geospatial Anchor) Built on more than 15 years of collecting public street images, Google introduced the Geospatial API in 2022 [15]. This API allows users to attach AR holograms to any location within Google Street View, creating a compelling AR experience on a global scale. This is an example of a global, curated shared state. In this section, we demonstrate a practical attack in which the attacker can remotely Read to steal a private hologram Written by the victim. For example, in a city-wide scavenger hunt, an attacker could cheat to collect the virtual treasure simply by trying images of the possible treasure locations, instead of physically visiting them. The attack is similar to those on the local, curated shared state discussed in Section 3, but the main difference is the addition of GPS as a key (along with camera images), which requires changes to the attack methodology. 
Also, the Geospatial API, being built on Google Street View, limits the Write and Read of holograms to outdoor environments. However, we have discovered that by manipulating GPS, camera, and IMU readings, we are able to deploy Remote Read Attacks indoors as well. ### Methodology The Geospatial API gives users the capability to seamlessly integrate holograms into their physical surroundings by leveraging spatial data obtained from Google's Visual Positioning System (VPS) [16], which is based on Street View images. Using computer vision algorithms on the camera images, the API facilitates the accurate determination of the device's location and orientation to locate and display the correct holograms, surpassing the localization capabilities of GPS alone. However, this technology also introduces potential security vulnerabilities that can be exploited by malicious actors. Remote Read Attack.By employing GPS spoofing applications, an attacker can remotely Read holograms by altering the GPS location of her device. Along with utilizing a GPS emulator, the attacker points her device's camera toward printed photographs or virtual images displayed on a monitor in order to generate a poisoned Read request to the shared state and view the hologram at the target location. These photographs/images could be sourced from public online platforms, such as Google Street View [1], or even real estate websites. Fig. 7 demonstrates the process, illustrating how the attacker successfully manipulates the device's GPS location using a GPS emulator and achieves the remote reading of holograms onto her AR display. ### Evaluation Experiment Setup.To begin with, we place 23 holograms at various locations within our university campus using Geospatial API. We selected these locations to encompass a range of environmental differences and varying light conditions. Subsequently, we capture photographs of the areas where the holograms were placed. We employ a GPS emulator application [43] to generate fake GPS locations on the Android phones utilized for testing. By manipulating the GPS coordinates and displaying an image of the target location, we aim to deceive the shared state into returning the associated holograms at those locations. We conducted the Remote Read Attack with the attacker's device located from [0.25, 0.5, 0.75, 1, 1.5, 2] meters away from the monitor. To assess the effectiveness of these attacks, we utilize the attack success rate as the primary metric. We define a successful attack when each Read operation can succeed in less than three trials. Our testing involved two Android phones, namely the Samsung Galaxy S8 and the Samsung Galaxy S21. The former was used by the victim to place the holograms, while the latter was used by the attacker to both capture the images (size 3024 x 4032 pixels) and conduct the Remote Read Attacks. The attack application was developed using Android Studio version 2022.2.1. Results.Fig. 8 shows our findings in terms of the attack success rate as a function of the attacker's distance. When the distance between the attacker's device and the monitor is too close, such as at 0.25 meters, the camera on the device may struggle to focus properly. This can result in blurred images, making conducting successful remote Read attacks Figure 7: Remote Read Attack in Scenario B. _Left:_ A legitimate user places a hologram in front of a building. _Right:_ The attacker can view the hologram with faked GPS using an image of the building. 
Note that the hologram is not displayed by the monitor, but rather by the attacker’s device as an overlay on top of the monitor. challenging. Notably, we achieved a 100% success rate for remote Read attacks conducted at a distance of 0.5 meters. This distance proves to be optimal for the camera on the device to focus properly, resulting in clear and discernible images. However, as the distance between the attacker's device and the monitor increases, the success rate of the remote Read attacks declines significantly. We speculate that several factors may influence this decline in success rate. Firstly, as the distance increases, the images displayed on the monitor become smaller, making the attacker's Read key significantly different from the victim's initial Write key. Similarly, when the device is positioned at a greater distance from the monitor, there is an increased likelihood of capturing unrelated objects in the field of view. This can significantly impact the success rate of the attack. ## 5 Scenario C: Global, Crowd-Sourced Shared State (Platform X) While the imagery needed for global AR exists in Geospatial Anchor, as discussed in the previous section, such services' shared states are curated, meaning that only trusted individuals (paid contractors) are able to gather and upload data to Write to the map. However, more recent services like Platform X allow users to both Read from the map and Write new data to expand and update it. Platform X is a non-curated service, which means that all users have the ability to Read and Write in the shared state. This includes the raw map data as well as holograms that are virtual representations of real objects (_e.g._, traffic signs, fire hydrants, and light poles). Non-curated applications that rely on GPS and camera images as keys, such as Platform X, introduce new attack vectors, as attackers with minimal permissions gain more capabilities. Towards this, in this section, we investigate attacks on global, crowd-sourced AR shared states, using Platform X as an example. We investigate two types of attacks: a poisoned Write to the map in the shared state (Section 5.1), and a poisoned Write that creates false holograms in the shared state (Section 5.2). All experiments conducted in this section were carried out with permission from Platform X. The experiments were conducted within a designated geo-fenced area, which was specifically created for the purpose of these experiments. This ensured that our attacker's poisoned Write to the shared state was segregated from the data Written and Read by regular Platform X users. ### Poisoned Write to the Shared State's Map #### 5.1.1 Methodology The high-level idea is for the attacker to poison the GPS part of the key associated with the Write request while keeping the hologram part of the key legitimate. Specifically, the attacker obtains image sequence A and image sequence B from two locations, A and B. Normally, these locations should be associated with holograms A and B, respectively. It swaps their GPS coordinates and makes two Write requests: one with { key=image sequence A + GPS B, value=hologram B}, and another with {key=image sequence B + GPS A, value=hologram A}. Thus the hologram at location B becomes associated with image sequence A, and vice versa. Because of this, a victim who later Reads with {key=image sequence A} will receive a response from the shared state with {value=hologram B}, and view the wrong hologram at location A. 
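Because the GPS portion of each key is carried as plain-text EXIF metadata inside the uploaded images (see the Write-attack mechanics below), the swap itself takes only a few lines of Python. The sketch below is a minimal illustration using the piexif library with hypothetical file names; it is not the exact tooling used in our experiments.

```python
import piexif

def swap_gps_and_time(path_a: str, path_b: str) -> None:
    """Swap the GPS and capture-time EXIF tags of two JPEG files in place."""
    exif_a = piexif.load(path_a)
    exif_b = piexif.load(path_b)

    # The entire GPS IFD (latitude, longitude, altitude, ...) is swapped.
    exif_a["GPS"], exif_b["GPS"] = exif_b["GPS"], exif_a["GPS"]

    # Swap the original capture timestamps as well, so each image remains
    # internally consistent with the location it now claims.
    tag = piexif.ExifIFD.DateTimeOriginal
    a_time = exif_a["Exif"].get(tag)
    b_time = exif_b["Exif"].get(tag)
    if a_time and b_time:
        exif_a["Exif"][tag], exif_b["Exif"][tag] = b_time, a_time

    piexif.insert(piexif.dump(exif_a), path_a)
    piexif.insert(piexif.dump(exif_b), path_b)

# Hypothetical usage: one image from location A, one from location B.
swap_gps_and_time("sequence_a/img_001.jpg", "sequence_b/img_001.jpg")
```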
Note that while the mechanics of this attack are similar to the global, non-curated attack (Section 4) in that the GPS part of the key is modified, here we focus on Write attacks rather than Read attacks, which means that we need to carefully craft the spoofed GPS (by swapping) during the Write request, in order to cause adverse effects to downstream victim users who Read the poisoned shared state. Next, we describe the detailed attack mechanics in terms of Writes and Reads. Write Attack Mechanics. In order to upload images to the shared state, a sequence of images, each with associated metadata (latitude, longitude, altitude, and time), is needed. Image sequences can be as short as three images but are often longer, stretching into the low hundreds of images. All of the metadata are contained in the image file in Exchangeable Image File (EXIF) [12] format. EXIF data are freely manipulable using scripts. An attacker can modify the metadata so that the image looks to have been captured at any arbitrary location and time (within reason, for example, the timestamps cannot be from the future as the shared state will reject them). An illustration of image sequences with modified metadata being successfully ingested by the shared state is shown in Fig. 9. In this particular example, two sequences of five images each, captured using an iPhone 12, were uploaded with swapped latitude, longitude, altitude, and time. The images were successfully uploaded using Platform X's desktop uploader utility. Platform X allows these sequences to be uploaded, processed, and displayed at the swapped locations for any user to view. Figure 8: Results of Remote Read Attacks at variant distances. Figure 9: Two image sequences with their GPS coordinates swapped are shown in the Platform X shared state map. Victim's Read Mechanics. One challenge that we faced is that Platform X is a closed source and does not currently have a public AR interface to experiment with, which is needed in our study to Read the shared state and demonstrate the impact on AR victims. To overcome this, we utilize an open-source computer vision library, OpenSFM [30], along with some of our own additions (written as Python scripts), to construct a simple AR viewer that replicates, to the best of our ability, how a legitimate user's Read request would be processed. Fig. 10 shows the flow of the AR application from an image sequence to the hologram. First, the initial maps in the shared state are generated using OpenSFM from the data in the attacker's Write request. Next, the spoofed GPS from the Write request is used to facilitate the processing of the maps in the shared state. These two steps are similar to how Platform X handles Write requests, to the best of our knowledge. Finally, a victim captures new images and uploads them to the shared state in a Read request, which is processed by the OpenSFM to return nearby holograms for rendering on the victim's display. This step is similar to how a commercial AR service would handle Read requests [40, 49]. We do not use GPS as a key during the Read process because OpenSFM does not support it, and other frameworks that combine image and GPS for Read are research prototypes only [7]. #### 5.1.2 Evaluation We repeated the Write attack a total of 8 times on Platform X's shared state for 15 total image sequences with false GPS data (one sequence was a duplicate, not a swap). These swaps occurred using imagery captured outdoors within 1 km\({}^{2}\) of the geo-fenced area.
The images were taken at different times of day, ranging from early morning to early evening, facing different directions, and at different locations (_e.g._, street imagery with roads, grass fields without roads). We verified through the Platform X web interface that all the attacker's Write requests with spoofed GPS data were successfully ingested by the Platform X pipeline, uploading, processing, and displaying the spoofed imagery. This shows that the fundamental Write attack mechanic works. The main reason this works is that while Platform X does check for basic undesirable content, it does not check whether crowd-sourced images indeed correspond to the claimed GPS locations (further discussion of mitigations is deferred to Section 6). With the Write attack mechanics validated, we turned our attention to showing the impact on a victim AR user who Reads the shared state. To evaluate this, we had the attacker Write two image sequences (a grass scene and a pipe scene), containing five images each, to the shared state. We reserved one additional image per sequence for use by the victim. The holograms (a "dig safe" sign and a "danger: underground gas line" sign) included in the Write request were associated with locations 5 meters in front of the first image in each sequence. After running through the pipeline in Fig. 10, the final display to the victim is shown in Fig. 11. The first row shows the AR display seen by a victim without our attack. The "dig safe" hologram is displayed in the grass field, and the "danger" hologram is displayed near the pipes, as intended. The bottom row shows that with our attack, the wrong hologram ("dig safe") is shown near an underground gas line, leading to serious safety concerns. These results demonstrate the success of the poisoned Write attack to a global, crowd-sourced AR shared state. ### Poisoned Write of Shared Holograms In addition to modifying the GPS part of the key in the Write request (Section 5.1), in this subsection, we discuss another vulnerability through modifications to the image sequence part of the key. Some AR shared states (namely, Platform X) perform object detection on the images uploaded by users to Figure 11: Effect of poisoned Write to the shared state’s map. Holograms are Read at the wrong locations; in this example, a “safe to dig” sign is placed next to an underground pipe. Figure 10: Details of poisoned write to the shared state’s map mechanics. We used an open-source OpenSFM library to demonstrate the attack with swapped GPS keys. their service. When an image sequence is uploaded, these detected objects are then added to the shared state map at the positions they were detected. This presents attackers with the opportunity to tamper with the images and induce fake holograms to the shared state. For example, the attacker could create a fake stop sign hologram overlaid onto an otherwise empty street, causing an AR navigation app to provide wrong directions to the user. #### 5.2.1 Methodology We used Photoshop to edit a sequence of images, manipulate them to add a stop sign and Write them to the shared state. Getting this attack to work was not straightforward, as the stop sign's size had to be proportionate to the user's distance from it, and the octagonal shape was preserved using transparency layers. Platform X's algorithm seems to ignore "clip-art" or cartoonish stop signs and needed photographs using realistic stop signs as part of the fake to be recognized. 
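These manual Photoshop edits could equally be scripted. The sketch below is a simplified illustration using Pillow, with hypothetical file names, that pastes an alpha-masked stop-sign photograph into a scene image at a size meant to roughly match an assumed viewing distance; it omits the per-image scaling and perspective adjustments we performed by hand.

```python
from PIL import Image

def paste_stop_sign(scene_path, sign_path, out_path,
                    position=(1200, 600), apparent_distance_m=10.0):
    """Overlay a transparent-background stop-sign image onto a scene."""
    scene = Image.open(scene_path).convert("RGBA")
    sign = Image.open(sign_path).convert("RGBA")

    # Crude size model: a ~0.75 m sign shrinks inversely with distance.
    # (Illustrative only; the real edits were tuned image by image.)
    pixels_per_metre_at_1m = 900.0
    size_px = max(8, int(0.75 * pixels_per_metre_at_1m / apparent_distance_m))
    sign = sign.resize((size_px, size_px))

    # Paste using the sign's own alpha channel as the mask, so the
    # octagonal outline is preserved against the street background.
    scene.paste(sign, position, mask=sign)
    scene.convert("RGB").save(out_path, quality=95)

for i in range(1, 4):   # the fake object must appear in several images
    paste_stop_sign(f"scene_{i:03d}.jpg", "stop_sign.png",
                    f"tampered_{i:03d}.jpg", apparent_distance_m=10.0 + 2 * i)
```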
For our testing, the fake object also had to be present in at least 3 images in order for Platform X to place it accurately, which requires multiple edited photographs of the stop sign and additional scaling. Without these tweaks, the Platform X pipeline rejected the Write request. #### 5.2.2 Evaluation Fig. 12 shows an example of a successful attack: the real-world ground truth and the tampered image, in which a photograph of a stop sign taken from public sources is cropped and overlaid on top of the scene. The small subfigure in the bottom left of Fig. 12 shows a screenshot of Platform X's web interface where the stop sign hologram is accepted into Platform X's shared state. We expect that attackers could also Write other false holograms into the shared state; any of Platform X's pre-defined object detection classes (_e.g._, traffic signs, lamp posts) could work. Figure 12: Poisoned Write to the shared state's holograms. A fake stop sign has been inserted into a sequence of images in order to fool the shared state's object detector, resulting in a fake stop sign hologram being added to the shared state. ## 6 Shared State Attack Mitigations The fundamental question at issue for these attacks is how to accurately establish the true location of an AR device. All of these attacks involve deceiving the shared state about the attacker's location to Read or Write data maliciously. We discuss multiple potential mitigation strategies related to this. Additional Sensor Modalities. With the increasing integration of multiple sensor modalities on AR devices, the vulnerability to shared state attacks can be mitigated by assessing the coherence between the shared state and other accessible sensor data. For instance, the Microsoft Hololens 2 incorporates not only Red-Green-Blue (RGB) cameras but also a depth camera [56]. As illustrated in Fig. 13, we can leverage the depth camera to detect the presence of a computer monitor or a photograph, which was key to launching the attacks in Scenarios A and B. Subsequently, a comparison (potentially automated) can be made with the output of the RGB cameras. Figure 13: Mitigation via depth sensors on Microsoft Hololens 2. Depth sensors show the screen as flat and lacking the detail of an image captured at the real location. Clean-Slate System Design. The most straightforward solution is to make sure that the core design of these applications uses more traditional security measures to prevent tampering. Non-curated shared states (Scenarios A and C) may be updated to be curated with a permissions system where only trusted users may perform Writes, turning a non-curated AR shared state into a curated one (similar to [10]). Still, for those applications where crowd-sourcing (non-curation) is desirable, a compromise involving a user reputation system based on past good behavior may prove sufficient. Even in non-curated applications, accepting only appropriately watermarked images as keys [28], _i.e._, images that have not been tampered with between sensing and upload, would prevent the false hologram attack in Scenario C. GPS information may also be encoded into these watermarks to make manipulation more difficult than vulnerable plain-text EXIF data. Real Space Security. Read attacks, as seen in Scenarios A and B, enable attackers to capture images from one location and reuse them to place holograms elsewhere. QR codes printed and placed into the real location can offer a form of locality assurance, particularly if those codes are changed regularly [26]. This method ensures that attackers who lack knowledge of the code configuration from outside the location will be unable to resolve holograms accurately. Additionally, as we found in Section 4.2, Read attacks achieve a low success rate as the distance to the displayed image grows. We can therefore require that users collect more images at different distances and angles to ensure the Read operation is performed in real space. Local Moderators. AR frameworks with a crowd-sourced (Scenario C) shared state may be considered a form of content hosting (where the content is the image keys and hologram values being uploaded). Hence, human moderators may be used to great effect, as in other successful applications like YouTube [44] and Facebook [39]. While one of the most powerful mitigation strategies, moderator teams are expensive and come with the additional hurdle of needing to be located close to the locations of the uploaded imagery to verify its veracity. Automated Computer Vision Techniques. Some attacks may be made more difficult through automated means. Checking for duplicate or manipulated images has a long history in computer security [63, 3] and may be deployed to great effect in global scenarios to catch sloppy attacks. AI-based depth segmentation [34] could also be used to check for photographs or screens used to present images as in Scenarios A and B. The planes that contain the attacker's image could be ignored, but this is expensive and untested. Wireless fingerprinting can be used to establish a user's rough device location [47, 38], either by cell tower or WiFi signal strength. ## 7 Related Work AR/VR Security and Privacy Overviews. AR and Virtual Reality (VR) have taken off in recent years, but research into potential security and privacy issues pre-existed popular adoption [42]. A recent overview [41] and literature review [11] broadly cover the existing issues. Literature covering human factors of multi-user AR also exists [26], which our work aligns with. Work on securing AR output in multi-user AR [46] is orthogonal in that it focuses on content sharing for holograms given their locations, whereas we study how these locations are determined. The global shared state scenarios also intersect with geospatial information services security, covered in [5]. AR Leakage Vectors. A plethora of prior research [25, 54, 55, 56, 2, 29, 50, 52, 54] has highlighted the issue of unauthorized acquisition of sensitive information from AR/VR devices, exploiting both _software_ and _physical_ leakage vectors. In a software-based approach, several studies [50, 32] demonstrate the feasibility of inferring the user's location by analyzing network traffic information. Other investigations [55, 60] showcase the extraction of sensitive data from VR network traces. More recent works [54, 29, 52] establish the ability to deduce keystrokes based on the user's head motions. Additionally, Zhang et al. [62] explore the utilization of rendering performance counters to execute side-channel attacks on AR/VR systems. In the realm of physical vectors, Arafat et al. [2] employ WiFi CSI side-channel information leakage to infer keystrokes. Furthermore, a cluster of studies [25, 27, 18] show that attackers can exploit vision and sensor-based side-channel leakages to exfiltrate sensitive information. However, none of these investigate attacks on shared state in multi-user AR as we do.
Computer Vision Attacks.AR uses computer vision techniques as part of its foundation, and thus attacks on computer vision systems can apply to the AR systems that depend on them, as shown in Section 5.2. Previous work on attacks on computer vision object detectors is wide-reaching, ranging from attacks on machine learning models [61, 19] to on-board vehicles [65]. While our work uses photographs, screens, manipulated images, and GPS to trick computer vision systems, attacks using additional hardware like lasers have also been explored [59]. Simultaneous Localization and Mapping (SLAM) attacks exist [51, 20] and also would impact AR systems like Cloud Anchor that depend on these techniques to function. Our work takes inspiration from these to show computer vision attacks can cascade into interesting behaviors in AR systems, rather than general object detectors or autonomous vehicles. Sensor Spoofing and Confusion.Our work uses GPS spoofing by simply altering the metadata which is stored in plain-text. While not necessary for our attacks, more sophisticated GPS spoofing that has existed for nearly as long as the technology has reached widespread use [53]. Tricking sensors such as the IMU was not done in this work but is possible with acoustic waves [48, 23] and can be used to assist in attacking computer vision systems that use such modalities to improve accuracy or stabilize camera imagery. AR/VR Threat Mitigation.Defenses against user-manipulated input data, such as image manipulation [64, 4], have become sophisticated in recent years. GPS spoofing mitigation [21] focuses on real-time mitigation, but frameworks like Platform X provide the ability to upload batched imagery at later times for user convenience, and thus potential mitigations may need additional investigation. The most effective mitigation is likely to come in the form of permissions systems like in [10], but these will require non-curated shared states to become curated. ## 8 Conclusions As AR become ubiquitous, there is growing need for additional research into security and privacy risks unique to AR, especially multi-user AR. This paper introduced and explored attacks on multiple shared state AR applications and frameworks. Specifically, we show that using GPS and camera images is insufficient to establish the location of an AR device or the objects that are visible to that device without additional steps to prevent tampering. We formed a threat model that applies to several different scenarios and demonstrated them on current off-the-shelf AR systems. We show that these attacks can be successfully performed in a variety of environments. Simple defenses like duplication detection and image manipulation detection could work, but in the future, further work on defenses such as map update policies or fraud detection is paramount.
2308.11709
Investigating the characteristic shape and scatter of intergalactic damping wings during reionization
Ly$\alpha$ damping wings in the spectra of bright objects at high redshift are a useful probe of the ionization state of the intergalactic medium during the reionization epoch. It has recently been noted that, despite the inhomogeneous nature of reionization, these damping wings have a characteristic shape which is a strong function of the volume-weighted average neutral hydrogen fraction of the intergalactic medium. We present here a closer examination of this finding using a simulation of patchy reionization from the Sherwood-Relics simulation suite. We show that the characteristic shape and scatter of the damping wings are determined by the average neutral hydrogen density along the line of sight, weighted by its contribution to the optical depth producing the damping wing. We find that there is a redshift dependence in the characteristic shape due to the expansion of the Universe. Finally, we show that it is possible to differentiate between the shapes of damping wings in galaxies and young (or faint) quasars at different points in the reionization history at large velocity offsets from the point where the transmission first reaches zero.
Laura C. Keating, Ewald Puchwein, James S. Bolton, Martin G. Haehnelt, Girish Kulkarni
2023-08-22T18:00:04Z
http://arxiv.org/abs/2308.11709v1
Investigating the characteristic shape and scatter of intergalactic damping wings during reionization ###### Abstract Ly\(\alpha\) damping wings in the spectra of bright objects at high redshift are a useful probe of the ionization state of the intergalactic medium during the reionization epoch. It has recently been noted that, despite the inhomogeneous nature of reionization, these damping wings have a characteristic shape which is a strong function of the volume-weighted average neutral hydrogen fraction of the intergalactic medium. We present here a closer examination of this finding using a simulation of patchy reionization from the Sherwood-Relics simulation suite. We show that the characteristic shape and scatter of the damping wings are determined by the average neutral hydrogen density along the line of sight, weighted by its contribution to the optical depth producing the damping wing. We find that there is a redshift dependence in the characteristic shape due to the expansion of the Universe. Finally, we show that it is possible to differentiate between the shapes of damping wings in galaxies and young (or faint) quasars at different points in the reionization history at large velocity offsets from the point where the transmission first reaches zero. keywords: dark ages, reionisation, first stars - galaxies: high-redshift - intergalactic medium - methods: numerical ## 1 Introduction Lyman-\(\alpha\) (Ly\(\alpha\)) scattering redward of rest-frame Ly\(\alpha\) in the spectra of luminous objects is a clear indicator of the presence of neutral hydrogen. In the reionization epoch, even the neutral regions of the diffuse intergalactic medium (IGM) can cause such a "damping wing" (Miralda-Escude, 1998). Observations of damping wings in the spectra of \(z>7\) quasars indicate that the reionization of the Universe is in progress at these redshifts (Fan et al., 2022). IGM damping wings have also been observed in the spectra of high-redshift gamma-ray burst afterglows (Hartoog et al., 2015) and galaxies (Umeda et al., 2023). However, translating the strength of observed damping wings to a constraint on the ionization state of the IGM is non-trivial, in part due to the inhomogeneity of reionization (Mesinger and Furlanetto, 2008; McQuinn et al., 2008). The typical approach is to marginalise over the position of the quasar host halo within the large-scale morphology of the ionization field (Greig et al., 2017; Davies et al., 2018). Recently, another approach was proposed by Chen (2023) using mock IGM damping wings generated from the CROC cosmological radiative transfer simulations of reionization (Gnedin, 2014). In that work, the simulated damping wings were shifted along the wavelength axis, such that the IGM transmission \(T\) along all lines of sight reached zero at the same point. Despite the spatially inhomogeneous distribution of neutral gas, these realigned damping wings were shown to have a characteristic shape, which was a strong function of the IGM volume-weighted average neutral fraction \(\langle\rm{r_{HI}}\rangle_{\rm{V}}\). Applying the same technique to observations may simplify analyses of damping wings in the spectra of high-redshift objects, with Chen (2023) predicting that measurements of the volume-weighted average IGM neutral fraction of the order of \(\Delta\langle\rm{r_{HI}}\rangle_{\rm{V}}\sim 0.1\) could be possible. 
In this Letter, we investigate this characteristic shape of intergalactic damping wings in more detail using one of the Sherwood-Relics simulations of inhomogeneous reionization. We demonstrate why this characteristic shape arises. We also explain the origin and quantify the size of the scatter among the realigned transmission curves at a given volume-weighted average IGM neutral fraction. In Section 2, we describe this simulation and outline how we produce the mock IGM damping wings. In Section 3, we discuss the origin of this characteristic shape and its scatter and, in Section 4, we make predictions for the observability of this signal in mock observations of damping wings in galaxies and quasars. Finally, in Section 5, we present our conclusions. We assume throughout this work that \(\Omega_{\rm{m}}=0.308\), \(\Omega_{\Lambda}=0.692\) and \(h=0.678\)(Planck Collaboration et al., 2014). ## 2 Mock Intergalactic Damping WINGS We analyse a simulation of inhomogeneous reionization from the Sherwood-Relics simulation suite1(Bolton et al., 2017; Puchwein et al., 2023). This simulation was performed with a modified version of p-gadget-3(Springel, 2005). It has a box size of \(40\ h^{-1}\) cMpc and a gas particle mass of \(M_{\rm gas}=9.97\times 10^{4}\ h^{-1}\ M_{\odot}\). Patchy reionization was treated with a novel hybrid technique described in detail in Puchwein et al. (2023). In brief, a cosmological hydrodynamic simulation was performed and then post-processed with the radiative transfer code atom(Aubert & Teyssier, 2008, 2010). Maps of the spatially varying UV background from the radiative transfer simulation at finely spaced redshift intervals were used as input to a new cosmological hydrodynamic simulation, which then accounts for spatial fluctuations in the ionization state of the gas and its hydrodynamical response due to inhomogeneous reionization. Mock intergalactic damping wings were generated as described in Keating et al. (2023). Lines of sight were extracted in six orthogonal directions through the 100 most massive haloes in each snapshot, for a total of 600 sightlines with length \(20\ h^{-1}\) cMpc. Additional lines of sight drawn along random directions through the simulation were spliced together until each sightline had a total length of \(220\ h^{-1}\) cMpc. We selected haloes from four snapshots with volume-weighted average neutral fractions \(\langle x_{\rm HI}\rangle_{\rm v}=(0.29,\,0.47,\,0.71,\,0.91)\), i.e., separated by \(\Delta\langle x_{\rm HI}\rangle_{\rm v}\sim 0.2\). This choice was made due to the simulation snapshots available to us, which were taken at fixed redshifts rather than specific volume-weighted average neutral fractions. As will be discussed in Section 3, the characteristic shape of these damping wings evolves with redshift. Unless otherwise specified, all the mock damping wings were generated from lines of sight with densities, distances and velocities rescaled to \(z=6.54\) as in Chen (2023). The \({\rm Ly}\alpha\) absorption was computed with contributions of gas in the foreground of the halo using the analytic approximation to the Voigt profile presented in Tepper-Garcia (2006). For most of this Letter, we only include the contribution of gas with \(x_{\rm HI}>0.5\) when computing the \({\rm Ly}\alpha\) absorption, as in Chen (2023). We are therefore neglecting any contribution from residual neutral hydrogen inside ionized bubbles. This assumption will be relaxed in Section 4. 
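For reference, the realignment and stacking applied to these transmission curves can be written compactly. The following NumPy sketch is our own illustration, assuming a uniform velocity grid, curves ordered by increasing velocity offset and an alignment threshold of \(T=0\) as in Section 3; it is not the actual analysis pipeline used here.

```python
import numpy as np

def realign_and_stack(velocity, transmission, threshold=0.0):
    """Shift each sightline so that the highest-velocity pixel with
    T <= threshold (the red edge of the saturated absorption) sits at v = 0,
    then return the median curve and the 16th-84th percentile scatter.

    velocity     : 1D array, uniform velocity grid (km/s), increasing redward.
    transmission : 2D array of shape (N_sightlines, N_pixels).
    """
    dv = velocity[1] - velocity[0]
    n_los, n_pix = transmission.shape
    shifted = np.full((n_los, n_pix), np.nan)

    for i, t in enumerate(transmission):
        saturated = np.flatnonzero(t <= threshold)
        if saturated.size == 0:
            continue                        # fully transmissive: nothing to align
        i0 = saturated[-1]                  # last saturated pixel along the wing
        shifted[i, : n_pix - i0] = t[i0:]   # that pixel now sits at v = 0

    v_out = np.arange(n_pix) * dv
    median = np.nanmedian(shifted, axis=0)
    lo, hi = np.nanpercentile(shifted, [16, 84], axis=0)
    return v_out, median, lo, hi
```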
## 3 Characteristic Shape of IGM Damping Wings ### Shape at fixed redshift Using the mock IGM transmission spectra, we first demonstrate that we recover the characteristic damping wing shape as a function of volume-weighted average neutral fraction as presented in Chen (2023) (top panel of Figure 1). We indeed find that, after the mock damping wings have been shifted along the velocity axis so that \(T=0\) at the same point, the shapes of the damping wings generated at each volume-weighted average neutral fraction strongly resemble each other. We also plot the IGM damping wings calculated using the analytic model of Miralda-Escude (1998), which assumes that all of the gas is at the mean density at that redshift and that there is a single average value of \(x_{\rm HI}\) everywhere. As in Chen (2023), we find that these "average" damping wings are very similar to the patchy reionization model, after realignment. We find that there is more overlap in the 68 per cent scatter of our damping wing profiles at different \(\langle x_{\rm HI}\rangle_{\rm v}\) than in Chen (2023). This may be because we probe a slightly narrower range of models with \(\Delta\langle x_{\rm HI}\rangle_{\rm v}\sim 0.2\), rather than \(\Delta\langle x_{\rm HI}\rangle_{\rm v}\sim 0.25\) in that work. We further find that there is significant overlap between the IGM transmission from the different \(\langle x_{\rm HI}\rangle_{\rm v}\) outputs outside of the 68 per cent scatter (bottom panel of Figure 1). To quantify the source of this scatter, we consider the properties of the gas surrounding the host galaxies and how that may influence the shape of the damping wing. In particular, we investigate how the average \({\rm H\,i}\) number density along the line of sight correlates with the IGM transmission. However, it is important to apply the correct weighting when doing the averaging. The Voigt profile is proportional to \(v^{-2}\) in the Lorentzian wings of the profile, where \(v\) is the velocity. We therefore determine the average \({\rm H\,i}\) number density by weighting this quantity by \((\Delta v)^{-2}\). Here we have defined \(\Delta v=|v_{\rm pixel}-v_{\rm T}|\), where \(v_{\rm pixel}\) is position of a given pixel on the velocity axis of our sightline and \(v_{\rm T}\) is the velocity offset behind the realignment point at which we measure the IGM transmission. The results are shown in the left panels of Figure 2, assuming \(v_{\rm T}=2000\) km s\({}^{-1}\). Our results are not sensitive to this choice of velocity offset. We compare the results for our simulated sightlines to the Miralda-Escude (1998) damping wing model for different values of the \({\rm H\,i}\) number density. This model typically assumes a \({\rm H\,i}\) number density \(n_{\rm HI}=x_{\rm HI}\ \langle n_{\rm H}\rangle\), where \(n_{\rm HI}\) is the \({\rm H\,i}\) number density, \(x_{\rm HI}\) is the average IGM neutral fraction and \(\langle n_{\rm H}\rangle\) is the background hydrogen density. However, here we wish to explore how variations in the local \(n_{\rm HI}\) impact the IGM transmission, so we explore a range of values. We find that the transmission is correlated with the average \({\rm H\,i}\) number density along the sightline, with higher (lower) average \({\rm H\,i}\) densities producing stronger (weaker) damping wings. The Miralda-Escude (1998) model reproduces the relation between IGM transmission and average \({\rm H\,i}\) number density very well. 
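The weighting used above is simple to state in code. The following NumPy sketch is our own illustration, assuming arrays of pixel velocities and H i number densities for a single sightline; it computes the average \(n_{\rm HI}\) weighted by \((\Delta v)^{-2}\), with \(\Delta v\) measured from the velocity offset \(v_{\rm T}\) at which the transmission is evaluated.

```python
import numpy as np

def wing_weighted_nHI(v_pixel, n_HI, v_T=2000.0):
    """Average H i number density weighted by its contribution to the
    Lorentzian damping wing, i.e. by (delta v)^-2 (Section 3.1).

    v_pixel : pixel velocities along the sightline (km/s)
    n_HI    : H i number densities at those pixels (any consistent units)
    v_T     : velocity offset at which the transmission is measured (km/s)
    """
    dv = np.abs(v_pixel - v_T)
    w = 1.0 / dv**2
    return np.sum(w * n_HI) / np.sum(w)

# Toy example: gas pixels lie at and beyond the realignment point (v <= 0),
# while the transmission is measured at v_T = 2000 km/s redward of it.
v = np.linspace(0.0, -20000.0, 2000)
nHI = np.where(v < -5000.0, 1e-4, 1e-9)     # a neutral island far behind
print(wing_weighted_nHI(v, nHI, v_T=2000.0))
```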
We find that the average \({\rm H\,i}\) number density along our lines of sight has a substantial scatter, but is clustered around the volume Figure 1: _Top:_ Mock IGM damping wings realigned along the velocity axis such that the different transmission curves all reach a transmission \(T=0\) at the same point. The different colours represent different volume-weighted average neutral fractions. The solid lines are the median values of the transmission curves generated from the Sherwood-Relics patchy reionization simulation and the shaded regions show the 68 per cent scatter around the median. The dashed lines are the realigned transmission curves calculated using the Miralda-Escude (1998) damping wing model, assuming the same volume-weighted average neutral fraction. _Bottom:_ Probability density function of the IGM transmission \(T\) measured at a velocity of 2000 km s\({}^{-1}\) (after realignment). The shaded region shows the 68 per cent scatter as plotted in the top panel. The vertical dashed lines mark the transmission calculated from the Miralda-Escude (1998) model at the same velocity offset. weighted average H i number density for each choice of \(\langle x_{\rm HI}\rangle_{\rm v}\). This explains the similarity between the transmission curves for the patchy simulation and Miralda-Escude (1998) model in Figure 1. The scatter is due to the location of galaxies within ionized bubbles, the clustering of these bubbles and the density of the environment in which the galaxy lives. An example of this is shown for two realigned sightlines in the right panels of Figure 2. In both cases, there is an island of neutral hydrogen at the point where \(T=0\). For the stronger damping wing, this marks the beginning of a long neutral island. However, for the weaker damping wing, this island marks only a small wall between two ionized bubbles, resulting in a lower average H i number density. We also find that there is overlap in average H i number densities measured along different lines of sight between the models with different volume-averaged IGM neutral fractions (bottom left of Figure 2). This explains the overlap in the scatter of the characteristic damping wing shapes at different points in the reionization history. ### Shape at fixed volume-weighted average neutral fraction We next investigate how the characteristic damping wing shape evolves with redshift, holding the volume-weighted average neutral fraction fixed at \(\langle x_{\rm HI}\rangle_{\rm v}=0.47\). We rescale the sightlines from this simulation output to redshifts \(z=6,8,10\) and \(12\). The resulting realigned transmission curves are shown in the top panel of Figure 3. We find that the damping wings become stronger towards higher redshift. Part of this is because the H i number density along a line of sight will increase as \(n_{\rm HI}\propto(1+z)^{3}\). As shown in the bottom panels of Figure 3, we do find that the densities are higher as we move to higher redshift, both in the mean values of the cosmological background density at that redshift and in the line of sight averages. However, this does not completely explain all of the evolution. We find that even at fixed proper H i number density, we see stronger damping wings at higher redshifts. This is because of additional effects due to the expansion of the Universe when calculating the optical depth, such as the conversion from proper length to redshift. 
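The redshift scaling discussed above can be made explicit with a small numerical stand-in for the analytic model: the sketch below integrates only the Lorentzian wing of the Ly\(\alpha\) cross-section through the neutral gas along a sightline, using approximate atomic constants and the cosmology adopted in this work. It is an order-of-magnitude illustration rather than the full Voigt-profile calculation used for the mock spectra.

```python
import numpy as np

# Approximate physical constants (cgs)
SIGMA_0   = 0.02654          # pi e^2 / (m_e c)  [cm^2 Hz]
F_LYA     = 0.4164           # Ly-alpha oscillator strength
GAMMA_LYA = 6.265e8          # Ly-alpha damping constant [s^-1]
NU_LYA    = 2.466e15         # Ly-alpha frequency [Hz]
C_KMS     = 2.998e5          # speed of light [km/s]

def hubble(z, h=0.678, omega_m=0.308):
    """H(z) in s^-1 for a flat LCDM cosmology (parameters used in this work)."""
    H0 = h * 100.0 * 1.0e5 / 3.086e24
    return H0 * np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def wing_optical_depth(v_pixel, n_HI, v_T, z):
    """Damping-wing optical depth at velocity offset v_T (km/s), using only
    the Lorentzian wing of the Ly-alpha cross-section.  v_pixel and n_HI
    describe the neutral gas along the sightline (km/s, cm^-3)."""
    dv_kms = np.abs(v_pixel - v_T)                    # km/s
    dnu = NU_LYA * dv_kms / C_KMS                     # Hz
    sigma = SIGMA_0 * F_LYA * GAMMA_LYA / (4.0 * np.pi ** 2 * dnu ** 2)
    dl = np.gradient(v_pixel) * 1.0e5 / hubble(z)     # proper path length [cm]
    return np.sum(n_HI * sigma * np.abs(dl))

# Example: a uniform, fully neutral IGM at mean density behind the source.
z = 7.0
n_H_mean = 1.9e-7 * (1.0 + z) ** 3                    # cm^-3 (approximate)
v = np.linspace(0.0, -5.0e4, 5000)                    # gas at v <= 0 km/s
tau = wing_optical_depth(v, np.full_like(v, n_H_mean), v_T=2000.0, z=z)
print(np.exp(-tau))                                   # transmission at 2000 km/s
```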
We plot the expected transmission in the Miralda-Escude (1998) model for the four redshift bins we consider as the four dashed lines in each panel of Figure 3 (where we have plotted the lines for all redshifts in each panel for ease of comparison, but note that the points only correspond to one line in each panel). Again, the analytic model nicely recovers the trend found in the simulated sightlines. ## 4 Prospects for observations So far we have only presented idealised results, because we have assumed that there is no residual neutral gas inside the ionized bubbles (following Chen 2023). However, even a small amount of residual neutral hydrogen can saturate the absorption in the Ly\(\alpha\) forest, at neutral fractions as low as \(x_{\rm HI}\sim 10^{-5}\)(McQuinn 2016). We have also thus far neglected the peculiar velocities of infalling gas which can impact Ly\(\alpha\) transmission, but this will be a small effect across the scales of several thousand km s\({}^{-1}\) that we consider here. We now investigate how absorption by the residual neutral hydrogen impacts the observability of the characteristic shape of damping wings. We first calculate the Ly\(\alpha\) optical depth for the same set of sightlines as analysed in Figure 1, but now including the effects of all the neutral gas, both in the neutral islands and within the ionized bubbles, as well as the peculiar velocity of the gas. These transmission curves are representative of the IGM damping wings that may be seen in high-redshift galaxies or gamma-ray burst afterglows, where the sources live within ionized bubbles carved out by galaxies. We repeat the process of realigning the transmission curves. However, due to small amounts of transmitted flux in the partially-ionized regions, we now realign the curves such that they all reach \(T=10^{-6}\) at the same point. The results are shown in the left panel of Figure 4. We find that in contrast to what was shown in Chen (2023) and Figure 1, we no longer see a smooth decline in the IGM transmission with decreasing velocity, but rather a sharp vertical cutoff in the transmission. This arises from the residual neutral hydrogen in the ionized bubbles (e.g., Mason & Gronke 2020). Despite this, we still find that the median of the models with different volume-weighted Figure 2: _Top left:_ The coloured points show the IGM transmission at \(v=2000\) km s\({}^{-1}\) as a function of the average H i number density along the line of sight, weighted by its contribution to the damping wing. Each panel shows results at a different volume-weighted average IGM neutral fraction. The vertical dotted line shows the corresponding mean H i number density. The black dashed lines are the predictions from the Miralda-Escude (1998) analytic model for the IGM damping wing and are the same in all panels. _Bottom left:_ Probability density function of the average H i number density along a line of sight, weighted by \(\Delta v^{-2}\) as above. The shaded regions show the 68 per cent scatter. We note that the distribution for the model with \(\langle x_{\rm HI}\rangle_{\rm v}=0.29\) has a long tail of average H i number densities extending to below what is plotted here. These correspond to the most ionized sightlines. The vertical dotted line shows the mean H i number density at each volume-weighted average IGM neutral fraction. 
_Right:_ The top panel shows weak (purple) and strong (blue) IGM transmission curves that have been realigned such that their transmission reaches \(T=0\) at the same point, drawn from a simulation with \(\langle x_{\rm HI}\rangle_{\rm v}=0.47\) (the sightlines are indicated by the stars in that panel on the left). The light lines in the middle and bottom panels show the H i number density along the line of sight, with the colours corresponding to the spectra in the top panel. The darker lines show the H i number density, weighted by its contribution to the average value as plotted in the panels on the left. average IGM neutral fractions can be distinguished at large velocity offsets from the realignment point. This may therefore be a useful way to probe reionization in a large sample of galaxies that JWST will discover, stacked in different redshift bins. However, noise in the observations will make it challenging to find the point where the transmission first reaches \(T\approx 0\). It will also be necessary to accurately model the intrinsic spectrum of the galaxies to recover the normalised IGM transmission. Finally, absorption within the galaxy itself will further confuse the signal (Heintz et al., 2023). It is also useful to consider how this technique may apply to damping wings in high-redshift quasars. Currently, only a handful of quasars are known above redshift 7 (Fan et al., 2022). However, with the advent of Euclid, many fainter quasars are predicted to be discovered (Schindler et al., 2023) which may be good candidates for stacking at different redshifts and searching for these characteristic damping wing shapes. To account for the extra ionization by the quasar of its surroundings, we post-process the sightlines with the 1D radiative transfer code presented in Bolton & Haehnelt (2007). We assume a magnitude \(M_{1450}=-27\), similar to the luminosity of the brightest known quasars. We assume the spectrum is a broken power law, with \(f_{\nu}\propto\nu^{-0.5}\) for \(1050\ \mathrm{\AA}<\lambda<1450\ \mathrm{\AA}\) and \(f_{\nu}\propto\nu^{-1.5}\) for \(\lambda<1050\ \mathrm{\AA}\). This results in an ionizing photon rate \(\dot{N}_{\gamma}=1.9\times 10^{57}\) s\({}^{-1}\). Euclid is expected to discover many fainter quasars, so we also investigate a quasar with \(M_{1450}=-25\) (\(\dot{N}_{\gamma}=3.0\times 10^{56}\) s\({}^{-1}\)). We show here results for two quasar lifetimes, \(t_{\mathrm{Q}}=1\) Myr and 10 Myr. However, in reality, the quasars will sample a range of magnitudes and lifetimes (e.g., Eilers et al., 2017). Stacking the quasar transmission spectra is more complicated than for the previous case, as absorption within the quasar proximity zone can reach \(T\approx 0\), even though there may be more transmission towards lower redshifts. To account for this, we smooth the transmission spectra with a boxcar filter with width 20 A in the observed frame, as is done for measuring quasar proximity zone sizes (Fan et al., 2006). We then realign the transmission curves based on where this smoothed spectrum reaches \(T=10^{-6}\) for the first time, and stack the smoothed curves to calculate the median and scatter of the transmission. The results are shown in Figure 4. We find that for a magnitude \(M_{1450}=-27\) and a lifetime of 1 Myr, we do not see a sharp cutoff in the transmission as was seen for the galaxy transmission curves. This is because the ionizing photons from the quasar have ionized much of the residual neutral hydrogen in the bubbles. 
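One way to implement the smoothing and realignment step for the quasar sightlines might look like the sketch below. It assumes a uniformly gridded observed-frame wavelength array and uses a simple non-relativistic conversion to velocity offset blueward of Ly\(\alpha\) at the quasar redshift; the 20 Å observed-frame boxcar width and the \(T=10^{-6}\) threshold follow the text, while the edge handling and variable names are assumptions made for illustration.

```python
import numpy as np

C_KMS = 2.998e5      # speed of light in km/s
LYA_REST = 1215.67   # Ly-alpha rest wavelength in Angstrom

def smooth_and_realign(wavelength_obs, transmission, z_qso,
                       width=20.0, threshold=1e-6):
    """Boxcar-smooth a quasar transmission spectrum with a filter of fixed
    observed-frame width (Angstrom), then realign it so that the smoothed
    transmission first drops below `threshold` at v = 0."""
    dlam = np.median(np.diff(wavelength_obs))
    n_pix = max(int(round(width / abs(dlam))), 1)
    smoothed = np.convolve(transmission, np.ones(n_pix) / n_pix, mode="same")

    # Velocity offset blueward of Ly-alpha at the quasar redshift
    # (positive values lie blueward of the quasar; a simplification).
    lam_qso = LYA_REST * (1.0 + z_qso)
    velocity = C_KMS * (lam_qso - wavelength_obs) / lam_qso

    # First pixel, moving blueward from the quasar, where the smoothed
    # transmission is effectively zero.
    blueward = (velocity > 0.0) & (smoothed < threshold)
    v_zero = velocity[blueward].min() if blueward.any() else 0.0
    return velocity - v_zero, smoothed
```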
At large velocity offset (\(v\gtrsim 3000\ \mathrm{km\ s^{-1}}\)), we find that it is possible to differentiate between the median of the stacked transmission curves at different values of \(\langle x_{\mathrm{HI}}\rangle_{\mathrm{V}}\). At lower velocity offset, it is more difficult as there is a larger scatter in transmission at low \(\langle x_{\mathrm{HI}}\rangle_{\mathrm{V}}\). This is due to the scatter in proximity zone sizes becoming larger as the neutral islands which impede the quasar ionization front become rarer. For a longer quasar lifetime, even though the quasar has had more time to ionize its surroundings, there again remains a trend with the volume-weighted average IGM neutral fraction at large velocity offsets and high \(\langle x_{\mathrm{HI}}\rangle_{\mathrm{V}}\). We find that the damping wings of fainter quasars at longer lifetimes look more like those of young bright ones, and may therefore be a useful population on which to apply this technique. It may also be possible to improve the method by stacking quasars with similar proximity zone sizes, or choosing a different value of the IGM transmission at which to align the spectra. However, as with the galaxy damping wings, the noise of the observations and intrinsic spectrum of the quasar will make the analysis more complex. ## 5 Conclusions We have presented a closer look at the characteristic shape of intergalactic damping wings during reionization that was first identified in Chen (2023). Using a simulation of inhomogeneous reionization from the Sherwood-Relics simulation suite, we found that we can recover the same trend presented in that work. Once the transmission curves are realigned along the velocity axis so that they reach \(T=0\) at the same point, the median transmission curve is a strong function of volume-weighted average IGM neutral fraction at fixed redshift, but there is overlap between the scatter of realigned transmission curves generated at different points in the reionization history. We demonstrated that this characteristic shape arises because the damping wings are set by the average H i number density along the line of sight, weighted by its contribution to the optical depth. Scatter in this quantity is driven by fluctuations in the density and ionization field, and higher (lower) average H i number densities result in stronger (weaker) IGM damping wings. We also found that the characteristic shape of these damping wings evolves with redshift, due to the expansion of the Universe. These effects are all well captured by the Miralda-Escude (1998) IGM damping wing model. Figure 3: _Top:_ Mock IGM damping wings realigned along the velocity axis such that the different transmission curves all reach a transmission \(T=0\) at the same point. The different colours represent different redshifts. The solid lines are the median values of the transmission curves and the shaded regions show the 68 per cent scatter around the median. _Bottom:_ The points show the IGM transmission at 2000 km s\({}^{-1}\) as a function of the average H i number density along the line of sight, weighted by its contribution to the damping wing. Each panel shows results at a different redshift. The vertical dotted line shows the mean H i number density at each redshift. The dashed lines are the predictions from the Miralda-Escude (1998) analytic model for the IGM damping wing, which evolves with redshift. 
We finished by showing that although more realistic models of damping wings in galaxies and quasars distort the characteristic damping wing shape, it is still possible to tell the difference between transmission curves drawn from simulations with different \(\langle x_{\rm HI}\rangle_{\rm v}\) at large velocity offset from the realignment point. This suggests that realigning IGM transmission curves may be a promising avenue for constraining reionization with large samples of faint objects delivered by JWST and Euclid. ## Acknowledgements The simulations used in this work were performed using the Joliot Curie supercomputer at the Tres Grand Centre de Calcul (TGCC) and the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). We acknowledge the Partnership for Advanced Computing in Europe (PRACE) for awarding us time on Joliot Curie in the 16th call. The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. This work also used the DiRAC@ Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility. The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1 and ST/R002371/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. JSB is supported by STFC consolidated grant ST/T000171/1. MGH is supported by STFC consolidated grant ST/S000623/1. GK is partly supported by the Department of Atomic Energy (Government of India) research project with Project Identification Number RTI 4002, and by the Max Planck Society through a Max Planck Partner Group. Support by ERC Advanced Grant 320596 "The Emergence of Structure During the Epoch of Reionization' is gratefully acknowledged. We thank Volker Springel for making p-gadget-3 available. We also thank Dominique Aubert for sharing the aton code. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. ## Data Availability All data and analysis code used in this work are available from the first author on reasonable request.
2310.04628
(Re)framing Built Heritage through the Machinic Gaze
Built heritage has been both subject and product of a gaze that has been sustained through moments of colonial fixation on ruins and monuments, technocratic examination and representation, and fetishisation by a global tourist industry. We argue that the recent proliferation of machine learning and vision technologies creates new scopic regimes for heritage: storing and retrieving existing images from vast digital archives, and further imparting their own distortions upon its visual representation. We introduce the term 'machinic gaze' to conceptualise the reconfiguration of heritage representation via AI models. To explore how this gaze reframes heritage, we deploy an image-text-image pipeline that reads, interprets, and resynthesizes images of several UNESCO World Heritage Sites. Employing two concepts from media studies -- heteroscopia and anamorphosis -- we describe the reoriented perspective that machine vision systems introduce. We propose that the machinic gaze highlights the artifice of the human gaze and its underlying assumptions and practices that combine to form established notions of heritage.
Vanicka Arora, Liam Magee, Luke Munn
2023-10-06T23:48:01Z
http://arxiv.org/abs/2310.04628v1
# (Re)framing Built Heritage through the Machine Gaze ###### Abstract Built heritage has been both subject and product of a gaze that has been sustained through moments of colonial fixation on ruins and monuments, technocratic examination and representation, and fetishisation by a global tourist industry. We argue that the recent proliferation of machine learning and vision technologies create new scopic regimes for heritage: storing and retrieving existing images from vast digital archives, and further imparting their own distortions upon its visual representation. We introduce the term '_machinic gaze_' to conceptualise the reconfiguration of heritage representation via AI models. To explore how this gaze reframes heritage, we deploy an image-text-image pipeline that reads, interprets, and resynthesizes images of several UNESCO World Heritage Sites. Employing two concepts from media studies - _heteroscopia_ and _anamorphosis_ - we describe the reoriented perspective that machine vision systems introduce. We propose that the machine gaze highlights the artifice of the human gaze and its underlying assumptions and practices that combine to form established notions of heritage. _Keywords--_ Heritage representation, visual analysis, visual cultures, media studies, generative AI, psycho-analysis ## Introduction Built heritage has a long- and well-established relationship with visual representation and production. Visual regimes of heritage have evolved and diversified, from carefully curated artistic depictions of the romantic archaeological ruin to the proliferation of digital photography on social media produced and consumed in turn by a global tourist industry. Alongside the far-reaching impact of social media platforms like Instagram that rely primarily on the production and circulation of images, sophisticated photo-editing technologies that are no longer confined to the domain of professionals, have amplified and complicated styles of aesthetic appreciation of the heritage site. Nevertheless, across various forms of visual media, the construction of a mythic representation of the heritage site persists (Dicks, 2000; Watson, 2010; Sterling, 2016). Much of the discussion on the construction of heritage identity and meaning through visual representation or the 'heritage gaze' has focused on two closely entwined ways of seeing: the tourist gaze (Urry, 2002 [1990]; Urry and Larsen, 2011; MacCannell, 1999; Watson, 2010, Waterton, 2009) and the expert gaze (Smith, 2006; Winter, 2006; Moshenska, 2013). In the context of archaeological monuments and sites, both ways have both been further tied to forms of mechanical comprehension and capture since the inception of photography. Maxime Du Camp's French government-commissioned images of Egyptian monuments exemplify the early enthusiasm for architectural and landscape photography in the years following the invention of the daguerreotype, surpassed only by the desire to reproduce the human form (Kittler, 2010; Urry and Larsen, 2011). Photography has long assisted in what Sterling (2019: 2) has termed the mythic representation of heritage as ideology', drawing attention to iconic or emblematic aspects of sites that reinforce narratives of power. Analysis of the performative function of photography has often been connected to the concept of the 'gaze': the work conducted by the camera-and-operator to compose and constitute the heritage subject. 
For scholars such as Urry and Larsen, the act of seeing is always informed by 'ideas, skills, desires and expectations' (2011: 1), and in the era of mass tourism - since the nineteenth century, and so coinciding with the era of mass photography - the seeing of sites has been especially demarcated by the 'tourist gaze.' The emergence of computer vision and, recently, of machine learning systems trained on image corpora reproduce both these modes alongside other styles and subjects, retaining as they do so existing social biases (Offert and Phan, 2022). However, this reproduction is not pure. In their reconstitution of heritage, image-generating systems such as Midjourney observe conventions with palettes and perspectives, but also at times inject the uncanny differences of an alien observer or subject (Parisi, 2019). As these systems scale and mature, it no longer seems sufficient to ascribe these differences to the category of 'error' - an intermediate stage in a system's evolution, to be 'fixed' with the next software update. Differences between their outputs and human expectation seem to belong instead to a novel _sui generis_ mode of visual perception and production, which we describe here as the'machinic gaze.' By directing computer vision algorithms to interpret and resynthesise a controlled set of images of heritage sites, we attempt to register aspects of this gaze. In doing so, we ask the question: What does the machine see when it looks at heritage? A feature of this 'gaze' is its intensification. The proliferation of social media, and Instagram in particular, has stretched and magnified how it has become invested and circulated within the circuits of travel capitalism (Barauah, 2017; Ogden, 2021; Oh, 2022). This expansive digitisation of vision has led in turn to new possibilities in how machines consume and produce images. Social media image agglomerations have been systematised and organised into vast archives like LAION (Schuhmann et al., 2022), a data store of five billion image-caption pairs that in turn has been instrumental in the training of Vision-Language Models (Zhang et al., 2023). With respect to these systems, two distinct kinds can be distinguished: _image-to-text_ auto-captioning systems such as BLIP-2 (Li et al., 2023), and _text-to-image_ generative systems such as Stable Diffusion, Midjourney and Dall-E (Mostaque, 2022; Midjourney, 2022; Ramesh et al., 2022). It is the second type of system that we focus upon especially here, though as discussed in the methods section, we examine examples of the first as well. With generative AI text-to-image systems, the input of a text 'prompt', an instruction made up of typically English words that specify a subject, style and format, generates synthetic images that, despite having no direct referent in their training sets, can integrate parts of that prompt in often evocative and striking ways. This synthesis is possible via an intensive process that uses a previously trained neural network to progressively modify what is at first a random image into one shaped by the set of tokens included in the prompt. Both prior network training and this process of image synthesis work to produce recognisable pictorial representations through purely computational processes. This recognisability is at the same time thoroughly disjoined from the'real' that Crary (1992), for example, identifies as embedded in modes of human vision and perception. 
Accordingly, this way of seeing simultaneously constitutes a culturally and commercially generative system of visual apprehension, and a 'gaze' that is askew and uncanny (Parisi, 2019). We focus here on how this apprehension works to reproduce visual representations of heritage sites that have been subject to the explicit focus of the heritage gaze. Specifically, we selected a small group of images from UNESCO's World Heritage database of sites. We developed a small Python application that first uses a mix of technical methods to read and decode these images into textual prompts, and then submits those prompts to AI image-text systems to generate candidate reimagining of the original images. Our purpose here was not to evaluate the fidelity of either auto-captioning or image generation systems. Instead, it was to produce a small archive of internally diverse representations that demonstrate semantic patterns and structures that emerge in contemporary mechnised ways of seeing. These enabled in turn a commentary on the relationships between the generated image and its prompt, the represented object of heritage and - more speculatively - its human interpreter. We undertook this exercise with two objectives. The first is to consider the ways in which the image model captures, 'understands,' and recreates the heritage site and the specific gaze that is directed towards these sites. The second is to extend the exploration of the politics of visual representation of global sites of heritage through the medium of the synthetically produced image. Properties of this synthesis, we argue, can condense, and refract highly disparate human representations of heritage, marking out more clearly its own preoccupations and ideological attachments. ## Conceptualising the Machine Gaze The gaze, often with attached qualifiers ('male,' 'colonial,' 'tourist'), has an extensive history in heritage and adjacent fields of cultural studies. A common thread to distinct conceptualisations, from Mulvey's (1975) seminal essay on the male gaze to Urry and Larsen's (2011) discussion of the tourist gaze, is that _seeing_ is never only a perceptual act, but is always informed by background assumptions, desires, prejudices and power relations that inform interpretation of what is seen. As Urry and Larsen further analyse, such background considerations apply no less when the gaze is directed toward nonhuman artefacts: landscapes, buildings, exhibits, and other cultural artefacts packaged for consumption by the tourist. This interpretation is re-doubled by the profound effects of photography and social media, which reify and circulate images of such objects without in any way diluting consumer desire to attend to the original. However, no gaze ever consumes its object fully. Waterton and Watson (2015: 26) for example describe another mode of apprehending visual heritage, one which is configured and curated by experts 'around a nexus of value endowed by pastness, scarcity and aesthetics.' Even when it rubs up against the tourist gaze, in the form of the expert guide for instance, it remains differentiated in its attention to masterly knowledge and 'official' narratives of heritage making' rather than the mundanely picturesque as its primary register (Sterling, 2016). And even critics of the tourist industry acknowledge how the non-expert viewer or tourist is not merely a passive spectator but participates in the construction of meaning and performance of heritage places (Urry, 2004; Smith 2006). 
The tourist gaze is polyvalent and complex (Urry and Larsen, 2011) and as Sterling has argued (2016: 248) in the context of heritage, itself only one element in a network that also must encompass globalisation, capitalism, technical practices of photography and the individualised affective engagement with place. Common to many discussions of these two systems - the tourist and the expert gaze - is attention towards their shared outwardly-directed nature. In both cases, the gaze is conferred _by_ an individual or collective human subject _towards_ an artefact or site, an act which serves to constitute or enmesh the object within various discursive and interpretative frames. MacCannell's (2001) has critiqued Urry's unidirectional analysis of the gaze - borrowed from Foucault's analysis of the medical field's interested observation of the patient's body - with an alternative account drawn from Lacanian analysis. Though Lacan's (1977) famous Seminar XI treatment oscillates between two meanings of the French _le regard_ - translated into English as the _look_ and the _gaze_ - it is the second of these meanings, both counterintuitive and influential, that concerns MacCannell. According to Lacan the gaze is what takes place when the human observing subject becomes aware that they _themselves_ have entered the frame of observation. No longer objective, the viewer becomes a subject precisely at this moment of uncanny recognition - when they catch themselves in a mirror, or less directly, become aware of their situatedness as an observer. MacCannell's application of this reflexive sense of the gaze helps, in his argument, to recuperate the agency of the heritage spectator. Viewing heritage does not simply involve a consuming tourist or calculative state, but - at least at particular moments - involves a transformation of the spectator into a subject aware of their own historicity. The tourist experiences for example the strange sense of becoming an object for some other, future viewer or visitor - and as this object, also becomes a proper subject. Resisting efforts to subsume all touristic appreciation to that of cliche, Sterling (2016) has similarly argued that the seeing tourist is also an embodied figure, one who apprehends their own materiality in heritage encounters, and to varying degrees is also managed through deliberately arranged scaffolds and signs by heritage site managers. The body in Sterling's account in a certain sense anchors the otherwise clicked gaze within the singularity of the individual subject. Despite for example Urry and Larsen's interests in photography and social media, these varied articulations share the sense that the holder of the gaze is human. The development of visual language models that do not capture or store images constructed by a human operator seems to expand how 'gaze' must be conceptualised. In describing how machines see with and on behalf of humans, Denicolai (2021) invokes various terms, including'machinic vision','mechanical perception' and 'robotical gaze' - expressions we collapse into the term'machinic gaze'. How this gaze perceives objects and sites of heritage reflects, in part, the regard of human engineers and photographers who design and feed data into algorithms. We argue however that this synthetic reflection is not a simple transparent mirroring of a collective and diffused human apprehension of heritage. More is involved, in other words, than a mechanical reproduction of the touristic, expert, or other forms of human seeing. 
The technical perception and production of images involves instead a different and inhuman gaze that is directed towards heritage, creating associations of form, aesthetics, and value, and constructing a reflected heritage myth. In the context of machine learning, Mackenzie and Munster (2019) have described the general synthetic production of images via algorithms as a kind of 'invisuality', a neologism that conveys their inward-looking quality. Rather than responding to a set of natural or cultural conditions - in the sense that Urry describes the 'tourist gaze' for instance - image synthesis relies exclusively upon the intricate mappings of language terms and clusters of pixels established through the system's training. Image synthesis constitutes an inhuman kind of 'platform seeing' (Mackenzie and Munster, 2019), with its outputs constitutive of a concentrated yet heterogenous form of'mechanical reproduction' (Benjamin, 1986). This machinic gaze can be further distinguished by the precise operations performed by the technical apparatus: observing via a camera, analysing an image, compositing an image from trained patterns, or some combination of these actions. In each case the machine gaze carries forward both the direct aesthetic meaning of an apprehension of an object, and the indirect or reflexive meaning. This latter sense requires an adjustment, a three-way relationship between the object, the machine, and the human subject who then looks upon the machine's synthetic representation of the object, observing in it - in its similarities and differences to how we ourselves might interpret the object - a form of refracted mirroring of the human subject itself. In looking at heritage artefacts, this in fact performs a double mirroring, as the object of heritage already contains within itself a form of reflection of the observer. Following MacCannell's discussion of Urry's notion of the gaze, we expand upon this reflexive property in terms of a reversal of lens back upon the spectating and contributing human subject. ## Methods In this section we map this conceptual understanding of the machinic gaze to a small set of images that were analysed and used to generate, through algorithms, other, purely synthetic images. These activities were conducted in sequence: we selected digital photographs, then analysed these images with three procedures (BLIP-2, Google Vision API, image EXIF metadata) to assemble a brief textual description for each image. These assembled descriptions were then submitted in turn to three image generation systems in the form of 'prompts' (Stable Diffusion; Realistic Vision, and MidJourney) to produce a series of image samples - 240 in total. Finally, we interpreted these images in terms of subject, style, composition, and deviations from the source images. We briefly discuss each of these steps below. ## i Image selection We used the archive of photographs from the official website of UNESCO's World Heritage Sites as base images. Most of these photographs have been taken by experts appointed directly by UNESCO's World Heritage Programme Office or by individual State Parties and are intended to serve simultaneously as official visual documentation of the site and ostensibly communicate a sense of its 'outstanding universal value.' 
We filtered sites on the UNESCO World Heritage in Danger list and limited our selection to sites that were cultural and those which met criterion (iv), 'to be an outstanding example of a type of building, architectural or technological ensemble or landscape which illustrates (a) significant stage(s) in human history' (UNESCO, 2008). These selection criteria ensured a consistency in the vocabulary of the ways the sites were documented and presented. Our final selection of the five site images (see Appendix) was motivated by several factors. First, we wanted to work with a limited set of images that had a common underlying subject, but which are marked by visibly contrastive features. Second, we wanted buildings that are distinctive in form but likely underrepresented in the LAION training set - 'iconic' heritage sites such as the Eiffel Tower would likely be oversampled and therefore more readily'reconstructed' by text-to-image models. Third, we selected images that highlighted both contrastive and canonical or representative features of the sites. Having selected sites based on a set of common properties, we then selected photographs of buildings or urban settings that instead contrasted with respect to building typology, geographic region, historical style, and the description of function of the site. We generally opted for photos that were medium range (which were in the majority), as close-ups and distant shots obscured specific cultural and geographic features. In one case (Sana'a), we chose instead a wide-angle cityscape. In all cases, we selected photographs of intact buildings rather than ruins, again to maximise opportunity for the algorithms to identify distinguishing features. We also only selected Creative Commons-licensed photos. ## ii Expert and machine readings of heritage sites For each of these five sites, we employed two techniques to 'engineer' prompts. The first reflects the 'expert' view of the site, taking the UNESCO-supplied description as the verbatim prompt. The second used a series of technical approaches to extract information from the selected site photos, and then constructed a synthetic and entirely automated prompt. In producing the two types of prompts we compare how expert and machine readings or 'gazes' of these sites produce distinct (or similar) aesthetic visual results. Technique one is rooted in geographical aspects of place and in 'qualitative' or value-based descriptions of place. The UNESCO WHS descriptions are designed to evoke a sense of place in a way that mirrors the accompanying photographs. The photographs we selected have been taken by UNESCO-appointed experts and for the most part a similar group of experts write the descriptions of the sites. Both privilege certain kinds of expert 'viewing' or a gaze of the sites themselves, attentive to what is most salient, distinguishing or 'outstanding,' and what therefore has to be compared against a repertoire of other built forms and sites. We used the UNESCO WHS _Description_ tab as the text from which to select and edit the prompt. Our decision to focus on criteria (iv) sites, allowed us to closely examine UNESCO's own emphasis on visuality and aesthetics as constitutive of heritage value.1 Footnote 1: This technique has limitations: sites which have been nominated before the UNESCO Operational Guidelines have been ratified have had much shorter and less ‘rigorous’ nomination dossiers which meant shorter and less detailed ‘description’ sections. 
Some of these descriptions have not been updated since the site was first nominated. Some photographic galleries were much smaller and contained older images that may not accurately reflect the state of the site today. To consider how algorithms'see' these images, we employed a mix of algorithmic and deep learning techniques, ranging from extracting captions via BLIP-2, to probabilistic analysis of images via Google Vision's API and extraction of image EXIF metadata. Each of these modes of machine reading, produced a textual description. BLIP-2 is a vision-language based model which uses pre-trained image encoders and large language models to extract textual descriptions of images through an encoder-decoder transformer system (Li, Asaverse, Hoi 2023). Here we use BLIP-2 to generate short textual descriptions and other text-based queries on our base image set. To enrich the prompt, we combined this text with labels extracted from Google Vision's Application Programming Interface (API). These labels included computed image properties such as dominant colours, objects, locations, architectural features, geometric shapes, natural features, colour schema, as well as individual objects. This machine reading was able to arrive at probabilistic responses that identified certain building categories and adjectives, like'momument' and 'heritage.' Equally, it was inexact and imperfect; often identifying features of the place, like geographical locations, precise architectural styles, and historic periods, were omitted or misidentified. We also extracted metadata from the digital image itself, including the type of camera, focal length, exposure time, and use of flash. These parameters helped in building the final prompt when we attempted to resynthesise the image content. ### iii. Machine synthesis and its interpretation The prompts or instructions produced through both methods were then submitted to three image generating systems. We selected Stable Diffusion and Midjourney since in 2023 these were the most widely discussed text-to-image systems, eclipsing the earlier attention paid to OpenAI's Dall-E model. Stable Diffusion is a text-to-image model that has been made open source by its developer, StabilityAI, and can be downloaded and operated on consumer devices. Midjourney is a service-based system that requires a subscription to operate, via the Discord social media platform. Both perform similar functions, converting a natural language prompt into one or more images that aim to'represent' that prompt in a meaningful way. We used the latest versions of these two systems at time of writing: version XL in the case of Stable Diffusion and version 5.2 in the case of Midjourney. As Stable Diffusion models are released as open source, they can be adapted, or 'fine-tuned,' on much smaller data sets of images to produce particular styles or aesthetics. For further contrast we used an older version of Stable Diffusion (version 1.5) that had been fine-tuned to generate photorealistic images, in a model named 'Realistic Vision.' For each of the five sites, we applied the prompts generated through the approach described above to each of the three systems. We specified each system to generate four images for each prompt, producing a data set of 120 images (5 sites x 3 systems x 4 images). 
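As a concrete picture of what this pipeline looks like in code, the fragment below is a condensed, hypothetical reconstruction in Python: it captions a source photograph with BLIP-2, reads a few EXIF fields, assembles a prompt, and requests four images from Stable Diffusion XL via the open-source diffusers library. The model checkpoints, the prompt template, and the omission of the Google Vision labels and of the Midjourney calls (which run through Discord rather than a public API) are simplifications for illustration; this is not the authors' published script.

```python
import torch
from PIL import Image
from PIL.ExifTags import TAGS
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from diffusers import StableDiffusionXLPipeline

def caption_image(img):
    """Short BLIP-2 caption for a PIL image."""
    processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
    model = Blip2ForConditionalGeneration.from_pretrained(
        "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16).to("cuda")
    inputs = processor(images=img, return_tensors="pt").to("cuda", torch.float16)
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(out, skip_special_tokens=True)[0].strip()

def exif_summary(img):
    """A few human-readable EXIF fields (camera model, date, exposure, focal length)."""
    exif = img.getexif()
    merged = dict(exif)
    merged.update(exif.get_ifd(0x8769))  # Exif sub-IFD: exposure, focal length
    named = {TAGS.get(tag, tag): value for tag, value in merged.items()}
    wanted = ("Model", "DateTime", "ExposureTime", "FocalLength")
    return ", ".join(f"{k}: {named[k]}" for k in wanted if k in named)

def build_prompt(img):
    # Google Vision labels and dominant colours would be appended here
    # in the same comma-separated style; omitted for brevity.
    return f"photograph of {caption_image(img)}. {exif_summary(img)}"

def generate_images(prompt, n=4):
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16).to("cuda")
    return pipe(prompt, num_images_per_prompt=n).images

if __name__ == "__main__":
    source = Image.open("djenne.jpg")  # a UNESCO gallery photograph
    prompt = build_prompt(source)
    for i, img in enumerate(generate_images(prompt)):
        img.save(f"djenne_mm_{i}.png")
```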
For comparison, we also applied the UNESCO-supplied description for the selected site as a prompt to the same combination of site, system and image variations, doubling the size of our data set to 240.2 Together these stages - source image selection, image analysis, image generation, image interpretation - conducted across different technical embodiments enable a characterisation of the machine gaze of heritage sites. Footnote 2: To aid reproduction, and to support similar studies, the scripts and datasets have been published on GitHub. Finally, we interpreted these sets of images in terms of their composition, form, subject selection, framing, and colour palette. This interpretation, as we reflect upon in our findings and discussion, involves reflection upon the acts of seeing and reading of machine-generated images. It builds necessarily upon our own backgrounds in heritage and media studies, and consequently involves a specific form of what has been theorised as the expert gaze. Despite the obvious limits of such interpretation, we look to avoid a specific judgement upon these machinic productions in terms of their approximation to some notion of 'ground truth' or as a quantitative exploration of bias within the underlying datasets of these systems (Salvaggio, 2022). ## The Machine Imagines Heritage We discuss here three sets of images that contrast internally (across models and prompt) and externally (across sites). We use 'H-M' and 'M-M' to distinguish human-prompt-machine-generated from machine-prompt-machine-generated images. ### Old Towns of Djenne _Figure 1_ shows a mid-distance elevational aspect of the mosque of Djenne, which is one the key structures identified in the description of the World Heritage Site. The adobe mosque appears in multiple photographs of the site, as one of the distinctive architectural landmarks within the urban ensemble of Djenne. The photograph frames the mosque tightly, editing out the immediate context of the marketplace or townscape that surrounds the mosque. _Figure 2_ shows results of the 'H-M' process: four outputs (in rows) of three image models (in columns), in response to the prompt that was extracted from the description of Djenne on the UNESCO website, which included phrases like 'typical African city', 'intensive and remarkable use of earth','mosque of great monumental and religious value' (UNESCO, 2023). In the case of Stable Diffusion (both versions), while the colour scheme of the UNESCO image is retained, no version of the mosque is produced in any of the images. SDXL (left column) shows, at different resolutions, a grid-like configuration of mud brick structures that approximates the sub-Saharan vernacular, but without the specificities of Djenne's architectural proportions or ornamentation. The Realistic Vision outputs (centre column) produce an approximation of a generic sub-Saharan settlement, small adobe buildings with thatched roofs - neither characteristic of the mosque nor of the general town. Midjourney (right column) produces images that are quite distinct from the reference image, but that resemble other images of the townscape of Djenne, showing markets, houses, and people in transit. This set of images shows in particular the compositional nature of Midjourney's generated images: in each case some version of the mosque is recognisable, but in the background, shot in shadow and occasionally at oblique angles. 
People in the foreground feature in a quasi-cinematic way: in two cases, one or two people appear close to the presumed camera, as though on a journey, while more distant figures appear as accidental subjects. In all cases, people appear in some variant of an assumed local dress. For the 'M-M' reading of the source image, we obtained the following: _photograph of a large sand castle with people walking in front of it_, **Building center, Sky, Cloud, Travel, Landscape, Sand, Aeolian landform, Facade, History, Ancient history, Archaeological site, Historic site, Art, Arch, Soil, Horizon, Singing sand, Tourism, Castle, Desert, Tourist attraction. Colors: #bb9667, #b7c1c3, #b78e5c, #9b7344, #977649, #6f4b1f, #aa9373, #694e26, #634d2d, #8d795a Shot with a E3700, at a resolution of 300 pixels per inch, year 2005, exposure time of 5/1806, Flash did not fire, auto mode, focal length of 27/5 The first part, in italics, represents the BLIP2 caption; the second, in bold, a textual representation of properties extracted from the Google analysis; and the third, metadata properties of the source image. The misrecognition of the mosque as a sandcastle in the Figure 1: Old Towns of Djenne Source: UNESCO Figure 2: Human Prompt / Machine Generation; SDXL, Realistic Vision, Midjourney Figure 3: Machine Prompt / Machine Generation; SDXL, Realistic Vision, Midjourney machine reading can be attributed to an alignment of the language of castles with similar visual patterns in the training data. _Figure 3_ represents the outputs, following the same pattern as _Figure 2_. In each case, the anchoring characteristic is the first part of the prompt, 'large sandcastle.' In one case (Midjourney, bottom), there are recognisable aspects of the source image, including the mosque's exterior ornamentation. But in most cases the 'castle' produced more closely resembles Disneyfied castle caricature, diverting quite starkly from the rectilinear form of the mosque. A recurring similarity in most of the images produced is a lack of surrounding built context, both the mosque, and the castle appear to be isolated monumental objects in the frame. In other respects, and despite the prompt specifying colours, camera type, and exposure time, the reference to sandcastle appears to over-determine the colour palette and saturation level. Compared to Figure 2 (H-M), in Figure 3 (M-M) the sand, of both castle and foreground, is lighter, and the sky clear rather than hazy. Keywords such as 'history' and 'archaeology' also change the sense of scale and context, with the implied camera position being now more distant. The scene is also deracinated: the form of the 'castle' is drawn from a wide range of typological and stylistic references, and though diminutive, the 'people' referenced in the prompt are dressed in global rather than 'local' attire, tourists who apprehend the monumental structure rather than locals who live around it. The presumed holder of the gaze is, in other words, no longer solely a figure imagined as behind the camera, but firmly embedded within it. ### Old City of Sana'a _Figure 4_ shows the original UNESCO image (top) of the Old City of Sana'a, along with two generated outputs from Midjourney 5.2: the first (middle, human-machine or "H-M") is the result of the UNESCO, human-authored description used as a prompt, and the second (bottom, machine-machine or "M-M") the output of the machine-generated prompt. 
In this case, the UNESCO description places emphasis on the cityscape, with phrases like 'rammed earth and burnt brick towers,' 'densely packed houses, mosques' but also specifies colours - 'white gypsum' 'bistre colored earth,' 'green bustans.' The BLIP generated prompt correctly identifies the image subject - 'old city of Yemen' but also the frame - 'an aerial view.' Here the machine-generated prompt was: photograph of an aerial view of the old city of yemen, **Building center, Sky, Daytime, Window, Architecture, Landscape, City, Urban design, Landmark, Cityscape, Facade, Roof, Human settlement, Urban area, Medieval architecture, Metropolis, Arch, Mixed-use, Archaeological site, Ancient history, Historic site, History, Turret, Dome, Town, Monument, Bird's-eye view, Tourism, Classical architecture, Holy places. Colors: #cdc3b8, #cfc2b1, #ab9b8a, #a69b93, #e7ddd2, #83766e, #887868, #e9dccb, #5f534d, #3f342e.**_Shot with a DSC-T9, at a resolution of 72 pixels per inch, year 2009, exposure time of 1/500, no flash, focal length of 1139/100_ As was with the case of the Midjourney outputs for Djenne, both generated outputs show a tendency to emphasise geographic features identified in the textual description or source image. Mountains are exaggerated; and in the 'H-M' case parts of the city hug a cliff-face and overlook a river, in sharp contrast to the source photo. Tonally the 'H-M' image also employs stronger use of contrast (brightly illuminated buildings on the left compared to those in shadow on the right), and a greater colour dynamic - browns, vivid blues, and varying greens - reflects the especially chromatic verbal description ('spacious green bustans'). The 'M-M' image on the other hand is strikingly similar, both in broad elements of architectural form and image composition, to the source. Just as with 'giant sandcastle' in the case of Djenne, here both the identification of aspect ('aerial view') and location ('old city of yemen') work to determine scale, perspective and chromatism of images for all three models. In the case of the selected Midjourney image, the identification of an 'arch' object by the Google API - barely discernible in the source image - is brought into the fore as a photographic conceeit, as 'found' frame for the distant cityscape. Though not evident in this source image, even another of the UNESCO images of Sana'a employs the same framing device - a convention of the'serious' or expert photographer the machine has learned to reproduce. Despite the inclusion of a palette extracted from the source though, the colours of the sky and buildings are once again Figure 4: Sana’a; Top: Original; Middle: Midjourney (H-M); Bottom: Midjourney (M-M) more lurid and saturated than those that appear in the official 'expert' gaze - a kind of machine equivalent to an Instagram filter designed to appeal instead to some imagined would-be tourist to the city. This last feature is unsurprising for several reasons: the training sets include more 'tourist' than 'expert' images, reinforced by the very inclusion of the term 'tourism', alongside 'archaeological history' in the generated prompt; more contemporary images featured in those training sets also use greater colour range than even those from the 2000s decade; the reference to a specific location; and finally, Midjourney itself is a commercial system that has been 'fine-tuned' to produce arresting images precisely through use of high contrast. And yet, in the final case we discuss here, this effect is in fact reversed. 
### Tombs of Buganda Kings at Kasubi _Figure 5_, featuring representations of the Tombs of the Buganda Kings at Kasubi, uses the same pattern as _Figure 4_: at the top is the original UNESCO image, followed by two synthetic images, this time generated by Stable Diffusion XL, selected for the purpose of contrast. The middle image (H-M) is again produced from the UNESCO textual prompt, while the bottom image (M-M), from a prompt constructed from machine-generated captions and image metadata. The UNESCO description in this case emphasises the materiality of structures, with the phrases 'organic materials,' 'wood, thatch, reed, wattle, and daub' but also references form 'circular and surmounted by a dome.' The machinic prompt locate the structure and identifies the image as a 'photograph of the roof (is) made of straw.' Machine-generated prompt: photograph of the roof is made of straw, **Building center, Cloud, Sky, Land lot, Tree, Thatching, Shade, Grass, Tints and shades, Roof, Monument, Triangle, Soil, Historic site, Symmetry, Landscape, Building material, Hut, House. Colors: #83726b, #9a7360, #392d29, #d7b9a4, #f2f3f6, #7c685d, #211918, #bb9783, #645650, #6b584b.**_Shot with a DSC-W50, at a resolution of 72 pixels per inch, year 2007, exposure time of 1/80, Flash did not fire, auto mode, focal length of 47/5_ The first photograph of the Kasubi tombs, representing a front elevational aspect to the main structure, focuses primarily on the symmetry of the structure, its materiality ('thatch and reed' in particular) and form, while the tight framing of the camera angle and the relative absence of other objects and context to add a sense of scale, creating a sense of monumentality of the fairly austere structure. The photograph of the single structure devoid of context emphasises a monumentality that is not reflected in the UNESCO description, which instead identifies intangible aspects of the tomb including the continuity of its use and its associated meaning. These non-visual cues acknowledge that the building aesthetics and form are not solely constitutive of its value as a heritage site. One of the generated images was a black and white photograph, which we assume is in response to the specific mention of dates (1882/1884) in the prompt potentially directing the colour scheme. The tonality, frame, and context of H-M image) are closest to photographs of the late nineteenth and early twentieth century archaeological surveys. The subject of the M-M image is notionally closer to the original in terms of morphology, materials, and a focus on roof form. The foreground landscape echoes the materiality of the subject, while the background reproduces vegetation and tonality often depicted in images of the African savanna. The tight framing of the structure in the photograph and the difficulty in assigning a sense of scale mimics the original image, but the central difference between the two is in framing the subject, which shifts the emphasis from the monumental in the original to something more vernacular in the M-M image. ## Anamorphosis and Heteroscopia: Two Properties of the Machinic Gaze The algorithmic reading and synthesis of the five UNESCO World Heritage Sites offers an interesting counterpoint to UNESCO's own textual descriptions. 
All five of the images, when read via BLIP-2, focused on the descriptions of form, scale, material, and composition, erasing any sense of aesthetic judgement or valuation and instead generating descriptions for precision and conciseness with varying levels of accuracy. For instance, while the caption generated for the historic centre of Sanaa accurately identified Figure 5: Kasubi; Top: UNESCO; Middle: SDXL (H-M); Bottom: SDXL (M-M) 'an aerial view of the old city of Yemen', the caption generated for the photograph of the Old Towns of Djenne was 'a large sand castle with people walking in front of it' while the photograph of the Tombs of Buganda Kings at Kasubi was 'the roof is made out of straw'. The misrecognition of the Great Mosque of Djenne as a sandcastle reflects perhaps most clearly the distortion introduced by a machinic reading of this kind. However, even the simplification of the Kasubi tombs to essentially an image of a roof allows us to reflect upon our own interpretation of the five images as sites of globally recognised heritage. The second layer of algorithmic reading of the image, via Google Vision's API followed a mathematical extraction based on probabilistic interpretation. In each of the images, elements such as'sky,' 'grass,' and 'building center,' were identified, alongside other identifying descriptors such as'medieval architecture', 'arch,' 'archaeological site', but also specific descriptions such as 'Classical architecture', or 'Byzantine architecture'. Occasionally seemingly contradictory descriptors would be generated for the same image, once again illustrating the slippage between image, and meaning in the absence of a referent informing the machinic gaze. To make sense of the specificity of machinic gaze we borrow two terms from media analysis that have special relevance to current machine learning techniques of image consumption and production: anamorphosis and heterotopia. For Lacan (1977), anamorphosis refers to the oblique angle that an image requires of a human subject to interpret its representation. In our treatment here, we argue the machine produces - to varying degrees - skewed perspectives and alternative identifications that in turn pose the reflexive question of how culturally settled ways of seeing - as tourist, as heritage expert and so on - are themselves constituted. A mosque is interpreted as a 'giant sand castle' and image synthesis then renders this as an artefact that appears to violate connotations of the original object: as something to be gazed upon with reverence, to be preserved, and so on. This property is present most obviously as a form of technical error that needs to be corrected, either in the machine or in the specific instructions or prompts given to it. However, the error is at the same itself generative of questions as to how and why we perceive it _as_ an error - what in other words conditions our own acts of seeing and judging. Broadly relevant to how we perceive the outputs of generative AI in any context, in the field of heritage it queries, almost in the form of an implicitly theoretical demand, how we attribute aesthetic value to certain objects over others. In concrete terms, the machinic gaze is partly anamorphic with respect to human modes of perception due to the methods employed in their training. 
Images are processed iteratively, first with coarse filters that aim to identify, for example, horizontal and vertical lines, then with finer filters that progressively distinguish more subtle gradations in form and colour. Buildings - as relatively geometrically regular objects of a certain scale - are likely to be seen as alike, regardless of functional distinctions between, for example, a place of worship and a playful structure designed to imitate a castle. Such distinctions, if they feature at all, depend in turn upon the relative mass of images and labels in the training set. Hence the apparent confusion between a certain type of mosque and a sandcastle reflects the proportionate mass of labelled images of Djenne, relative to other sites - and the corresponding value attributed, in the human (tourist and expert) gaze, to that site. To 'correct' this error would involve different practices of touristic attention (or modified weightings of the training data), to better 'align' this vision with human expectation. Conversely, it is precisely this orthogonal or anamorphic perspective that in turn reflects upon existing practices of human observation and perspective - the privileging of certain sites over others, the concentration of canonical representations of'mosques' and 'castles' and re-projection of localised settings into the global imaginary of tourism and heritage. As our explorations also show, the visual machines of Stable Diffusion and Midjourney - considered as an algorithm, a neural network, or an AI system - already'see' as a multiplicity. To describe this, we make use of the concept of _heteroscopia,_ a term coined by Jaireth (2000). Jaireth gives heteroscopia two meanings: the first refers to a general scopic regime or visual culture of a historical period, while the second refers to the ways a given image may incorporate or reference other images, and so be more or less heteroscopic. Our own use adapts Jaireth's concept to the context of computer vision. In these systems, all image outputs are heteroscopic in Jaireth's second sense: they come from nowhere other than an archive of existing images and are necessarily and only images constituted by other images. But in this machinic context we argue for two other meanings of the term. First, 'hetero' signifies something 'other' - an inhuman 'gaze' that is sometimes banal, sometimes nonsensical, sometimes sublime, but always holding a fundamentally different relationship to the constellation of object-image-text. The technical act of "diffusion" in models like MidJourney and Stable Diffusion involves a twin process of adding and subtracting noise to a large corpus of images to learn to discriminate forms, styles, and colour compositions (Croitoru et al., 2023). Models are trained to compose images, in other words, solely via a 'universe' of other images and their accompanying labels. They are heteroscopic in the sense that no human could ever work from a written demand to an image in a comparable way. The second meaning of heteroscopia constitutes part of our claim for the need for human interpretation of machine-generated images. Heteroscopia in this sense refers to multiplicity of 'gazes' these systems make available in response to prompts. This is not simply a case, we argue, of prompts containing stylistic cues that direct the system to produce images concordant with, for instance, photorealism, surrealism, pointillism, or some other aesthetic effect. 
In addition, these systems build networks between prompt terms that unveil different - and sometimes, to human eyes, quite novel - imagistic representations that require a frame of interpretation. In the context of the heritage image, we need a language that describes how, for instance, efforts to reproduce the old towns of Djenne construct instead whimsical sandcastles that impossibly dwarf human characters in the foreground. No existing heritage taxonomic overlay can quite work to make sense of these creations, and even existing artistic nomenclature would struggle to 'locate' these examples of machine heteroscopia. This step of algorithmic reading, which is devoid of the human 'expert' or the 'tourist' gaze that relies on a constant referent to ideas of heritage value but instead focuses purely on elements of the image reveals the extent of meaning we implicitly attach to images of heritage sites. Deploying the machine gaze towards heritage photographs allows us to occupy a position of tourist or expert or in some cases both, but in each case, we are able to reflect upon the presumed author / generator of the image. On the other hand, multiple historic and visual referents are embedded within each of the five UNESCO site descriptions. Read alongside the description, the image of the Djenne is inscribed with multiple aesthetic judgements and associated ideas of heritage 'value.' ## Heritage and the Machine Gaze The privileging of the visuality and aesthetics of heritage sites in UNESCO is, we argue, distorted, and refracted through the machine gaze, and through the operations we identify as anamorphosis and heteroscopia. In highlighting elements of both similarity and difference, through visual representation, the fetishisation of architectural and artistic form, ornamentation, and material can be examined through both sets of images. In the first set, where the human textual description is used as visual description / reinsertion, we observe a greater degree of diversity in both subject and framing, but consistent in the images produced is a privileging of a certain kind of aesthetic that aligns to the idea of heritage value being inscribed and prescribed visually. In the second set, which is produced through a machinic reading and resynthesis, even though the subject of the image shifts substantially, the framing does not. We argue that heteroscopia and anamorphosis help to cluster and aggregate these features into refracted and concentrated delineations that otherwise exist as more diffused tendencies or proclivities: how the tourist and the expert sees. These tendencies appear more or less evident across two of the site / model / prompt combinations. Sana'a (with Midjourney) is reproduced through something like the tourist gaze - imagined at a distance, with saturated colour -- while outputs prompted by Kasubi (with SDXL) prompts appear closer to an expert's view - muted palette, with the photographic subject brought to the foreground. Djenne shares elements of both, but veers into alternative registers of the cinematic and fantastical. In calling for such interpretations, the machine here acts to bring these gazes themselves into focus. And with the act of interpretation itself, we move invariably away from attention to purely quantitative variances - inherent in the very mechanisms by which machine learning techniques aim to approximate a training set - to emphasise instead a process of human judgement and critique. 
We conclude on a speculative note about the effects of this process. MacCannell's (2001) critique of Urry's concept of the tourist gaze stresses the reflexive character of the gaze left out in treatments that discuss solely its dispositional, surveillance and outward looking aspects. In Lacan's treatment of the gaze, which MacCannell draws upon, its significance is the drawing back in of the viewing subject into the picture or
2302.13145
Absynthe: Abstract Interpretation-Guided Synthesis
Synthesis tools have seen significant success in recent times. However, past approaches often require a complete and accurate embedding of the source language in the logic of the underlying solver, an approach difficult for industrial-grade languages. Other approaches couple the semantics of the source language with purpose-built synthesizers, necessarily tying the synthesis engine to a particular language model. In this paper, we propose Absynthe, an alternative approach based on user-defined abstract semantics that aims to be both lightweight and language agnostic, yet effective in guiding the search for programs. A synthesis goal in Absynthe is specified as an abstract specification in a lightweight user-defined abstract domain and concrete test cases. The synthesis engine is parameterized by the abstract semantics and independent of the source language. Absynthe validates candidate programs against test cases using the actual concrete language implementation to ensure correctness. We formalize the synthesis rules for Absynthe and describe how the key ideas are scaled-up in our implementation in Ruby. We evaluated Absynthe on SyGuS strings benchmark and found it competitive with other enumerative search solvers. Moreover, Absynthe's ability to combine abstract domains allows the user to move along a cost spectrum, i.e., expressive domains prune more programs but require more time. Finally, to verify Absynthe can act as a general purpose synthesis tool, we use Absynthe to synthesize Pandas data frame manipulating programs in Python using simple abstractions like types and column labels of a data frame. Absynthe reaches parity with AutoPandas, a deep learning based tool for the same benchmark suite. In summary, our results demonstrate Absynthe is a promising step forward towards a general-purpose approach to synthesis that may broaden the applicability of synthesis to more $\ldots$
Sankha Narayan Guria, Jeffrey S. Foster, David Van Horn
2023-02-25T19:27:56Z
http://arxiv.org/abs/2302.13145v2
# Absynthe: Abstract Interpretation-Guided Synthesis

Sankha Narayan Guria, University of Maryland, USA
Jeffrey S. Foster, Tufts University, USA
David Van Horn, University of Maryland, USA

###### Abstract.

Synthesis tools have seen significant success in recent times. However, past approaches often require a complete and accurate embedding of the source language in the logic of the underlying solver, an approach difficult for industrial-grade languages. Other approaches couple the semantics of the source language with purpose-built synthesizers, necessarily tying the synthesis engine to a particular language model. In this paper, we propose Absynthe, an alternative approach based on user-defined abstract semantics that aims to be both lightweight and language agnostic, yet effective in guiding the search for programs. A synthesis goal in Absynthe is specified as an abstract specification in a lightweight user-defined abstract domain and concrete test cases. The synthesis engine is parameterized by the abstract semantics and independent of the source language. Absynthe validates candidate programs against test cases using the actual concrete language implementation to ensure correctness. We formalize the synthesis rules for Absynthe and describe how the key ideas are scaled-up in our implementation in Ruby. We evaluated Absynthe on SyGuS strings benchmark and found it competitive with other enumerative search solvers. Moreover, Absynthe's ability to combine abstract domains allows the user to move along a cost spectrum, _i.e.,_ expressive domains prune more programs but require more time. Finally, to verify Absynthe can act as a general purpose synthesis tool, we use Absynthe to synthesize Pandas data frame manipulating programs in Python using simple abstractions like types and column labels of a data frame. Absynthe reaches parity with AutoPandas, a deep learning based tool for the same benchmark suite. In summary, our results demonstrate Absynthe is a promising step forward towards a general-purpose approach to synthesis that may broaden the applicability of synthesis to more full-featured languages.

Key words and phrases: program synthesis, abstract interpretation

## 1. Introduction

Past approaches to synthesis often require a complete and accurate embedding of the source language in the logic of the underlying solver, e.g., via symbolic execution [Torlak and Bodik 2014], counter-example guided synthesis [Solar-Lezama 2013], or over-approximate semantics as predicates [Feng et al. 2017; Kim et al. 2021; Polikarpova et al. 2016] (often requiring termination measures and additional predicates for verification). This is infeasible for many industrial-grade languages such as Ruby or Python. Other approaches strongly couple the semantics of the source language with purpose-built solvers [Reynolds et al. 2015], but this necessarily ties the synthesis engine to the particular language model used. In this paper, we propose Absynthe, an alternative approach based on user-defined abstract semantics that aims to be both lightweight and language agnostic. The abstract semantics are lightweight to design, simplifying away inconsequential language details, yet effective in guiding the search for programs. The synthesis engine is parameterized by the abstract semantics [Cousot and Cousot 1977] and independent of the source language. In Absynthe, users define a synthesis problem via concrete test cases and an abstract specification in some user-defined abstract domain. These abstract domains, and the semantics of the target language in terms of the abstract domains, are written by the user in a DSL.
Moreover, the user can define multiple simple domains, each defining a partial semantics of the language, which they can combine together as a product domain automatically. Absynthe uses these abstract specifications to automatically guide the search for the program using the abstract semantics. The key novelty of Absynthe is that it separates the search procedure from the definition of abstract domains, allowing the search to be guided by any user-defined domain that fits the synthesis task. More specifically, the program search in Absynthe begins with a hole tagged with an abstract value representing the method's expected return value. At each step, Absynthe substitutes this hole with expressions, potentially containing more holes, until it builds a concrete expression without any holes. Each concrete expression generated is finally tested in the reference interpreter to check if it passes all test cases. A program that passes all tests is considered the solution. (§ 2 gives a complete example of Absynthe's synthesis strategies.) We formalize Absynthe for a core language \(\mathcal{L}_{f}\) and define an abstract interpreter for \(\mathcal{L}_{f}\) in terms of abstract transformer functions. Next, we describe a DSL \(\mathcal{L}_{meta}\) used to define these abstract transformers. Notably, as Absynthe synthesizes terms at each step, it creates holes tagged as abstract variables \(\tilde{x}\), _i.e.,_ holes which will be assigned a fixed abstract value later. We give evaluation rules for these transformers written in \(\mathcal{L}_{meta}\), which additionally narrow these abstract variables to a sound range of abstract values. For example, given a specification that requests Pandas programs that should evaluate to a data frame, a term \((\Box:\tilde{x}_{1})\).query\((\Box:\tilde{x}_{2})\) is a viable candidate that queries a data frame. However, the semantics of \(\mathcal{L}_{meta}\) help constrain the bounds on \(\tilde{x}_{1}\) and \(\tilde{x}_{2}\) such that these holes are substituted by values of type DataFrame and String, respectively. Finally, we present the synthesis rules used by Absynthe to generate such terms. Specifically, we discuss how Absynthe specializes term generation based on the properties of the domain: a finite domain enables enumeration through the domain, a domain that can be lifted to a solver can use solver-backed operations, and domains expressed as general computations, which are not supported by dedicated SMT solvers, fall back to plain enumeration. (§ 3 discusses our formalism.) We implemented Absynthe as a core library in Ruby that provides the necessary supporting classes to implement user-specific abstract domains and abstract interpretation semantics. It further integrates automatic support for \(\top\) and \(\bot\) values and abstract variables, as well as ProductDomain to combine the individual domains point-wise. The Absynthe implementation has interfaces to call a concrete interpreter with a generated program to check if a program satisfies the input/output examples. Finally, we also discuss some optimizations to scale Absynthe to practical problems, such as caching small terms and guessing partial programs based on testing predicates on the input/output examples, and some limitations of the tool. (§ 4 discusses our implementation.) We evaluate Absynthe as a general-purpose tool on a diverse set of synthesis problems while being at par in performance with state-of-the-art tools.
We first use Absynthe to solve the SyGuS strings benchmarks (Alur et al., 2017) using simple domains such as string prefix, string suffix, and string length to guide the search. Though Absynthe operates with minimal semantic information about SyGuS programs, it still performs similarly to enumerative search solvers such as EuPhony (Alur et al., 2017), solving most benchmarks in less than 7 seconds. SMT solvers such as CVC4, or tools like Blaze that rely on precise abstractions, perform much faster than Absynthe, but require a larger specification effort. We further evaluate the impact of our performance optimizations and verify that Absynthe's synthesis cost adjusts with the expressiveness of the domain. More specifically, the string prefix and suffix domains written in Ruby generate a concrete candidate in 0.41ms on average, whereas the string length domain, being a solver-aided domain, takes around 16.93ms per concrete candidate on average due to calls to Z3. Next, we use Absynthe to synthesize an unrelated benchmark suite, for which it is harder to write precise formal semantics--Python programs that use Pandas, a data frame manipulation library. We evaluate Absynthe on the AutoPandas benchmarks (Bavishi et al., 2019), a suite of Pandas data frame manipulating programs in Python. The AutoPandas tool trains deep neural network models to synthesize Pandas programs. Absynthe is at par with AutoPandas, including a significant overlap in the benchmarks both tools can solve, despite using simple semantics such as types and column labels of a data frame, while running on a consumer Macbook Pro without specialized hardware requirements. (§ 5 discusses our evaluation.) In summary, we think Absynthe represents an important step forward in the design of practical synthesis tools that provide lightweight formal guarantees while ensuring correctness from tests.

## 2. Overview

In this section, we demonstrate Absynthe by using it to synthesize data frame manipulation programs in Python using the Pandas library (Reback et al., 2022). In this example, we abstract data frames as sets of column names, and use a lightweight type system for Pandas API methods to effectively guide synthesis. A _data frame_ is a collection of data organized into rows and columns, similar to a database table. Data frame manipulation is a key task in data wrangling, a preliminary step for data science or scientific computing tasks. For example, Figure 1 shows a data frame manipulation synthesis task taken from the AutoPandas benchmark suite (Bavishi et al., 2019). The goal is to use the Python Pandas library (Reback et al., 2022) to produce the data frame in Figure 1(d), given the two input data frames in Figure 1(a) and 1(b) and a query string (Figure 1(e)). In this case, the output joins the input rows with the same id but with different values in the valueA and valueB columns.

Figure 1. Data Frame Manipulation Example (Bavishi et al., 2019). The synthesis goal is to produce (d) given inputs (a), (b), and a query string (e). Absynthe synthesizes the solution (c).

The Pandas library provides a wide range of methods that perform complex data frame manipulation. For example, calling left.merge(right, on: ['col']) joins the data frames left and right on column col. As another example, calling df.query(str) returns a new data frame with the rows of df that satisfy the query string predicate str (as in Figure 1(e)).
To keep the synthesis task tractable, Absynthe restricts its search to Python code consisting of input variables arg0 through arg2; constants such as column names 'id', 'valueA', and 'valueB' or row labels 0, 1, \(\ldots\), 13 from the data frames; array literals and indexing; and dictionaries (for keyword arguments). Additionally, for this discussion we will limit Absynthe to the merge and query methods just mentioned, even though our evaluation (§ 5.2) supports many more methods. Nonetheless, even with this restricted search space, naive enumeration of possible solutions times out after 20 minutes. In contrast, using Absynthe, we can guide the search using abstract interpretation to find a solution in 0.47 seconds.

Figure 2: Abstract domain definition for the column names domain (a) and the types domain (c). Abstract semantics for the required methods are defined in (b) using the ColNames domain and (d) using the PyType domain.

_Abstract Domains and Semantics_. The first step in using Absynthe is to identify appropriate abstract domains for the abstract interpretation and implement the abstract semantics. Typically, we develop the domain by looking at the input/output examples and thinking about the problem domain. In our running example, we observe that the data frames use columns id, valueA, and valueB, but each frame has a slightly different set of columns. This gives us the idea of introducing a domain ColNames that abstracts data frames to a set of column labels. Abstract values, drawn from an abstract domain, represent a set of concrete values in the program. The abstract semantics define the evaluation rules of the program under values from this abstract domain. This approach has seen considerable success in practical static analysis tools such as ASTREE [Cousot et al. 2005] and Sparrow [Oh et al. 2012]. Figure 2(a) shows, similar to these tools, the definition of the ColNames domain, which is a class whose instances are domain values. Absynthe is implemented in Ruby, and Absynthe domains subclass AbstractDomain, which provides foundational definitions such as \(\top\) and \(\bot\) (see § 4). A value in the ColNames domain stores the set of columns it represents in the instance variable @cols, which (by line 2) can be read with the accessor method cols. All abstract domains _require_ a partial ordering relation \(\subseteq\) on the domain; it returns true if and only if the first column label set (rhs.cols) is a subset of the second set (@cols). Finally, the \(\cup\) method returns a new abstract value containing the union of the column names of the two arguments. The \(\cup\) method is optional; however, we define it here as it will be used in the abstract semantics. After defining the abstract domain, next we need to define the abstract interpreter to give semantics to the target language in our abstract domain. Figure 2(b) defines the abstract interpreter for the ColNames domain as the ColNameInterp class. All abstract interpreters are defined as a subclass of the AbsInterp class, provided by Absynthe. It needs a definition of the interpret class method (the preceding self. denotes it is a class method), which, given an environment env and a term prog, reduces it to a value of type ColNames. The interpret method is a standard recursive interpreter, so we omit its definition for brevity. Then we define the pd_merge and pd_query class methods that define the operations for the Pandas merge and query methods on values from the ColNames domain.
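Figure 2 itself is not reproduced in this text; the following is a minimal, self-contained Ruby sketch of what such a domain and interpreter look like based on the description above. The names subsumes? and union stand in for the \(\subseteq\) and \(\cup\) methods mentioned in the text, the AbstractDomain stub replaces Absynthe's real base class, and the line numbers cited in the next paragraph refer to the paper's original figure, not to this sketch.

```ruby
require 'set'

# Minimal stand-in for Absynthe's AbstractDomain base class, which also
# provides top, bottom, and abstract-variable machinery (see Section 4).
class AbstractDomain; end

# Abstract domain: a data frame is abstracted to its set of column labels.
class ColNames < AbstractDomain
  attr_reader :cols

  def initialize(cols)
    @cols = cols.to_set
  end

  # Partial order (the ⊑ method in the text): rhs ⊑ self iff rhs's labels
  # are a subset of this value's labels.
  def subsumes?(rhs)
    rhs.cols.subset?(@cols)
  end

  # Join (the ∪ method in the text), used by the abstract semantics of merge.
  def union(rhs)
    ColNames.new(@cols | rhs.cols)
  end
end

# Abstract transformers for the two Pandas methods used in the example.
class ColNameInterp
  # left.merge(right, opt): the result has the union of both column sets;
  # opt only affects how rows are merged, not which columns appear.
  def self.pd_merge(left, right, _opt)
    left.union(right)
  end

  # df.query(pred): filtering rows leaves the columns unchanged.
  def self.pd_query(df, _pred)
    df
  end
end

left  = ColNames.new(['id', 'valueA'])
right = ColNames.new(['id', 'valueB'])
ColNameInterp.pd_merge(left, right, nil).cols  # => {id, valueA, valueB}
```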
A call to left.merge(right, opt) in the source term under abstract interpretation is computed via a call pd_merge(abs(left), abs(right), abs(opt)), where abs() indicates the abstract values of the arguments. In the column name abstraction, we only need to compute the column names of the resulting data frame, which is just the union of the column names of the input data frames (line 7). Notice the opt argument can be ignored, as it impacts how the data frames are merged in the concrete domain, but the set of column names of the final data frame is unaffected. Similarly, a call to df.query(pred) is abstractly evaluated via a call to pd_query(abs(df), abs(pred)). Since the data frame returned by query has the same columns as its input data frame, pd_query simply returns the abstract data frame df (line 11). Absynthe can also combine multiple domains together pointwise. We observe that the Pandas API methods expect values of a specific type. Hence, we also introduce a PyType abstract domain as a lightweight type system for Python. Figure 2(c) defines the abstract domain, which stores a type in the @ty field as a type from RDL [Foster et al. 2020], a Ruby type system. We build on RDL for representing Python types because it comes with built-in representations for nominal types, generic types, etc. and a subtyping relation between them. The \(\subseteq\) method for PyType simply calls the subtyping method \(\leq\) of RDL types. The subtyping method \(\leq\) is a special case of the partial ordering relation \(\subseteq\). Figure 2(d) gives the abstract semantics for merge and query in the PyType domain. The method pd_merge checks that the types of left and right are subtypes of DataFrame, _i.e._, the type that represents Pandas data frames as shown in Figure 1, and that opt is a dictionary with a key on that admits an array of strings. If this check is satisfied, the return type is DataFrame. Otherwise, pd_merge returns nil, which Absynthe interprets as \(\top\), _i.e._, any value is possible. Note that in a type checker, if the arguments do not match the expected types, a type error occurs. Here, in contrast, we are computing what would be a valid abstraction, and since we don't have a specific type we can assume \(\top\), _i.e._, anything can happen. Later, during synthesis, the search procedure will appropriately do the pruning by _type-checking_ when it is provided a user specification. pd_query also checks if the receiver is a subtype of DataFrame and the query string is a String. If so, it returns DataFrame; otherwise it returns nil. These domains are combined together using a ProductDomain class, provided by Absynthe. Here we write \(\times\) to pair elements from the ColNames domain and the PyType domain. For example, {'id', 'valueA'} \(\times\) DataFrame denotes all data frames that have the columns 'id' and 'valueA'. The ProductDomain also comes with a ProductInterp that evaluates product domain values with the respective individual semantics and combines these into a final product abstract value.

_Synthesizing Solutions_. An Absynthe synthesis problem is specified by giving input/output examples for the synthesized function. Synthesis begins by abstractly interpreting the input/output examples to compute an _abstract signature_ for the function. We have automated this for the AutoPandas benchmark suite. The upper-right corner of Figure 3 gives the abstract signature for our example.
In particular, the first argument is a DataFrame with columns 'id' and 'valueA'; the second argument is a DataFrame with columns 'id' and 'valueB'; and the third argument is a String and has no columns. The synthesized function should return a DataFrame that has columns 'id', 'valueA', and 'valueB'. Additionally, Absynthe also uses a set of constants that can be used during the synthesis process. It constructs this from the rows and columns of the data frames in the input/output example: {'id', 'valueA', 'valueB', \(0,1,\ldots,13\)}. Absynthe iteratively produces candidate function bodies that may contain _holes_ \(\Box:a\), where each hole is labeled with the abstract value \(a\) its solution must abstractly evaluate to.

Figure 3: Steps in the synthesis of the solution to the problem in Figure 1. Some choices available to the synthesis algorithm have been omitted for simplicity.

Synthesis begins (left side of Figure 3) with candidate C0, which is a hole labeled with the abstract return value of the function. At each step, Absynthe replaces a hole with an expression that satisfies its label. For example, candidate C1 is not actually generated because its concrete value 'valueA' is not of type DataFrame. The process continues until the program has been fully concretized, at which point it is tested in the Python interpreter against the input/output examples. Synthesis terminates when it finds a candidate that matches the input/output examples. For our running example, Figure 1(c) shows the solution synthesized by Absynthe. The rest of the figure illustrates the search process. The candidate C2 does not satisfy the abstract specification on columns, so it is also never generated. The candidate C3 instead expands the hole to a call to query, which itself has holes for the receiver and argument. Note we omit the abstraction labels here because Absynthe has not fixed the abstract value for that hole yet. Absynthe treats these as abstract variables that can be used during abstract interpretation, but they will eventually be substituted with a fixed abstract value as the search proceeds (discussed in § 3). After a single round of hole expansion, Absynthe runs the abstract interpreter on all candidates (including the partial programs). Running C3 through the abstract interpreter calls the pd_query function from PyTypeInterp (Figure 2d). From the evaluation of pd_query, Absynthe can infer that the first hole has to be a subtype of DataFrame and the second hole should be a subtype of String. Thus candidates like C4 will not be generated, as they are ill-typed (arg0 is a DataFrame). We use filling the remaining hole in C5 to illustrate another feature of Absynthe, enumerating finite abstract domains. Since Absynthe has the upper bound of this hole at DataFrame, it will substitute all possible values from PyType that are subtypes of DataFrame. Since there is only one such type, _i.e._, DataFrame, it synthesizes expressions of that type at the hole. For the next candidate C6, bounds for \(\Box\) are again determined by running the abstract interpreter. The _ in the \(\Box\) signifies that the ColNames component is still an abstract variable, while the types have been concretized. Absynthe can determine bounds for variables only if the abstract transformers have conditionals (discussed in § 3.1), which are not present in pd_merge of ColNameInterp. Running the abstract interpreter eliminates candidate C7, as the partial program will not satisfy the synthesis goal.
Eliminating partial programs removes a family of concrete programs, narrowing the search space further. Absynthe next generates candidates C8 and C9. C8, however, is eliminated because the ColNames domain interpreter computes that the final data frame will have columns {'id', 'valueA'}. Eventually, the keyword argument to the merge method is filled with an array. Some ways of filling that argument fail the test cases (C10), but C11 passes all tests and is accepted as the solution (after being wrapped in a Python method definition), also shown in Figure 1(c).

## 3. Formalism

In this section we formalize Absynthe in a core language \(\mathcal{L}_{f}\). Figure 4(a) shows the \(\mathcal{L}_{f}\) syntax. Expressions in \(\mathcal{L}_{f}\) have values \(v\), drawn from a set of _concrete values_ \(\mathbb{V}\); variables \(x\); holes \(\Box:a\) tagged with an abstraction \(a\); and function application \(f(e,\ldots,e)\). Note that these are external functions \(f\), e.g., to call out to libraries. Programs in \(\mathcal{L}_{f}\) consist of a single function definition \(\textbf{def}\ m(x)=e\) of a function \(m\) that takes an argument \(x\) and returns the result of evaluating \(e\). Abstractions \(a\) include abstract values \(\tilde{v}\) drawn from an abstract domain \(\mathbb{A}\). We assume this domain forms a complete lattice with greatest element \(\top\), least element \(\bot\), and partial ordering \(a_{1}\subseteq a_{2}\). Abstractions also include _abstract variables_ \(\tilde{x}\), which Absynthe uses to label holes whose abstractions cannot immediately be determined. For example, if Absynthe synthesizes an application of a function \(f\), it labels \(f\)'s arguments with abstract variables. During synthesis, Absynthe maintains bounds on such variables to narrow down the search space (see below). We refer to abstract values from § 2 as _abstractions_ in this section to avoid the ambiguity between abstract variables and values. Concrete values are lifted to abstract values using the abstraction function \(\alpha\), mapping concrete values to abstract values, _i.e._, \(\alpha\) maps \(\mathbb{V}\) to \(\mathbb{A}\). Likewise, abstract values map to a set of concrete values using the concretization function \(\gamma\), _i.e._, \(\gamma\) maps \(\mathbb{A}\) to \(\mathcal{P}(\mathbb{V})\). We write \(v\in\tilde{v}\) as a shorthand for checking that \(v\) is in the concretization of \(\tilde{v}\). We assume that for each function \(f\), we have a corresponding abstract transfer function \(f^{\#}\) that soundly captures its semantics. Finally, during synthesis, Absynthe maintains two variable environments: \(\Gamma\), binding variables \(x\) to their abstractions, and \(\Delta\), binding abstract variables \(\tilde{x}\) to their bounds. Abstract variable bounds are written as a tuple of the lower and upper bound respectively (details in § 3.1).

Figure 4. Syntax, relations, and abstract semantics of \(\mathcal{L}_{f}\).

_Abstract Semantics_. Next, we define semantics to abstractly interpret candidate programs in our domain. Figure 4(b) presents the relation \(\Gamma\vdash e\Downarrow a\) that, given an abstract environment \(\Gamma\), evaluates an expression \(e\) to an abstract value \(a\). E-Val lifts a concrete value to the abstract domain by applying the abstraction function. E-Var lifts a variable to an abstract value by substituting the value from the environment \(\Gamma\). E-Hole abstractly evaluates a hole to its label. Finally, E-Fun recursively evaluates a function application's arguments and then applies the abstract transfer function \(f^{\#}\).
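The rules of Figure 4(b) are not reproduced here; as a rough sketch reconstructed from the prose above (the paper's exact presentation may differ), the four evaluation rules have the following shape:

\[
\frac{}{\Gamma \vdash v \Downarrow \alpha(v)}\;\textsc{E-Val}
\qquad
\frac{\Gamma(x) = a}{\Gamma \vdash x \Downarrow a}\;\textsc{E-Var}
\qquad
\frac{}{\Gamma \vdash \Box:a \Downarrow a}\;\textsc{E-Hole}
\]
\[
\frac{\Gamma \vdash e_{1} \Downarrow a_{1} \quad \cdots \quad \Gamma \vdash e_{n} \Downarrow a_{n}}
     {\Gamma \vdash f(e_{1},\ldots,e_{n}) \Downarrow f^{\#}(a_{1},\ldots,a_{n})}\;\textsc{E-Fun}
\]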
_Synthesis Problem._ We can now formally specify the synthesis problem: Given an abstract domain \(\mathbb{A}\), a set of abstract transformers \(f^{\#}\), and an abstract specification of the function's input and output \(a_{1}\to a_{2}\), synthesize a set of programs \(P\) such that NoHole(\(P\)), i.e., \(P\) has no holes in it, and \(x:a_{1},\emptyset\vdash P\Downarrow a_{2}\), i.e., \(P\) abstractly evaluates to \(a_{2}\) given that \(x\) has abstract value \(a_{1}\). Then, the final solution is chosen as a synthesized candidate \(P\) that passes all input/output examples.

### Abstract Transformer Function DSL

Figure 5(a) shows \(\mathcal{L}_{meta}\), the DSL to define abstract transformer functions \(f^{\#}\) for Absynthe. The primary purpose of the DSL is to let users define \(f^{\#}\) functions that can handle both abstract values \(a\) and variables \(\tilde{x}\) correctly. It is expressive enough to write the abstract transformer functions for the domains in § 2. Expressions \(\hat{e}\) in \(\mathcal{L}_{meta}\) can be abstractions \(a\), variables \(y\), function applications \(g(\hat{e})\), or if-then-else statements. We consider \(g\) as uninterpreted abstract functions. The conditionals \(b\) for if statements include **top?**, which tests if an expression is \(\top\); **bot?**, which tests if an expression is \(\bot\); **var?**, which tests if an expression is an abstract variable \(\tilde{x}\); and **val?**, which tests if an expression is an abstract value \(a\). Additionally, expressions \(\hat{e}\) can test for ordering using \(\subseteq\) or can call an abstract function \(g(\hat{e})\). The else branch of these conditionals evaluates to \(\top\), _i.e._, it evaluates to the largest possible abstraction \(\top\) if a test of ordering fails. This is done to soundly over-approximate program behavior, while sacrificing precision. The abstract transformer is defined as a function \(f^{\#}\) that takes the input abstract value as argument \(y\) and computes the output abstraction by evaluating the expression \(\hat{e}\). Figure 5(b) shows selected big-step evaluation rules for the abstract transformer functions written in \(\mathcal{L}_{meta}\). Under an abstract environment \(\Gamma\) and a bounds environment \(\Delta\), an expression \(\hat{e}\) evaluates to a new bounds environment and a value \(v\). In general these rules reflect standard big-step semantics, except for the \(\subseteq\) operation, where the bounds get constrained because of the comparison. The rule A-IfT evaluates the branch condition \(b\) and evaluates \(\hat{e}_{1}\) if it is true. A similar rule (omitted here) can be written for when the conditional evaluates to false. A-TopT checks if the expression \(\hat{e}\) evaluates to \(\top\). We omit evaluation rules for the false case and other branching predicates such as **bot?**, **var?**, and **val?**, which are similar to **top?**. The rules for evaluating \(e\subseteq e\) are the most interesting, as these test for the \(\subseteq\) relation while constraining abstract variables \(\tilde{x}\) to the range under which the relation \(e\subseteq e\) holds. In general, the abstract variable narrowing reduces the range of \(\tilde{x}\) to a sound range for that evaluation through \(f^{\#}\). In effect, it finds a satisfiable range for \(\tilde{x}\) for that branch.

Figure 5: Syntax and evaluation rules of \(\mathcal{L}_{meta}\).
A-VarConst tests for the \(\subseteq\) relation when \(\hat{e}_{1}\) evaluates to a variable \(\tilde{x}\) and \(\hat{e}_{2}\) evaluates to a value \(\tilde{v}\). In such a case, if \(\tilde{v}\) is within the range of the variable \(\tilde{x}\), the term evaluates to true, while updating the upper bound of \(\tilde{x}\) to \(\tilde{v}\). This narrows the abstract variable while remaining sound, i.e., the partial order relation \(\subseteq\) still holds true. A similar symmetrical rule exists (omitted here) where the left-hand side evaluates to \(\tilde{v}\) and the right-hand side evaluates to \(\tilde{x}\). Finally, A-VarSub gives the rules for comparing two abstract variables \(\tilde{x}_{1}\) and \(\tilde{x}_{2}\). It uses a metafunction \(T\) to describe the cases where \(\tilde{x}_{1}\) is contained in \(\tilde{x}_{2}\), or has some overlap, or \(\tilde{x}_{1}\) is less than \(\tilde{x}_{2}\).

### Abstraction-Guided Synthesis

To perform abstraction-guided synthesis, Absynthe recursively replaces holes by suitable expressions and then tests fully concretized candidates. Figure 6 shows the rules for hole replacement. These rules prove judgments of the form \(\Delta,\Gamma\vdash e_{1}\rightsquigarrow e_{2}:a\), meaning that in bounds environment \(\Delta\) and abstract environment \(\Gamma\), expression \(e_{1}\) takes a step by replacing a hole in \(e_{1}\) to yield a new expression \(e_{2}\). In particular, S-Val replaces \(\Box:a\) with a value \(v\) from the concrete set that \(a\) abstracts. Similarly, S-Var replaces a hole with a variable that is compatible with the hole's label. The next few rules are used to generate function applications, or more generally, any term that may have more holes. First, S-Finite generates function applications when the domain from which \(a\) is drawn is finite, _e.g._, a simple type system without polymorphic types or first-class lambdas, or an effect system as used in Guria et al. (2021). This rule can produce multiple candidates, with each hole tagged with distinct abstract values from the domain. Second, for abstract domains with infinite values that can be represented in a background theory solver, Absynthe applies the S-Solve rule. If the function application requires \(n\) arguments, only \(n-1\) arguments are concretized to a term. This gives the constraint \(f^{\#}(a_{1},\ldots,a_{n})=a^{\prime}\) with only one unknown, \(a_{n}\), that can be solved for and assigned to the hole. For \(f^{\#}\) to be lifted to an SMT solver, \(f^{\#}\) should also have an interpretation in a background theory supported by the solver. This is useful for representing predicate abstractions or numeric domains such as intervals or string lengths (used in the SyGuS evaluation in § 5.1). Finally, S-Enumer replaces a hole with a function application with fresh abstract variables \(\tilde{x}_{i}\) for the arguments and return. Notice there is no guarantee \(f\) will produce a value of the appropriate abstraction. This is because, while we assume we have an abstract transfer function \(f^{\#}\), we do not know what abstraction it will compute without concretizing the arguments. However, unsound partial programs will be eliminated by the abstract interpreter as discussed below. Given only forward evaluation semantics and no other information about the domains, this is the best way to construct partial program candidates.

Figure 6. Hole replacement rules for \(\mathcal{L}_{f}\).
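As with the earlier figures, the hole-replacement rules themselves are not included in this text. A rough reconstruction from the prose of the two extreme cases, S-Val and S-Enumer, might look as follows (side conditions and bounds bookkeeping elided; the paper's exact rules may differ):

\[
\frac{v \in \gamma(a)}
     {\Delta,\Gamma \vdash \Box:a \rightsquigarrow v : a}\;\textsc{S-Val}
\qquad
\frac{\tilde{x}_{1},\ldots,\tilde{x}_{n}\ \text{fresh}}
     {\Delta,\Gamma \vdash \Box:a \rightsquigarrow f(\Box:\tilde{x}_{1},\ldots,\Box:\tilde{x}_{n}) : a}\;\textsc{S-Enumer}
\]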
Absynthe can switch between bottom-up synthesis (S-Enumer) and top-down goal-directed synthesis (the rest of the S- rules) depending on which rule is applied. While these rules are non-deterministic, the Absynthe implementation (§ 4) chooses and applies these rules for the correct domain in a fixed order to yield solutions.

```
 1: procedure Generate(\(a_{1}\to a_{2}\), maxSize)
 2:   \(\Gamma\leftarrow[x\mapsto a_{1}]\)
 3:   \(e_{0}\leftarrow\Box:a_{2}\)
 4:   workList \(\leftarrow[e_{0}]\)
 5:   while workList is not empty do
 6:     \(e_{curr}\leftarrow\) pop(workList)
 7:     \(\omega_{enumer}\leftarrow\{e_{t}\mid\Gamma\vdash e_{curr}\rightsquigarrow e_{t}:a\}\)
 8:     \(\omega_{valid}\leftarrow\{e_{t}\in\omega_{enumer}\mid\Gamma\vdash e_{t}\Downarrow a\wedge a\subseteq a_{2}\}\)
 9:     \(\omega_{eoad}\leftarrow\{e_{t}\in\omega_{valid}\mid\textsc{NoHole}(e_{t})\}\)
10:     \(\omega_{rem}\leftarrow\omega_{valid}-\omega_{eoad}\)
11:     for all \(e_{t}\in\omega_{eoad}\) do
12:       return \(e_{t}\) if TestProgram(\(e_{t}\))
13:     end for
14:     \(\omega_{rem}\leftarrow\{e_{t}\in\omega_{rem}\mid\textsc{size}(e_{t})\leq\) maxSize\(\}\)
15:     workList \(\leftarrow\) reorder(workList + \(\omega_{rem}\))
16:   end while
17:   return Error: No solution found
18: end procedure
```

**Algorithm 1** Synthesis of programs that pass a spec \(s\)

_Synthesis Algorithm._ Algorithm 1 performs abstraction-guided synthesis. The algorithm uses a work list and combines synthesis rules for candidate generation with search space pruning based on abstract interpretation, in addition to testing in a concrete interpreter. The ordering of programs in the work list determines the order in which program candidates are explored (discussed in § 4). The synthesis algorithm starts off with an initial candidate \(e_{0}\), a single hole, as a base expression in the work list. At every iteration it pops one item from the work list and applies synthesis rules (Figure 6) in a non-deterministic order to produce multiple candidates \(\omega_{enumer}\). Each candidate is abstractly interpreted, and then checked to see if the computed abstraction satisfies the goal abstraction. If it is satisfied, the candidate is added to the set of valid candidates \(\omega_{valid}\) (line 8). As partial programs with holes represent a class of programs, abstractly interpreting these eliminates a class of programs if they are not included in the goal \(a_{2}\). Thus, the algorithm iterates through partial programs which are _sound_ with respect to the abstract specification. Any unsound programs generated by S-Enumer are pruned here. Finally, all concrete programs \(\omega_{eoad}\) are tested in the interpreter to check if a program satisfies all test cases, in which case it is returned as the solution. The remaining programs \(\omega_{rem}\) contain holes, so these can be expanded further by the application of synthesis rules. Only programs below the maximum size of the search space are put back into the work list, and the order of the work list is always based on some domain-specific heuristics (§ 4 discusses our program ordering).

## 4. Implementation

Absynthe is implemented in approximately 3000 lines of Ruby excluding dependencies. It is architected as a core library whose interfaces are used to build a synthesis tool for a problem domain. Additionally, to support solver-backed domains, we developed a library (~460 lines) to lazily convert symbolic expressions to Z3 constraints and solve those in an external process.
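As an aside, the overall work-list structure of Algorithm 1 can be illustrated with a small, self-contained toy in Ruby, loosely inspired by the string domains used later in § 5.1. It is only an illustration of the search loop: the grammar, helper names, and data below are made up and are not part of Absynthe's API.

```ruby
# Toy illustration of Algorithm 1's work-list loop: candidates carry holes
# tagged with a target string length (the abstraction); expansion either fills
# a hole with a constant or splits it; abstract pruning discards candidates
# whose length cannot match the goal; hole-free candidates are tested concretely.

Hole = Struct.new(:len)          # a hole tagged with a target length (its abstraction)
Cat  = Struct.new(:left, :right) # string concatenation

# Abstract evaluation: the length a term is guaranteed to produce.
def abstract_len(e)
  case e
  when String then e.length
  when Hole   then e.len
  when Cat    then abstract_len(e.left) + abstract_len(e.right)
  end
end

# Concrete evaluation, only meaningful for hole-free terms.
def concrete(e)
  e.is_a?(Cat) ? concrete(e.left) + concrete(e.right) : e
end

def holes?(e)
  e.is_a?(Hole) || (e.is_a?(Cat) && (holes?(e.left) || holes?(e.right)))
end

# Expand the leftmost hole: fill it with a constant (S-Val style; unsuitable
# candidates are pruned by the abstract check in generate) or split it into
# two smaller holes (S-Enumer style).
def expand(e, consts)
  case e
  when Hole
    consts + (1...e.len).map { |k| Cat.new(Hole.new(k), Hole.new(e.len - k)) }
  when Cat
    if holes?(e.left)
      expand(e.left, consts).map { |l| Cat.new(l, e.right) }
    else
      expand(e.right, consts).map { |r| Cat.new(e.left, r) }
    end
  end
end

# Work-list loop: prune candidates whose abstraction misses the goal, keep
# partial programs for later expansion, and test fully concrete candidates.
def generate(target, consts)
  worklist = [Hole.new(target.length)]               # e0 = hole tagged with the goal abstraction
  until worklist.empty?
    cand = worklist.shift
    expand(cand, consts).each do |e|
      next unless abstract_len(e) == target.length   # abstract pruning
      if holes?(e)
        worklist << e                                # partial program, expand later
      elsif concrete(e) == target                    # concrete test against the example
        return e
      end
    end
  end
  nil
end

p generate("Dr. Bob", ["Dr. ", "Bob", "Smith"])
```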
Absynthe uses a term enumerator that, at each step, visits holes in a term and substitutes them with values or subterms containing more holes, applying the rules shown in § 3.2. Absynthe requires users to define a translation from the ASTs to the source program and a method that tests a candidate and returns whether the test passed. Users may provide a set of constants for the language, which are used as values in the concretization function. In practice, this is useful when the language has an infinite set of terminals (like Python), and selecting values from the set of constants makes the term generation tractable. For the AutoPandas benchmarks, we infer such constants from the data frame row and column labels (§ 2). Absynthe explores program candidates in order of their size, preferring smaller programs first (line 15 of Algorithm 1). We plan to explore other program exploration orders in future work. The synthesis rules presented in § 3.2 are non-deterministic; however, our implementation fixes an order of application for these rules. It prefers to synthesize constants and variables followed by function applications, hashes, arrays, etc. Moreover, based on the definition of abstract domains (discussed below), it can automatically choose to apply the S-Finite or S-Solve rules. If none of these specialized rules apply, it uses the S-Enumer rule to synthesize subterms.

_Abstract Domains_. To guide the search, users need to implement an abstract domain. Absynthe provides a base class--AbstractDomain--from which a programmer can inherit their own abstract domain implementations, as in Figure 2. The base classes come with machinery that gives a built-in implementation of \(\top\), \(\bot\), abstract variables \(\tilde{x}\), and supporting code for partial ordering between these abstract values. The user has to define how to construct abstract values for that domain (the initialize method in § 2) and the partial ordering relation \(\subseteq\) between two abstract _values_. The abstract variable narrowing (§ 3.1) is implemented as the \(\subseteq\) method in the AbstractDomain base class. Solver-aided domains (such as string length in § 5.1) construct solver terms when initializing an abstract value, or apply functions that compute abstract values (including \(\cup\) and \(\cap\)). These terms are checked for satisfiability of \(a_{1}\subseteq a_{2}\) in the solver when the \(\subseteq\) method is invoked, and any solved abstract variables are assigned to their holes. If the solver proves the solver term unsatisfiable, the candidate is eliminated. The S-Finite rule is applied for domains with finite abstract values, and S-Solve is used for domains whose values can be inferred using an SMT solver, yielding top-down goal-directed synthesis. In case neither can be applied, Absynthe falls back to using the S-Enumer rule, which is equivalent to bottom-up term enumeration. We plan to explore a more ergonomic API for the Absynthe framework in future work. Absynthe also provides a ProductDomain class to automatically derive product domains by combining any user-defined domains as needed. The \(\subseteq\) method on ProductDomain returns the conjunction of the respective \(\subseteq\) on the individual domains it is composed of.

_Abstract Interpreters_. Each abstract domain needs a definition of abstract semantics, inherited from the AbstractInterp class provided by Absynthe (as shown in § 2).
All subclasses override the interpret method, which takes as arguments the abstract environment and the AST of the term that is being evaluated. In practice, it is implemented as evaluating subterms recursively, and then applying the abstract transformer function written in a subset of Ruby (similar to \(\mathcal{L}_{meta}\) in § 3.1) to evaluate the program in the abstract domain. A sound interpreter for ProductDomain is derived automatically, by composing the interpreters of its base domains. More specifically, it evaluates the term under the individual base domains and then combines the results pointwise into a product.

_Concrete Tests_. Any synthesized term without holes that satisfies the abstract specification is tested by Absynthe in a reference interpreter against concrete test cases. Absynthe expects the programmer to define a test_prog method that calls the reference interpreter with the synthesized source program (as a string in the source language), and returns a boolean to indicate if the tests passed. The reference interpreter runs the test case, which in many cases boils down to checking the program against the provided input/output examples. If the program passes all test cases, it is considered the correct solution. If the program fails a test, it is discarded.

_Optimizations_. In practice, Absynthe uses a min-heap to store a work list of candidates ordered by their size. This eliminates the reorder step (Algorithm 1, line 15), saving an average cost of \(\mathcal{O}(n\log n)\) at each synthesis loop iteration. Additionally, we found certain common subterms occur frequently in the same program, _e.g._, computing the index of the first space in a string in a SyGuS program. Absynthe caches small terms (containing up to one function application) that do not have any holes to save the cost of synthesizing these small fragments. Whenever a hole with a compatible abstract value is found, these fragments are substituted directly without doing the repetitive work of synthesizing the function application from scratch again (similar to subterm reuse in DryadSynth [20]). Finally, Absynthe tests a set of predicates against the given input/output examples to guess a partial program, instead of starting from just a \(\square\) term. For example, Absynthe has a predicate that checks if the output is contained in the input, in which case the output is a substring of the input. For the SyGuS language, if the predicate (str.contains output input) tests true, then the partial program is inferred to be (str.substr input \(\square\)). This reduces the problem complexity by cutting down the search space. Another predicate (str.suffixof output input) tests if the input ends with the output; if so, Absynthe infers the partial program (str.substr input (str.len input)), _i.e._, the program is possibly a substring of the input from some index to the end. We evaluate the performance impact of the latter two optimizations in § 5 (the _No Temp_ column in Table 1).

_Limitations_. While Absynthe is a versatile tool for defining custom abstract domains and combining them with testing in a reference interpreter, the approach does have some limitations. First, Absynthe only works with forward evaluation rules over the abstract domain, in contrast to FlashMeta [19], which requires "inverse semantics", _i.e._, rules that, given a target abstraction, compute the arguments to the abstract transformer. While specifying only the forward semantics eases the specification burden for users, it requires more compute time to synthesize subterms such as arguments to functions.
Second, while we found the product domain useful for combining separate domains, these domains remain independent through synthesis, unlike predicates, where all defined semantics can be considered at the same time. We plan to explore methods to make product domains more expressive in future work. Third, problems where one can define full formal semantics are a better fit for solver-aided synthesis tools such as Rosette [20] or SemGuS [21]. We share performance benchmarks on SyGuS strings (which have good solver-aided tools) to give some evidence for this in our evaluation (§ 5.1). Notably, solver-aided tools can jointly reason about subterms. In contrast, when using solver-aided domains, Absynthe concretizes some of the subterms, which requires enumeration through a larger number of terms. Finally, Absynthe falls back on term enumeration when abstract domains do not provide any more guidance, often leading to combinatorial explosion for larger terms.

## 5. Evaluation

We evaluate Absynthe by targeting it at a variety of domains, to verify it can synthesize different workloads. The primary motivation is to evaluate the general applicability of abstract interpretation-guided synthesis to diverse problems rather than being a state-of-the-art tool at a single synthesis benchmark suite. The questions we aim to answer in our evaluation are:

* How well does Absynthe work for problems traditionally targeted using solver-based strategies, using the SyGuS strings benchmark (Alur et al., 2017) (§ 5.1)? We also discuss the performance impact of optimizations and program exploration behavior in Absynthe.
* Can Absynthe be adapted to an unrelated problem (not handled by any tools that solve SyGuS benchmarks) where it is difficult to write precise formal semantics? We test this by using Absynthe to synthesize Python programs that use the Pandas library from the AutoPandas (Bavishi et al., 2019) benchmark suite (§ 5.2).

### SyGuS Strings

_Benchmarks._ To test that Absynthe is a viable approach on synthesis problems that have been well explored in prior work, we target it at the SyGuS strings benchmark suite (Alur et al., 2017). We believe strings form a good baseline to compare Absynthe with other synthesis approaches that rely on enumerative search (Alur et al., 2017), SMT solvers (Reynolds and Tinelli, 2017), and abstract methods directed by solvers (Wang et al., 2017) (discussed in detail in § 6). In contrast, Absynthe uses only abstract domains with their forward transformers to guide the search. We _do not expect_ Absynthe to outperform these past tools; rather, we evaluate whether it can solve most of the benchmarks at the lower cost of defining lightweight abstract domains and partial semantics upfront. SyGuS strings has 22 benchmarks with 4 variants of each--standard (baseline set of input/output examples), small (fewer examples than standard), long (more examples than standard), and long-repeat (more examples than long, with repeated examples). As our approach depends only on the abstract specification and testing, not on the number of examples, we show detailed results for the standard version of these benchmarks. These results generalize to all variants of each benchmark. As we aim to evaluate how abstraction-guided search performs, we exclude any programs containing branches. Previous work like RbSyn (Guria et al., 2021) and EuSolver (Alur et al., 2017) has used test cases that cover different paths through a program to do more efficient synthesis of branching programs.
These techniques can be adapted to a system like Absynthe with minor effort. Absynthe parses the SyGuS specification files directly to prepare the synthesis goal and load the target language. As SyGuS does not come with an official concrete interpreter for programs, we provide one written in Ruby that is compliant with the SyGuS specifications (Raghothaman and Udupa, 2014). Absynthe uses this interpreter as a black box and does not receive any feedback other than whether the generated SyGuS programs satisfied the input/output examples.

_Abstract Domains._ We defined the following abstract domains and their semantics to run the benchmark suite:

1. **String Length.** A solver-aided domain to lift strings to their lengths, while lifting integers and booleans without transformation. This means the concretization of the abstract value 5 includes the number 5 and the set of all strings of length 5, whereas the boolean abstract values true and false represent the identical concrete values.
2. **String Prefix.** A domain to represent the set of strings that begin with a common prefix. For example, an abstract value with string "fo" is wider than an abstract value with string "foo", as the former denotes all strings starting with "fo" and the latter includes a subset of that, _i.e.,_ strings starting with "foo". The \(\subseteq\) operation checks if the prefix of one string starts with the prefix of the other.
3. **String Suffix.** A domain to represent the set of strings that end with a common suffix, similar to the string prefix domain. The \(\subseteq\) operation checks if the suffix of one string ends with the suffix of the other.

These domains were created by looking at the input/output examples in the synthesis specs, and encoding the simplest partial semantics that guides the reasoning. For example, a few problems have programs that start with or end with a string constant. This is how we designed the string prefix and suffix domains, respectively. On the other hand, many problems produce strings of fixed lengths, or the length of the output string is a function of the length of the input string. The string length domain expresses semantic constraints of this kind. As the string length domain is solver-aided, it can handle symbolic constraints over abstract variables, e.g., the string length of a str.substr operation is j - i, where i and j are the start and end index respectively. Although the string length domain does not preserve type information, SyGuS is a typed language (type soundness is enforced by the grammar), so all programs in the language are type-correct by construction. Consequently, we did not need to write a type system as an abstract domain. Finally, we give abstract specifications in the selected abstract domains where required. Specifically, we run each benchmark without an abstract annotation, _i.e._, with a specification equivalent to \(\top\rightarrow\top\), which results in naive enumeration combined with abstract interpretation. If a benchmark times out, then we add an abstract annotation, such as \(\top\rightarrow\) "Dr. " for the dr-name example (Table 1). This specification means Absynthe should find a function that, given any input string (\(\top\)), computes only strings starting with "Dr. ".

_Results._ Table 1 shows the results of running the SyGuS strings benchmarks through Absynthe with the discussed domains.
The numbers are reported as a median of 11 runs on a 2016 Macbook Pro with a 2.7GHz Intel Core i7 processor and 16 GB RAM. All experiments had a timeout of 600 seconds. In Table 1, the _Benchmark_ column is the name of the problem, and _# Ex_ shows the number of input/output examples. _Time_ shows the median running time of the benchmark along with the semi-interquartile range over 11 runs. The _Size_ and _Ht_ columns give the size of the synthesized program as the count of the AST nodes in the SyGuS language and the height of the synthesized program AST, respectively. The _# Tested_ column lists the number of programs that were tested in the concrete interpreter before a solution was found. An abstraction that works well reduces this number compared to a worse abstraction or naive enumeration. The _Domains_ column lists the domains used for synthesizing the program. These domains were provided as a specification in the abstract domain. \(\top\) denotes that an abstract specification was provided as a product of \(\top\) values in all individual domains for input and output, resulting in just term enumeration. The rows that mention a specific domain were provided abstract specs only from that domain, resulting in guidance from the provided specification. The _# Elim_ column lists the number of partial programs (each denoting a family of concrete programs) that were eliminated by running the abstract interpreter with the provided specification during the search. For the problems which used the \(\top\) domain, the abstract interpreter did not eliminate any partial programs, as the specification admits all programs. Any row with - denotes a timeout of the benchmark under these abstract specifications. Most benchmarks are solved within \(\sim\)7 seconds, the exceptions being _name-combine-3_, _phone-6_, and _phone-7_, which take longer. In general, a larger program takes much longer to synthesize, due to the combinatorial increase in the number of terms being searched through as the AST size increases. For example, larger programs with the same AST height take longer to synthesize due to a higher number of function arguments. The number of examples does not impact the time for synthesis, as most time is spent in abstract interpretation and term generation. Testing a candidate on the examples takes minimal time. Absynthe performs reasonably well, solving around the same number of benchmarks as EuSolver [17]. We selected EuSolver as it is based on an enumerative search method like Absynthe. The timeout of 600 seconds only applies to our Absynthe evaluation, whereas EuSolver was evaluated with a timeout of 3600 seconds. Absynthe solves around 77% of the benchmarks despite being written in Ruby, one of the slower languages. We suspect additional performance gains can be had by writing the tool in a performant language that compiles to native code. We plan to explore this in future work. Additionally, Absynthe does not have the problem of overfitting, because the search algorithm does not use the input/output examples. It merely uses them as test cases, and since they do not influence term enumeration, they do not cause overfitting with respect to the examples.

_Domain-specific synthesis costs._ Another key advantage of the Absynthe approach is that you only pay for what you use. The time for synthesis depends on the semantics of the abstract domain. The string prefix and suffix domains are implemented in pure Ruby and do not incur the cost of invoking a solver, so these still guide the search cheaply.
However, the string length domain, being a solver-backed domain, requires a call to Z3 for every \(\subseteq\) check. So it gives more precise pruning, while taking a longer time for synthesis. Comparing the average time to generate all the concrete programs explored gives evidence for this. For example, consider _phone-6_, which explores 5937 candidates in 100.54 seconds (16.93ms average) with the string length domain, whereas _name-combine-3_ explores 117370 candidates in 47.86 seconds (0.41ms average) with the string suffix domain. Depending on how expensive a domain is, one can combine the domains to fit in a variety of synthesis time budgets.

| Benchmark | # Ex | Time (sec) | Size | Ht | # Tested | Domains | # Elim | No Cache | No Temp |
|---|---|---|---|---|---|---|---|---|---|
| bikes | 6 | 1.70 ± 0.02 | 7 | 4 | 4808 | ⊤ | 0 | 2.55 | 35.05 |
| dr-name | 4 | 1.54 ± 0.02 | 11 | 4 | 4797 | Prefix | 46610 | 139.53 | 2.92 |
| firstname | 4 | 0.03 ± 0.00 | 7 | 3 | 4 | ⊤ | 0 | 0.63 | 0.18 |
| initials | - | - | - | - | - | - | - | - | - |
| lastname | 4 | 0.02 ± 0.00 | 10 | 4 | 15 | ⊤ | 0 | 0.81 | 18.72 |
| name-combine | 6 | 0.21 ± 0.00 | 5 | 3 | 566 | ⊤ | 0 | 0.24 | 0.22 |
| name-combine-2 | 4 | 6.01 ± 0.06 | 9 | 4 | 9723 | Suffix | 48516 | 6.65 | 8.28 |
| name-combine-3 | 6 | 47.86 ± 0.23 | 9 | 5 | 117370 | Suffix | 124573 | 68.29 | 43.63 |
| name-combine-4 | - | - | - | - | - | - | - | - | - |
| phone | 6 | 0.03 ± 0.00 | 4 | 2 | 3 | ⊤ | 0 | 0.03 | 0.12 |
| phone-1 | 6 | 0.16 ± 0.00 | 6 | 3 | 1189 | ⊤ | 0 | 0.20 | 7.32 |
| phone-2 | 6 | 0.05 ± 0.01 | 7 | 3 | 41 | ⊤ | 0 | 0.04 | 63.82 |
| phone-3 | - | - | - | - | - | - | - | - | - |
| phone-4 | 6 | 0.05 ± 0.01 | 4 | 2 | 1577 | ⊤ | 0 | 0.05 | 0.14 |
| phone-5 | 7 | 0.03 ± 0.00 | 7 | 3 | 18 | ⊤ | 0 | 2.16 | 0.20 |
| phone-6 | 7 | 100.54 ± 0.51 | 14 | 4 | 5937 | Length | 12234 | - | 27.79 |
| phone-7 | 7 | 103.92 ± 0.37 | 14 | 4 | 54051 | Length | 12639 | - | - |
| phone-8 | 7 | 0.72 ± 0.00 | 10 | 4 | 217 | Length | 31 | 1.37 | - |
| phone-9 | - | - | - | - | - | - | - | - | - |
| phone-10 | - | - | - | - | - | - | - | - | - |
| reverse-name | 6 | 0.35 ± 0.00 | 5 | 3 | 593 | ⊤ | 0 | 0.41 | 0.42 |
| univ-1 | 6 | 6.69 ± 0.07 | 7 | 3 | 19683 | ⊤ | 0 | 8.08 | 7.73 |

Table 1: Results of running Absynthe on SyGuS strings benchmarks. _# Ex_ lists the number of I/O examples; _Time_ lists the median and semi-interquartile range for 11 runs; _Size_ and _Ht_ report the number of AST nodes and the height of the program AST respectively; _# Tested_ is the number of programs run in the concrete interpreter before a solution was found; _Domains_ lists the domains used to specify the abstract spec; and _# Elim_ lists the number of partial programs eliminated by the abstract interpreter during search. _No Cache_ and _No Temp_ measure the performance of Absynthe when the small expression cache and template inference (§ 4) are disabled, respectively.

_Impact of performance optimizations._ We explore the impact of the performance optimizations discussed in § 4. First, the performance of Absynthe on these benchmarks when the small expressions cache is disabled is reported in the _No Cache_ column. It is slower than the baseline across all benchmarks.
Notably, _phone-6_ and _phone-7_ reuse function application subterms. So without caching small expressions, these two benchmarks do repetitive work synthesizing the same expressions at different call sites, resulting in a timeout. Second, the _No Temp_ column reports the performance numbers of Absynthe when it is run on these benchmarks with template inference by testing predicates disabled. It is slower on most benchmarks than the baseline, and even causes timeouts on some (_phone-7_ and _phone-8_). The exceptions are _phone-6_ and _name-combine-3_, where the no-templates version is faster than the baseline. Recall that the inferred templates have holes that are tagged with a fresh abstract variable \(\tilde{x}\), resulting in the enumeration of more terms. In contrast, the candidate generation rules (S-) applied during the program search may synthesize holes with more precise abstractions, resulting in fewer terms being enumerated. We plan to explore mechanisms to infer template holes with more precise abstractions in future work. ### AutoPandas Benchmarks We want to test if the approach used by Absynthe, of guiding the search with lightweight abstract semantics combined with testing to ensure correctness, is general enough to be useful for another domain. For this purpose we use the AutoPandas (Bavishi et al., 2019) benchmark suite from its artifact 1 as a case study. The benchmarks are sourced from StackOverflow questions containing the dataframe tag. Each benchmark contains the input data frames, additional arguments, the expected data frame output, the list of Pandas API methods to be used in the program, and the number of method calls in the final program. Footnote 1: GitHub: [https://github.com/rbavishi/autopandas](https://github.com/rbavishi/autopandas) Bavishi et al. (2019) define _smart operators_ to generate candidates and train neural models from a graph-based encoding on synthetic data to rank generated candidates. For a baseline, they consider an enumerative search synthesis engine that naively enumerates all possible programs using the methods specified in the benchmark. This narrows down the search space to a permutation of 1, 2, or 3 method calls specified upfront, instead of a search over all supported Pandas API methods. In contrast, Absynthe works like enumerative search, but large classes of programs are eliminated by abstract interpretation of partial programs, or terms are constructed guided by the abstract semantics. Unlike SyGuS, all benchmarks in AutoPandas have only one input and output example. The synthesis goal is a multi-argument Python method that, given the specified input, produces the desired output. The evaluation of the AutoPandas benchmarks uses the same Absynthe core as the SyGuS evaluation. We wrote a test harness in Python that loads the AutoPandas benchmarks (written in Python) and communicates with the Absynthe core running as a child process. The Absynthe core is responsible for doing the enumerative search, while eliminating programs using abstract interpretation. Any concrete program generated by Absynthe is tested in the host Python interpreter. These operations are performed as inter-process communication over Unix pipes between the host Python harness process and the child Absynthe Ruby process. This allows the testing of generated programs in the host Python process, saving the overhead of launching a new Python process and importing Pandas packages (about 1-3 seconds) for every candidate.
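To illustrate the shape of this harness, here is a minimal Python sketch of the candidate-testing loop. The executable name, command-line flags, and line-based protocol are assumptions made for the sketch, not details taken from the Absynthe implementation.

```python
# Minimal sketch of the Python-side harness (hypothetical names and protocol).
import subprocess
import pandas as pd

def run_harness(input_df: pd.DataFrame, expected: pd.DataFrame) -> str:
    # Launch the Ruby synthesis core once; candidates then stream over the pipe,
    # avoiding a fresh Python/Pandas start-up for every candidate.
    core = subprocess.Popen(["absynthe-core", "--task", "autopandas"],  # hypothetical CLI
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    for line in core.stdout:                     # one candidate program per line
        candidate = line.strip()
        try:
            # Evaluate the candidate expression against the example in-process.
            result = eval(candidate, {"pd": pd, "df": input_df})
            ok = isinstance(result, pd.DataFrame) and result.equals(expected)
        except Exception:
            ok = False                           # crashing candidates simply fail the test
        core.stdin.write("ok\n" if ok else "fail\n")
        core.stdin.flush()
        if ok:
            return candidate                     # input/output example satisfied
    return ""                                    # core exhausted its search
```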
If the input/output examples are satisfied the synthesis problem is solved; otherwise control is returned back to Absynthe, which searches and sends the next candidate for testing. #### Abstract Domains The abstract domains used for the AutoPandas benchmarks are: 1. **Types.** A domain to represent the data type of the computed values (Figure 2c). 2. **Columns.** A domain to represent dataframes as a set of their column labels (Figure 2a). Our Python harness infers the data types and the column labels from the input/output examples, and the Absynthe core constructs the abstract domain values from the PyType and ColNames domains respectively. These individual domains are combined pointwise using the product domain PyType \(\times\) ColNames, and Absynthe soundly applies the individual abstract semantics to compute values in the same product domain. The types domain in Absynthe is a wrapper around types from RDL (Foster et al., 2020), a type system for Ruby. Absynthe uses RDL as a library to build the PyType class (the ty field holds an RDL type as shown in Figure 2c). This allows us to reuse prior work that defines nominal types, generic types, finite hash types, singleton types, and their subtyping relations. We define the semantics of these RDL types for the Python language in an abstract interpreter PyTypeInterp to handle features such as standard method arguments, optional keyword arguments, and singleton types as arguments (like int). We define the concretization function \(\mu\) over these types: for example, nominal types can be concretized to all constants of the correct type from the set of constants, and singleton types are concretized to the singleton value itself. The semantics of the type domains are defined in terms of the PyType wrapper that calls into the relevant RDL methods. The example implementation of these domains in § 2 is a simplified version of these domains. In practice, the AutoPandas benchmarks have input/output examples that are not just data frames, but also integers, Python lambdas, and method references (such as nunique from the Pandas library). Absynthe is soundly able to abstract these into the relevant domains. For types, integers become Integer and lambdas are inferred as a type Lambda. When these values are lifted to the columns domain, they are represented as \(\bot\) as these are not data frames, thus there is no way to soundly represent their column labels. Additionally, Absynthe infers a set of constants from the input/output examples. It adds any string or numeric row and column labels of the data frames, in addition to any string or numeric standalone values passed as arguments. This set is used to synthesize the constants during the application of the S-Val rules. _Results._ Table 2 shows the results of running the AutoPandas benchmarks through Absynthe. The numbers are collected on a 2016 Macbook Pro with a 2.7GHz Intel Core i7 processor and 16 GB RAM, with a timeout of 20 minutes (consistent with the timeout of Bavishi et al. (2019)). The _Name_ column shows the name of the benchmark, _i.e._, the StackOverflow question ID from which the problem is taken. The _Depth_ column shows the length of the method call chain in the final solution. The AutoPandas benchmarks are tuned to synthesize programs with a chain of method calls, where the bulk of the time is spent in synthesizing arguments to these method calls. This is characteristic of the Pandas API, which accepts many arguments, often optional keyword arguments.
The _Time_ column shows the median of 11 runs along with the semi-interquartile range, where - denotes that a benchmark timed out. The _Size_ column lists the synthesized program size as the number of AST nodes. Note that this number is affected by both the depth of the synthesized program (the number of method calls) and the number of arguments to those methods. _# Tested_ lists the number of concrete programs generated by Absynthe that were tested in the Python interpreter. Finally, _AP Neural_ and _AP Baseline_ show which benchmarks were solved by the AutoPandas neural model and by naive enumeration, to aid in comparison with Absynthe. Two benchmarks, _SO_12860421_ and _SO_49567723_, are marked with a \({}^{*}\) as these were found in the AutoPandas artifact but were not reported in the paper. Absynthe solves 17 programs, the same number of programs as the AutoPandas neural model. However, the sets of programs synthesized by the two tools are different, with a significant overlap. Benchmarks listed in Table 2 without any highlight are the benchmarks that were synthesized by both tools. Benchmarks highlighted in blue were synthesized only by Absynthe but not by AutoPandas. Likewise, benchmarks highlighted in yellow are the benchmarks synthesized only by AutoPandas but not by Absynthe. The time taken to synthesize the programs is largely dictated by how well the abstract semantics prunes the space of programs, hence it is proportional to the number of concrete programs generated and tested. The fact that, for the same program size, the number of AST nodes in the method arguments (the difference between size and depth) is indicative of solving time shows that synthesizing arguments is indeed the bottleneck of this benchmark suite. \begin{table} \begin{tabular}{|c|c||c|c|c||c|c|} \hline Name & Depth & Time (sec) & Size & \# Tested & AP Neural & AP Baseline \\ \hline SO\_11881165 & 1 & 0.20 \(\pm\) 0.00 & 6 & 40 & ✓ & ✓ \\ SO\_11941492 & 1 & 13.84 \(\pm\) 0.04 & 5 & 2507 & ✓ & ✓ \\ SO\_13647222 & 1 & - & & ✓ & ✓ & ✓ \\ SO\_18172851 & 1 & 0.42 \(\pm\) 0.00 & 3 & 70 & & \\ SO\_49583055 & 1 & 3.77 \(\pm\) 0.01 & 6 & 272 & & \\ SO\_49592930 & 1 & 0.22 \(\pm\) 0.00 & 3 & 21 & ✓ & ✓ \\ SO\_49572546 & 1 & 1.50 \(\pm\) 0.01 & 3 & 548 & ✓ & ✓ \\ SO\_12860421 & 1 & 686.50 \(\pm\) 1.68 & 11 & 1537521 & & \\ SO\_13261175 & 1 & 283.12 \(\pm\) 0.39 & 11 & 237755 & ✓ & \\ SO\_13793321 & 1 & 5.70 \(\pm\) 0.04 & 6 & 413 & ✓ & ✓ \\ SO\_14085517 & 1 & 216.14 \(\pm\) 0.38 & 7 & 12844 & ✓ & ✓ \\ SO\_11418192 & 2 & 0.10 \(\pm\) 0.00 & 5 & 11 & ✓ & ✓ \\ SO\_49567723 & 2 & - & & & ✓ & \\ SO\_49987108\({}^{*}\) & 2 & - & & & & \\ SO\_13261691 & 2 & 65.17 \(\pm\) 0.17 & 3 & 22322 & ✓ & ✓ \\ SO\_13659881 & 2 & 0.21 \(\pm\) 0.00 & 6 & 45 & ✓ & ✓ \\ SO\_13807758 & 2 & 54.92 \(\pm\) 0.26 & 6 & 3144 & ✓ & ✓ \\ SO\_34365578 & 2 & - & & & & \\ SO\_10982266 & 3 & - & & & & \\ SO\_11811392 & 3 & 6.88 \(\pm\) 0.03 & 4 & 921 & & \\ SO\_49581206 & 3 & - & & & & \\ SO\_12065885 & 3 & 0.24 \(\pm\) 0.00 & 6 & 286 & ✓ & ✓ \\ SO\_13576164 & 3 & - & & & ✓ & \\ SO\_14023037 & 3 & - & & & & \\ SO\_53762029 & 3 & 545.62 \(\pm\) 0.91 & 9 & 229233 & ✓ & ✓ \\ SO\_21982987 & 3 & - & & & ✓ & ✓ \\ SO\_39656670 & 3 & - & & & & \\ SO\_23321300 & 3 & - & & & & \\ \hline \end{tabular} \end{table} Table 2: Results of running AutoPandas benchmarks through Absynthe.
The _Depth_ column gives the longest chain of method calls in the synthesized solution; _Time_ lists the median and semi-interquartile range of 11 runs for the time taken to synthesize a program; _Size_ lists the number of AST nodes in the synthesized solution; _# Tested_ reports the number of concrete Python programs tested; _AP Neural_ and _AP Baseline_ indicate which benchmarks the AutoPandas neural model and naive enumeration could synthesize. The benchmarks denoted with a \({}^{*}\) were a part of the artifact, but not reported in the paper [Bavishi et al. 2019]. Benchmarks highlighted in blue and yellow show the benchmarks only synthesized by Absynthe and AutoPandas respectively. For example, in _SO_11811392_ and _SO_12065885_ the type system quickly narrows down the search space, and the solutions use API methods that have 0 or 1 arguments only, making argument synthesis quick. _Discussion._ Absynthe solves a harder synthesis problem because it does not use the list of methods to be used as provided in the specification. Instead, Absynthe uses the complete set of 30 supported Pandas API methods for every benchmark. Approximately, this gives us a choice of permutations of size 1, 2, or 3 (depending on the depth of the final solution) from 30 methods, without considering the arguments to those methods. In contrast, the baseline enumerative search comparison (_AP Baseline_) limits the search to only the Pandas API methods that will be used in the final solution. Typically this limits the search space to 1, 2, or 3 methods as given in the specification. In other words, under naive enumeration, Absynthe explores a strictly larger set of programs than the AutoPandas baseline. In the benchmarks where Absynthe failed to synthesize a solution, it falls back to term enumeration as the abstract domain was not precise enough. More specifically, in the benchmarks with depth 3, Absynthe could do better by jointly reasoning about values using relational abstractions between multiple arguments of the same method. We plan to explore support for relational abstractions in future work. The neural model trained by Bavishi et al. (2019) is good at guessing the sequences that are likely to solve the synthesis task. It, however, does not take into account the semantics of the program, and thus cannot eliminate impossible programs from being considered. This shows up in _SO_18172851_ and _SO_49583055_ where both enumerative search and neural models failed, but Absynthe succeeds. Moreover, any updates to the neural model would need to be addressed with a new encoding or a retraining of the model on new data, a potentially resource-consuming process. However, exploring the synergy of guidance from abstract interpretation combined with neural models similar to Anderson et al. (2020) to rank _sound_ program candidate choices is interesting future work. ## 6. Related Work _General Purpose Synthesis Tools._ SemGuS (Kim et al., 2021) has the same motivation as Absynthe to develop a general-purpose abstraction-guided synthesis framework. However, SemGuS requires the programmer to provide semantics in a relational format as constrained horn clauses (CHCs). While CHCs are expressive and have dedicated solvers (Komuravelli et al., 2016), correctly defining semantics as relations is prohibitively time-consuming and error-prone. Moreover, SemGuS performs well in proving unrealizability of synthesis problems, but it has limited success in synthesizing solutions.
In contrast, Absynthe is a dedicated synthesizer that is geared towards synthesizing programs based on executable abstract semantics. Absynthe can be thought of as an unrealizability prover when the coarse-grained semantics that is the focus of Absynthe is sufficient to prove a problem unrealizable. SemGuS also supports under-approximate semantics, which is interesting future work in the context of Absynthe. Rosette (Torlak and Bodik, 2014) and Sketch (Solar-Lezama, 2013) are solver-aided languages that use bounded verification with an SMT solver to synthesize programs written in a DSL. In contrast, Absynthe relies on abstract interpretation to guide search, so it can reason about unbounded program properties. There has been parallel work in synthesis using Christiansen grammars (Ortega et al., 2007), which allow one to encode some program semantics as context-dependent properties directly in the syntax grammar. However, an abstract interpreter-based approach gives Absynthe more semantic reasoning capabilities (like polymorphism). _Domain-specific synthesis._ SyGuS (Alur et al., 2013), being a standard synthesis problem specification format, has seen a variety of solver approaches. CVC4 (Reynolds and Tinelli, 2017) is a general-purpose SMT solver that has support for synthesizing programs in the SyGuS format. CVC4 has complete support for the theory of strings and linear integer arithmetic, so it performs better than Absynthe (which is guided by simple abstract domains) for SyGuS. However, Absynthe's strength is generalizability to other kinds of synthesis problems, as demonstrated in the synthesis of the AutoPandas benchmarks (§ 5.2). DryadSynth [Huang et al. 2020] explores reconciling deductive and enumerative synthesis in SyGuS problems limited to the conditional linear integer arithmetic background theory. Some of their findings have been adopted by Absynthe (§ 4). EuSolver [Alur et al. 2017b] is an enumerative solver that takes a divide-and-conquer approach. It synthesizes individual programs that are correct on a subset of examples, along with predicates that distinguish these programs, and combines them into a single final solution. Absynthe is close to EuSolver, as it is also based on enumerative search, but it is additionally guided by abstract semantics. We plan to support synthesizing conditionals in future work. Past work solves synthesis problems using domain-specific abstractions such as types and examples [Frankle et al. 2016; Osera and Zdancewic 2015], over-approximate semantics on table operations [Feng et al. 2017], refinement types [Polikarpova et al. 2016], secure declassification [Guria et al. 2022], abstract domains to verify atomic sections of a program [Vechev et al. 2010], and SQL equivalence relations [Wang et al. 2017a]. These abstractions can be designed as a domain, and an abstract evaluation semantics can be provided to Absynthe for synthesizing such programs. However, Absynthe, being a general-purpose synthesis tool, will not have domain-specific optimizations. We plan to explore Absynthe as a platform for deploying domain-specific synthesis in future work. _Abstraction-guided Synthesis._ Simpl [So and Oh 2017] combines enumerative search with static analysis based pruning, which is similar to Absynthe. However, the program search in Absynthe can be parameterized by a user-provided abstract interpreter, allowing the user to write specifications and semantics in a domain fit for the task at hand.
Additionally, Absynthe can infer abstract values for the holes in partial programs, thus guiding the search using the abstract semantics (Figure 6). Blaze [Wang et al. 2017b] is very similar to Absynthe as it uses abstract semantics to guide the search. It adapts _counterexample guided abstraction refinement_ to synthesis problems by refining the abstraction when a test fails, constructing a proof of incorrectness in the process. However, it starts with a universe of predicates that is used for abstraction refinement, a requirement Absynthe doesn't place on users. FlashMeta [Polozov and Gulwani 2015] is similar, but requires the definition of "inverse" semantics for operators using _witness functions_. Absynthe, however, requires only the definition of forward abstract semantics and attempts to derive the inverse semantics automatically where possible. _Learning-based approaches._ There has been a recent rise of learning-based approaches to make program synthesis more tractable. AutoPandas [Bavishi et al. 2019] is an example of applying neural models to rank candidate choices constructed by other program generation methods (_smart operators_ in AutoPandas' case). DeepCoder [Balog et al. 2017] trains a deep neural network to predict properties of programs based on input/output examples. These properties are used to augment the search by an enumerative search or SMT solvers. Absynthe is complementary to these approaches and does not use machine learning. In the future, we plan to explore extensions to Absynthe that reorder the program search using a model learned on program text _and_ abstract semantics. EuPhony [Lee et al. 2018], on the other hand, uses an approach inspired by transfer learning to learn a _probabilistic higher order grammar_, and uses that in enumerative search to synthesize solutions. Probe [Barke et al. 2020] learns a probabilistic grammar _just-in-time_ during synthesis. Their key insight is that many SyGuS programs that pass a few examples have parts of the syntax that have a higher likelihood of being present in the final solution. In contrast, Absynthe is complementary to the approach of learning probabilistic grammars; abstract domains can prune the space of programs, while the grammar can assign higher weights to the terms that should be enumerated earlier. We leave exploring the synergy between these approaches to future work. ## 7. Conclusion We presented Absynthe, a tool that combines abstract interpretation and testing to synthesize programs. It accepts user-defined lightweight abstract domains and partial semantics for the language as input, and enables guided search over the space of programs in the language. We evaluated Absynthe on the SyGuS strings benchmarks and found that Absynthe can solve 77% of the benchmarks, most within 7 seconds. Moreover, Absynthe supports a pay-as-you-go model, where the user only pays for the abstract domains they are using for synthesis. Finally, to evaluate the generality of Absynthe to other domains, we use it to synthesize Pandas data frame manipulation programs in Python from the AutoPandas benchmark suite. Absynthe performs on par with AutoPandas and synthesizes programs with a low specification burden and no neural network training costs. We believe Absynthe demonstrates a promising design choice for synthesis tools that leverage testing for correctness along with lightweight abstractions and partial semantics for search guidance.
## Data Availability Statement The latest version of the tool Absynthe is publicly available on GitHub 2. A snapshot of Absynthe, along with the source code, the benchmarks used in the paper, and supporting scripts and instructions to reproduce our results in § 5, is available as a Docker image artifact (Guria et al., 2023). Footnote 2: [https://github.com/ngsankha/absynthe](https://github.com/ngsankha/absynthe) ###### Acknowledgements. Thanks to the anonymous reviewers for their helpful comments. This research was supported in part by National Science Foundation awards #1900563 and #1846350.
2308.10417
The Change You Want to See (Now in 3D)
The goal of this paper is to detect what has changed, if anything, between two "in the wild" images of the same 3D scene acquired from different camera positions and at different temporal instances. The open-set nature of this problem, occlusions/dis-occlusions due to the shift in viewpoint, and the lack of suitable training datasets, presents substantial challenges in devising a solution. To address this problem, we contribute a change detection model that is trained entirely on synthetic data and is class-agnostic, yet it is performant out-of-the-box on real world images without requiring fine-tuning. Our solution entails a "register and difference" approach that leverages self-supervised frozen embeddings and feature differences, which allows the model to generalise to a wide variety of scenes and domains. The model is able to operate directly on two RGB images, without requiring access to ground truth camera intrinsics, extrinsics, depth maps, point clouds, or additional before-after images. Finally, we collect and release a new evaluation dataset consisting of real-world image pairs with human-annotated differences and demonstrate the efficacy of our method. The code, datasets and pre-trained model can be found at: https://github.com/ragavsachdeva/CYWS-3D
Ragav Sachdeva, Andrew Zisserman
2023-08-21T01:59:45Z
http://arxiv.org/abs/2308.10417v2
# The Change You Want to See (Now in 3D) ###### Abstract The goal of this paper is to detect what has changed, if anything, between two "in the wild" images of the same 3D scene acquired from different camera positions and at different temporal instances. The open-set nature of this problem, occlusions/dis-occlusions due to the shift in viewpoint, and the lack of suitable training datasets, presents substantial challenges in devising a solution. To address this problem, we contribute a change detection model that is trained entirely on **synthetic data** and is **class-agnostic**, yet it is **performant out-of-the-box on real world images** without requiring fine-tuning. Our solution entails a "register and difference" approach that leverages self-supervised frozen embeddings and feature differences, which allows the model to generalise to a wide variety of scenes and domains. The model is able to **operate directly on two RGB images**, without requiring access to ground truth camera intrinsics, extrinsics, depth maps, point clouds, or additional before-after images. Finally, we collect and **release a new evaluation dataset** consisting of real-world image pairs with human-annotated differences and demonstrate the efficacy of our method. The code, datasets and pre-trained model can be found at: [https://github.com/ragavsachdeva/CYWS-3D](https://github.com/ragavsachdeva/CYWS-3D) ## 1 Introduction From the way leaves rustle in the wind to the shifting patterns of clouds in the sky, our world is in a constant state of flux. Yet detecting and localising changes in complex 3D scenes remains a challenging task for computer vision. Imagine being able to identify the changes between two images of the same scene captured at separate moments in time and from different viewpoints, as shown in Figure 1. This is the challenge we aim to address in this work. With applications in fields such as robotics, facility monitoring, forensics and augmented reality, our work has the potential to unlock new possibilities for understanding and interacting with our dynamic world. We formulate the problem we study as follows: Given a pair of 2D images of a 3D scene, captured with a significant shift in camera position and at different temporal instances, we wish to localise the changes between them, if any. In particular, we wish to capture _everything_ that is physically different **in the regions that are visible in both the images** while disregarding areas that appear or disappear from view due to the shift in camera pose or occlusion. This includes objects that may have been added or removed from the scene, and text or decorations that may have been added to an object, but we wish to ignore photometric differences such as a lighting change. Under this setting, differentiating true changes from occlusions or dis-occlusions is intrinsically ill-posed using 2D images alone. In other words, answering the question "Is this object missing in the other image, hidden behind another object, or simply out of frame?" fundamentally requires the 3D shape of the scene, which is not directly available to us apriori. Furthermore, it is not possible to compute the shape of the given 3D scene using two-view stereo methods as they rely on corresponding points in the two images which are inherently non-existent in case of changes such as missing objects.
Consequently, in the absence of the scene's ground truth geometry, it is theoretically infeasible to reason about the relative position, scale and shape of the objects in the scene, and how or where they might appear when observed from a different viewpoint (see Figure 1). This, coupled with the lack of large-scale real-world datasets that include image pairs of changing scenes captured from significantly different viewpoints, makes devising a solution to this general two-view change detection problem very challenging. Nevertheless, our objective is to detect changes in "in the wild" real world images with minimal constraints and operate on RGB images only, without assuming access to ground truth geometry, depth, camera parameters, camera poses etc. Our solution is to build on the insight that once the two views are _registered_, it is relatively easy to determine what has changed. We thus proceed in two stages: (1) _register_ by warping the spatial features from one image to the other, and this fundamentally needs to be in 3D; (2) _determine the differences_ by training a detector on top of these registered spatial feature maps in order to identify the significant changes, and ignore "nuisance" variables such as changes in lighting. Briefly, we first use an off-the-shelf pre-trained visual backbone to extract spatial feature maps for an RGB image pair. We then lift these 2D feature maps to 3D by making use of state-of-the-art monocular depth estimation models, followed by a differentiable feature registration module (DFRM) to align and render the features maps back to 2D in the other view. Finally, we utilise a simple detection head to process these features and output the changes. To overcome the issue of lack of real-world training data, we train our model exclusively on synthetic data with controlled 3D changes. In order to permit sim2real, we keep the visual backbone frozen throughout training, and decode _difference_ of registered features. Since the notion of change necessitates a pair of corresponding regions with some differences, our formulation requires model predictions in _both_ the images. For instance, if a car is present in one image but missing in the other, we expect the model to put bounding boxes around both - where the car is, and where the car "should have been". Furthermore, since the changes in "in the wild" images are customarily **open-set**, our model is designed to be **class-agnostic**. We demonstrate that a model trained in this fashion zero-shot generalises to a wide variety of datasets including 2D scenes, 3D scenes, synthetic and real-world images. In the following we provide an overview of the existing literature (Sec. 2), details of the proposed method (Sec. 3), experiments and results (Sec. 4), and finally some concluding remarks (Sec. 5). We will release the code, datasets and trained model. ## 2 Related Work The problem of exploring visual changes has been studied in several different flavours previously. In this section we loosely group these works into two categories, 2D and 3D, and describe how our problem setting relates/differs from them. **2D (-ish):** A typical scenario for the change detection problem is one where we have a pair of before-after RGB images, where the camera is either fixed i.e. the images are related by an identity transformation (e.g. images from surveillance cameras), there is a planar-scene assumption (e.g. bird's eye view or satellite images), or in the general case there is limited shift in viewpoint (e.g. 
street scenes looking at distant objects/buildings), and the model is expected to identify the changes between them. Most of the existing works in the change detection literature belong to this category. [18, 17, 21, 12] tackle the change captioning problem where the goal is to describe the changes in an image pair in natural language. These methods mainly evaluate their approach on the STD [10] (images from a fixed video surveillance camera), or CLEVR-based change datasets [18, 21, 12] (synthetic images of 3D objects of primitive shapes). Since the problem these works address is change captioning, their approaches do not deal with precise localisation of changed regions. [27, 1, 28, 14] tackle the change localisation problem, specifically for street-view images where the goal is to produce segmentation masks for changed regions. These methods mainly evaluate their approach on TSUNAMI [27] (panoramic, street-view images), VL-CMU [1] (images of urban scenes with macroscopic changes) and PCSD [28] (panoramic, street-view images). While these datasets do not technically have the planar-scene assumption or a fixed camera, the scenes are of distant buildings and there is no "peeking behind" objects due to a shift in camera pose. Since pixel-wise change annotation is expensive, these datasets often suffer from _non-comprehensively labelled test sets, limited to a set of classes_. Recently, [26] proposed a _class-agnostic_ method that tackles the change detection problem for arbitrary images that are related by a homography transformation, as a bounding-box based detection problem. They evaluate their approach on STD [16], Kubric-Change [26] (3D objects resting on a 2D plane, camera is not fixed but the images are captured from bird's eye view) and others. Similar to the methods above, we also tackle the change detection problem in a pair of RGB images. However, unlike previous methods our setting involves **general two-view images of 3D scenes** (we do not assume a fixed camera or planar scenes, and in our case there is a significant shift in camera pose) and we particularly focus on making our model work on **open-set**, real-world images. The problem formulation closest to ours is that of [26] in that [26] also train a class-agnostic model that produces bounding boxes around changed regions in both the images, except they train and evaluate on 2D scenes. **3D:** The change detection problem has also been studied explicitly for 3D data (e.g. multiview before-after images, 3D scans etc.). [19] tackle the change captioning problem in a 3D setting by assuming multi-view images are available both before and after the change. [20] further propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data. Recently, [22] proposed a task to explicitly localise changes in the form of 3D bounding boxes from two point clouds and describe detailed scene changes _for a fixed set of object classes_. Unlike these works, we only assume access to a single-view image for both before and after scenes and **do not operate on explicit 3D data** like point clouds. Since RGB images are more readily available, it allows our method to be easily applied to real-world scenes. The closest setting to ours is that of [5], who tackle the change detection problem for general two-view images, except their model is trained and tested on the same synthetic setting with 4 fixed classes1. Footnote 1: Not available publicly at the time of writing.
## 3 Method ### Overview Given an image pair of a 3D scene, captured with a significant shift in camera viewpoints, our goal is to localise the changes between them in the form of bounding box predictions for _each_ image. In particular, we only wish to capture changes in the regions that are visible in both the images while disregarding areas that appear or disappear from view due to the shift in camera pose. Our approach, which is overviewed in Figure 2, begins by extracting dense spatial image descriptors from each image using a pre-trained transformer-based visual backbone, which are then processed to obtain feature maps at multiple spatial resolutions using a U-Net [24] style encoder. Next, in order to reason between changes and occlusions/dis-occlusions, we need 3D information. Inspired by [31], our DFRM "lifts" the features to 3D, and differentiably _registers_ and renders them. After obtaining registered features, we compute their _difference_ in order to identify what has changed. Finally, we decode the difference of registered features using a U-Net style decoder, followed by a bounding box prediction head. We next describe the architecture, which is illustrated in Figure 3. ### Architecture Backbone:Given two images \(I^{1}\in\mathbb{R}^{3\times H\times W}\) and \(I^{2}\in\mathbb{R}^{3\times H\times W}\), we first encode \(I^{1},I^{2}\) using a pre-trained frozen visual backbone, represented by \(\Phi_{B}(\cdot)\), to obtain two sets of feature descriptors per image represented by \(f^{1}_{s},f^{1}_{d}=\Phi_{B}(I_{1})\) and \(f^{2}_{s},f^{2}_{d}=\Phi_{B}(I_{2})\). In practise, \(\Phi_{B}(\cdot)\) is the ViT-B/8 [6] architecture, where \(f_{s}\) and \(f_{d}\) represent shallow and deep features. Figure 2: **DFRM:** Given two images, we first obtain corresponding points and depth maps which are used to estimate the 3D transformation \(T_{r\to q}\). Then given a feature map for each image along with the depth map, we create a point cloud of \(c\)-dimensional feature vectors, warp it using the estimated transformation and render it to 2D grid. Notice the black regions in the rendered grid. These regions are dis-occluded and contain \(c\)-dimensional \(\mathbf{0}\) vectors. We obtain a soft visibility mask to counteract the effect of dis-occluded regions when computing the difference. **U-Net Encoder:** We process \(f_{d}^{1},f_{d}^{2}\) using a U-Net style encoder, represented by \(\Phi_{E}(\cdot)\), to obtain \(g_{n}^{1}=\Phi_{E}(f_{d}^{1})_{n}\) and \(g_{n}^{2}=\Phi_{E}(f_{d}^{2})_{n}\) after each downsampling block \(n\), resulting in a set of multi-resolution feature maps \(G^{1}=\{f_{d}^{1},g_{1}^{1},g_{2}^{1},g_{3}^{1},g_{4}^{1}\}\) and \(G^{2}=\{f_{d}^{2},g_{1}^{2},g_{2}^{2},g_{3}^{2},g_{4}^{2}\}\), for image \(I^{1}\) and \(I^{2}\) respectively. **Feature Registration and Difference:** Given \(G^{1},G^{2}\), we use a differentiable feature registration module (described in Sec. 3.3) to obtain a _warp_ of feature maps at each spatial resolution such that the original features of one image are registered with the warped features of the other image. We then compute their element-wise difference and mask out occluded/dis-occluded regions. 
Specifically, for feature maps at spatial resolution \(i\), we obtain the feature map \(H_{i}^{1}=v_{2\to 1}(G_{i}^{1}-\tau_{2\to 1}(G_{i}^{2}))\) and \(H_{i}^{2}=v_{1\to 2}(G_{i}^{2}-\tau_{1\to 2}(G_{i}^{1}))\), for \(I^{1},I^{2}\) respectively, where \(\tau_{r\to q}\) represents the 3D feature warp operator from image \(I^{r}\) to \(I^{q}\), and \(v_{r\to q}\) represents the _soft_ visibility mask. **U-Net Decoder:** Following this, we decode the set of feature maps \(H^{1}\) and \(H^{2}\) using a U-Net decoder modulated with scSE blocks [25], represented by \(\Phi_{D}(\cdot)\), to produce feature maps \(k^{1}\) and \(k^{2}\) respectively. **Bbox Head:** Finally, feature maps \([f_{s}^{1}\parallel k^{1}]\) and \([f_{s}^{2}\parallel k^{2}]\), where \([\parallel]\) is the concatenation operation (along the channel dimension), are fed into a CenterNet head [32], which minimises the detection loss function as described in [32], to produce bounding boxes around changed regions in both the images. The motivation for concatenating shallow features \(f_{s}\) with \(k\) is that it serves as the final skip connection before the prediction head, which is consistent with the typical U-Net style model, and additionally shallow features are known to capture more _positional_ information, as opposed to deep features which capture more _semantic_ information [2], and therefore can help with localisation. Figure 3: **Architecture:** Given two images, we first extract dense spatial feature maps using a pre-trained visual backbone. Following this, a CNN-based encoder is used to extract visual descriptors at multiple spatial resolutions. We then use a differentiable feature registration module (DFRM) to warp features from one image to another (such that they are registered), and take their difference (only one resolution is shown for the purpose of visualisation, however this operation is performed at multiple resolutions). Finally, these difference feature maps are processed by a CNN-based decoder followed by a bounding box detection head. For brevity, we only show the prediction for one of the images, however, the pipeline is symmetric for the other. Please see Sec. 3 for more details. ### Differentiable Feature Registration Module (DFRM) Given images \(I^{r},I^{q}\in\mathbb{R}^{3\times H\times W}\), along with their feature maps \(f^{r},f^{q}\in\mathbb{R}^{c\times h\times w}\) respectively, we use the following three-step process, represented by \(\tau_{r\to q}(\cdot)\), to warp \(f^{r}\) such that \(f^{q}\) and \(\tau_{r\to q}(f^{r})\) are registered. See Figure 2 to visually conceptualise all the moving parts. **Step 1: Estimate a 3D linear transformation.** In order to register the two feature maps, we must estimate a 3D transformation between the two images. To do so, first we obtain a set of \(n\) corresponding points \(P^{r},P^{q}\in\mathbb{R}^{n\times 2}\) (in normalised coordinates) in images \(I^{r},I^{q}\) respectively, using a correspondence extractor represented by \(\mathcal{C}(\cdot)\). Following this, we estimate the depth maps \(D^{r},D^{q}\) using a monocular depth estimator represented by \(\mathcal{D}(\cdot)\). Finally, to estimate the transformation, we first back-project each point \((x_{j},y_{j})\in P^{r},P^{q}\) as,
\[\hat{P_{j}}=\begin{bmatrix}d_{j}x_{j}&d_{j}y_{j}&d_{j}&1\end{bmatrix}^{T} \tag{1}\] where \(d_{j}\) is the depth value at point \(P_{j}\), resulting in two sparse 3D point clouds \(\hat{P^{r}},\hat{P^{q}}\in\mathbb{R}^{n\times 4}\) in homogenous coordinates. Following this, we estimate a transformation matrix \(T_{r\to q}\in\mathbb{R}^{4\times 4}\), such that \(T_{r\to q}\hat{P^{r}_{j}}\approx\hat{P^{q}_{j}}\), using the following closed-form solution: \[T_{r\to q}=\left(\hat{P^{r}}^{+}\hat{P^{q}}\right)^{T} \tag{2}\] where \(\hat{P^{r}}^{+}\in\mathbb{R}^{4\times n}\) is the Moore-Penrose inverse of \(\hat{P^{r}}\). **Step 2: Lift the features to 3D and warp.** For each \(c\)-dimensional feature vector in \(f^{r}\in\mathbb{R}^{c\times h\times w}\), we project its normalised 2D grid coordinates \((x_{j},y_{j})\) to a point (\(x_{j}^{\prime}/k,\;y_{j}^{\prime}/k,\;z_{j}^{\prime}/k\) ) in 3D, where \[\begin{bmatrix}x_{j}^{\prime}&y_{j}^{\prime}&z_{j}^{\prime}&k\end{bmatrix}^{ T}=T_{r\to q}\begin{bmatrix}d_{j}x_{j}&d_{j}y_{j}&d_{j}&1\end{bmatrix}^{T} \tag{3}\] where \(d_{j}\) is the estimated depth value of this point, resulting in a 3D point cloud of \(f^{r}\) feature vectors that are aligned with \(f^{q}\). **Step 3: Differentiable feature rendering.** Given the 3D point cloud, we render it to the 2D grid using a differentiable renderer \(\mathcal{R}(\cdot)\). However, instead of rendering RGB colours, we render \(c\)-dimensional feature vectors for each point. For this, we employ a differentiable point cloud renderer, as in [31]. The advantages of this differentiable point cloud renderer are two fold: (1) it solves the "small neighbourhood" problem, wherein each feature-point projects to only one or a few pixels in the rendered view, by splatting the points to a disk of controllable size, and (2) the "hard z-buffer" problem, wherein each rendered pixel is only affected by the nearest point in the z-buffer, by accumulating the effects of \(K\) nearest points. Both of these allow for better gradient propagation during training. In addition to obtaining the rendered features, we also obtain a visibility mask \(v_{r\to q}\) to deal with occluded/dis-occluded regions (see Figure 2). This is obtained by setting the 2D coordinates of the rendered points to \(1\) on a grid initialised with \(0\)s. However, due to the splatting behaviour of the renderer, the visibility mask obtained is _soft_ and not binary. **Alternate registration strategies:** Since DFRM is training-parameter free, during inference \(\tau_{r\to q}(\cdot)\) may utilise alternate registration strategies. For instance, if ground truth depth is known for each image, it can directly replace \(D^{r},D^{q}\) obtained using the monocular depth estimator \(\mathcal{D}(\cdot)\). In addition to ground truth depth, if the camera intrinsics and extrinsics are known, points can be directly back-projected and warped using relative camera poses without needing to estimate the transformation \(T_{r\to q}\). Furthermore, if only \(D^{r}\) is available, Perspective-n-Point methods [9] can be used to warp features \(f^{r}\) onto \(f^{q}\). On the other hand, if it is known apriori that the images are of planar scenes, depth is not needed and each 2D grid coordinate can be warped using \(T_{r\to q}\in\mathbb{R}^{3\times 3}\) which can either be supplied or estimated using standard homography estimation methods. If the scene consists of multiple planes (e.g. 
floor and wall), it may also be possible to obtain desired results using a multi-grid [15] or multi-plane homography estimation [9] to register the images. ### Details and discussion The ViT-B/8 [6] backbone \(\phi_{B}(\cdot)\) is initialised with DINO model weights [4]. DINO features are known to encode powerful high-level semantic information at fine spatial granularity, and recent works have shown that the scalar product of DINO features can be used to compute high quality semantic correspondences [2]. Orthogonally to computing correspondences, we utilise these powerful DINO features to compute changes. To prevent the model from overfitting to synthetic data and corrupting the quality of DINO features, we keep the backbone frozen during training. In order to allow sim2real, our decoder only ever operates on the _difference_ of features, which is negatively proportional to the scalar product, i.e. the higher the scalar product the smaller the difference, and therefore we only consider the similarity of features (or the lack thereof) rather than the features themselves (whether synthetic or real). Following [2], we increase the resolution of model features by changing the stride of the patch extraction (from 8 to 4) and adjusting the positional encoding appropriately via interpolation, allowing us to operate on more granular spatial features. The shallow features and deep features \(f_{s},f_{d}\) are extracted from the _keys_ of the Multiheaded Self-Attention layers of the third and the last block respectively, an insight also from [2]. For the correspondence extractor \(\mathcal{C}(\cdot)\), we experimented with [11, 30] but found SuperGlue [29] to work the best at generating high quality dense correspondences and at high inference speed. After extracting 2D correspondences from SuperGlue, we filter out outliers using RANSAC. Finally, for the monocular depth estimator \(\mathcal{D}(\cdot)\), we tried [23, 7] but found the recently released ZoeDepth [3] to consistently produce better results. ## 4 Experiments This section describes the data we used to train our model, evaluation benchmarks and baselines, and various implementation details. Please see Table 1 for an overview of the datasets used and Figure 4 for some example images. ### Training datasets Due to the lack of existing large-scale real-world datasets for the change detection problem, as formulated in this work, we fall back to training our model entirely on synthetic data. Specifically, we train our model jointly on the following two datasets: **KC-3D:** Similar to [26], we make use of the Kubric dataset generator [8] to curate 86407 image pairs (of which 4548 image pairs are for validation) of 3D scenes with controlled changes. The scenes consist of a randomly selected set of 3D objects spawned at random locations on a randomly textured plane, where we iteratively remove the objects and capture "before" and "after" image pairs. However, unlike [26] where they capture birds-eye view images only, we capture images from a wide variety of camera poses inside a cylindrical space around the objects. In addition, we also capture the depth maps, camera intrinsics and extrinsics for each image to supply to DFRM during training. Figure 4: **Qualitative results:** We show the bounding box predictions in yellow (solid) of our model on all the test sets, along with the ground truth in blue (dashed).
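As an aside, the closed-form alignment of Sec. 3.3 (eqs. 1-3) is compact enough to sketch numerically. The snippet below is only an illustration under simplifying assumptions: it fits \(T_{r\to q}\) from already-matched points and per-point depths (which in the full pipeline come from SuperGlue and a monocular depth estimator), and the function names are ours rather than from the released code.

```python
# Numerical sketch of eqs. (1)-(3) using numpy only; illustrative, not the released code.
import numpy as np

def backproject(pts, depths):
    # pts: (n, 2) normalised (x, y); depths: (n,) -> homogeneous (n, 4), as in eq. (1)
    d = depths[:, None]
    return np.concatenate([d * pts, d, np.ones_like(d)], axis=1)

def estimate_transform(pts_r, d_r, pts_q, d_q):
    # Least-squares fit of T such that T @ P_r_j is close to P_q_j, the closed form of eq. (2)
    P_r, P_q = backproject(pts_r, d_r), backproject(pts_q, d_q)   # (n, 4) each
    return (np.linalg.pinv(P_r) @ P_q).T                          # (4, 4)

def warp(pts, depths, T):
    # Eq. (3): lift, transform, then de-homogenise by the fourth component k
    P = backproject(pts, depths) @ T.T
    return P[:, :3] / P[:, 3:4]

# Tiny self-check with a synthetic rigid translation
rng = np.random.default_rng(0)
pts_r, d_r = rng.uniform(-1, 1, (50, 2)), rng.uniform(1, 5, 50)
T_true = np.eye(4); T_true[:3, 3] = [0.1, -0.2, 0.3]
P_q = backproject(pts_r, d_r) @ T_true.T
pts_q, d_q = P_q[:, :2] / P_q[:, 2:3], P_q[:, 2]
assert np.allclose(estimate_transform(pts_r, d_r, pts_q, d_q), T_true, atol=1e-6)
```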
**COCO-Inpainted:** While KC-3D captures the underlying challenges of our task in terms of viewpoint shift quite well, it lacks diversity in terms of kinds and sizes of objects. Therefore, we additionally utilise the recently introduced COCO-Inpainted dataset [26] for training. While this is a 2D dataset, where the image pairs are perturbed by an affine transformation, it helps the model learn to predict changes of various kinds and sizes. ### Testing datasets To test the performance of our model, we evaluate it on the following test sets. **KC-3D:** Following the same pipeline as before, we curate an additional 4548 image pairs of 3D scenes from Kubric for testing purposes (making the total number of image pairs in KC-3D 90955). **RC-3D:** To quantify our model's capacity to generalise to real-world images, we manually collect and label a small-scale test set consisting of 100 image pairs, capturing a diverse set of common objects found in everyday places like offices, kitchens, lounges, etc. The images were captured using a handheld Apple iPad Pro (4th Gen) and Apple iPhone 14 Pro, which come with a built-in LiDAR giving us aligned RGB-D images. **Cyws Test Sets:** Since the problem we are tackling subsumes the change detection problem in planar scenes, we also evaluate our model on 2D test datasets proposed in [26], namely COCO-Inpainted, VIRAT-STD, Kubric-Change, and Synthtext-Change. ### Baseline and metrics To the best of our knowledge, no prior works have tackled the change detection problem in a _general two-view and class-agnostic_ setting like ours, which makes it difficult to directly compare our work with prior art. Nevertheless, we use cyws [26] as a baseline as their formulation is the same as ours, except restricted to 2D transformations. To allow for a fair comparison, in addition to reporting the results using their open-sourced model, we also finetune their pretrained model on the same training dataset as ours for \(100\) epochs (best model is picked using lowest loss on val set). We use the average precision metric to report our results, similarly to [26]. ### Training details We trained the model on 4\(\times\) A40 GPUs for 50 epochs using the DDP strategy with a batch size of 16, where the best model is chosen using a validation set (COCO-Inpainted val set as in [26] + a KC-3D val set). Images were augmented with CropAndResize, HorizontalFlips and ColourJittering, and resized to \(224\times 224\) due to the DINO-ViT [4] requirements. Since our training data is entirely synthetic, during training we use ground truth data for \(\tau_{r\to q}(\cdot)\) rather than estimating it. The overall objective was optimised using Adam [13] with a learning rate of 0.0001 and weight decay of 0.0005. ### Results We evaluate our model, which is trained on the synthetic data described in Sec. 4.1, on a diverse set of testing datasets as described in Sec. 4.2 with no further training/finetuning. Table 2 contains the quantitative results in terms of average precision, while we show some qualitative predictions of our model in Figure 4 and some failure cases in Figure 5. **3D scenes** (KC-3D, RC-3D): From Table 2 it is evident that our model produces impressive results on both synthetic and "in the wild" real-world images. Particularly in the case of RC-3D, the model produces almost 4\(\times\) better results than the baseline and performs well even in the most challenging setting when only RGB image pairs are available as input.
Despite only having been trained on synthetic data, the remarkable performance of the model on real-world images validates the design choice of only using feature differences and not features directly (unlike cyws, where their co-attention module concatenates the cross-attended features). On the other hand, we observe a surprising result from the cyws model, which is able to produce impressive results on the KC-3D dataset when finetuned on it. It is likely that the cyws model is "over-fitting" to the Kubric setting given that the set of objects and scenes in Kubric [8] are limited (different scenes just have different random combinations from the same set of objects and backgrounds). Despite this fact, it is still interesting that the model is able to reason about changes despite not being 3D aware. Nevertheless, this performance does not generalise to real-world images as observed by its poor results on RC-3D. **2D/fixed-camera scenes** (cyws test sets): In the 2D setting, we found that our model is often comparable but not strictly better than cyws. In particular, we observed that our model particularly struggles with detecting really small changes in comparison to cyws, which is likely due to the fact that cyws operates at a higher input resolution than us (\(256\times 256\) vs \(224\times 224\)) and that the number of trainable parameters in our model is 31.5M, which is much less than the 49.5M in cyws. Furthermore, we found that a lot of the annotations (for COCO-Inpainted small and VIRAT-STD) are extremely small (a handful of pixels, almost indiscernible to the human eye). This becomes problematic for our model when the input resolution is reduced to \(224\times 224\). In addition, the ground-truth annotations for VIRAT-STD are noisy (both false positives and missing annotations, see Figure 4 where our model predicts valid changes that are missing from ground truth). ### Limitations Despite the remarkable ability of our model to localise changes in real-world general two-view images, it suffers from a few limitations. A potential concern is the large size of the model. While the number of trainable parameters is 31.5M (which is less than the 49.5M in cyws), accounting for the frozen DINO-ViT backbone [4] with 85.8M parameters, our total model size is roughly 117M parameters. This makes it much slower to train and infer from than cyws. However, it must be acknowledged that this large backbone comes with the added benefit of making our model generalisable to real-world images even though it has only been trained on synthetic data. Another potential cause of concern is that it relies on a good estimated registration, which in turn relies on reliable correspondences and depth, which may not always be available. On the other hand, it also means that as better correspondence extractors and monocular depth estimators become available, our model's results can improve at no additional cost. ## 5 Conclusion In the ever-changing landscape of our world, the task of detecting changes in a 3D scene is daunting for both humans and machines alike. In this work we take a step closer towards solving this problem by automatically detecting changes in real-world images captured from significantly different viewpoints.
Due to the lack of large-scale real-world training datasets for this problem, we propose a model that is trained entirely on synthetic data but can generalise to real-world scenes by leveraging recent advances in self-supervised feature learning and monocular depth estimation. Following previous works, we largely focused on detecting missing objects as they are easy to acquire and precise. However, we note that our model should not be confused with a "(missing) object detector". While it is trained on object-centric datasets, our model can zero-shot detect all sorts of open-set changes. **Acknowledgements:** We would like to thank Jaesung Huh and Robert McCraith for their assistance with data collection, and Luke Melas-Kyriazi and Jaesung Huh for proofreading the paper. This research is supported by EPSRC Programme Grant VisualAI EP/T028572/1 and a Royal Society Research Professorship RP\(\backslash\)R1\(\backslash\)191132. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline & test dataset & \multicolumn{4}{c|}{COCO-Inpainted} & \multicolumn{2}{c|}{VIRAT-STD} & \multicolumn{2}{c|}{Synthtext-Change} & \multicolumn{2}{c|}{Kubric-Change} & \multicolumn{2}{c|}{KC-3D} & \multicolumn{2}{c}{RC-3D} \\ \hline & depth & \multicolumn{4}{c|}{Const.} & \multicolumn{2}{c|}{Const.} & \multicolumn{2}{c|}{Const.} & \multicolumn{2}{c|}{Const.} & \multicolumn{2}{c|}{GT} & \multicolumn{2}{c|}{Est.} \\ \hline & registration & \multicolumn{2}{c|}{GT} & GT & \multicolumn{2}{c|}{GT} & Est. & Id. & Id. & Est. & GT & Est. & Est. \\ \hline method & training data & \multicolumn{4}{c|}{(Const. = Constant, Id. = identity, Est. = Estimated, GT = Ground Truth)} & \\ \hline cyws & \multicolumn{4}{c|}{coco-inpainted} & **0.46** & **0.79** & **0.85** & **0.63** & **0.65** & **0.89** & 0.76 & 0.13 & 0.12 \\ cyws & \multicolumn{4}{c|}{coco-inpainted} & 0.41 & 0.73 & 0.78 & 0.57 & 0.54 & 0.87 & 0.76 & **0.87** & 0.14 \\ ours & \multicolumn{4}{c|}{coco-inpainted} & 0.34 & 0.69 & 0.76 & 0.52 & 0.51 & 0.46 & 0.85 & **0.84** & 0.14 & 0.10 & 0.35 & 0.27 \\ ours & \multicolumn{4}{c|}{KC-3D} & 0 & 0.03 & 0.06 & 0.02 & 0.02 & 0.01 & 0.01 & 0.23 & 0.83 & 0.69 & 0.19 & 0.19 \\ ours & \multicolumn{4}{c|}{coco-inpainted} + KC-3D & 0.36 & 0.72 & 0.77 & 0.53 & 0.52 & 0.49 & 0.84 & **0.84** & 0.82 & 0.68 & **0.50** & 0.41 \\ \hline \hline \end{tabular} \end{table} Table 2: **Results:** We report the AP of cyws [26] and our model on test sets described in Sec 4.2.
2306.06901
Engaging Engineering Teams Through Moral Imagination: A Bottom-Up Approach for Responsible Innovation and Ethical Culture Change in Technology Companies
We propose a "Moral Imagination" methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 50 workshops with teams across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some of the distinctive benefits of our methodology for the technology sector in particular.
Benjamin Lange, Geoff Keeling, Amanda McCroskery, Ben Zevenbergen, Sandra Blascovich, Kyle Pedersen, Alison Lentz, Blaise Aguera y Arcas
2023-06-12T07:11:21Z
http://arxiv.org/abs/2306.06901v3
Engaging Google Teams Through Moral Imagination: A Bottom-Up Approach for Responsible Innovation and Ethical Culture Change in Technology Companies ###### Abstract We propose a 'Moral Imagination' methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 50 workshops with teams from across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some distinctive benefits of our methodology for the technology sector in particular. **Keywords:** Ethical Culture, Responsible Innovation, AI Ethics, Ethics in Technology, Moral Imagination, Culture Change Management, Ethical Awareness, Ethical Deliberation, Ethical-Decision-Making, Bottom-Up Methodology, Practitioner Perspective 1 Footnote 1: Winner (1980), Idhe (1990), Feenberg (1990), see also Weinstein et al. (2021). ## 1 Introduction The norms and values of technology teams shape which technologies are produced and how.1 But these norms and values are rarely made explicit and subjected to critical appraisal, leading to limited ethical reflection and potentially reinforcing biases in technological development. In response, there are growing calls to change the culture that shapes the production of technologies. These include calls for greater governance such as government regulation and industry self-regulation by internal ethics review committees2 and governing principles.3 Also included are calls for engineer education in computer science curricula and industry training modules,4 alongside technical best practice developments such as technical approaches to value alignment5 and algorithmic auditing.6
These approaches, whilst critical for promoting socially and ethically responsible technological development, have under-addressed a central cultural dynamic of technology production, namely, the group forums where critical decisions about product and research direction are made day-to-day. In the technology industry, and in particular at Google, these decisions are often negotiated within engineering, product, and research teams, where largely autonomous, entrepreneurially-driven groups mutually decide which problems are addressed through technology and how, before elevating recommendations to managers and executives. In this paper, we describe a "Moral Imagination" methodology to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in technology organizations.7 This approach aims to prompt a role obligation shift among teams that makes the consideration of the moral implications of their work an inherent part of their self-conception and day-to-day decision-making. As practitioners, we have developed and tested this capability building approach over the past two years at Google through over 50 workshops involving a range of research and product teams. Footnote 7: Some terminological clarifications. First, we use the term “technologists” to cover a wide spectrum of roles involved in tech development, including but not limited to software engineers, data scientists, research scientists, product managers, engineering leads, UX researchers, UX designers, or managers. We use the terms ‘technologist teams’ and ‘research and product teams’ interchangeably and inclusively. Second, throughout the discussion, we refer to the concept of “ethical culture” understood, roughly, as “the shared values, beliefs, norms, policies, procedures, systems, and artifacts that shape the behaviors of members of an organization and support ethical conduct.” We consider ethical culture as the encompassing construct within which initiatives such as responsible innovation or value-sensitive design that specifically aim at embedding the values or interests of broader stakeholders in technology research and development can fall. Our primary aim in this paper is to make the conceptual case for our approach, and not to present a study on the efficacy of the proposed methodology, which we are pursuing in other work. Neither do we intend to suggest that our approach is superior compared to other initiatives such as traditional ethics and compliance controls, review boards, or ethics committees. Rather, we see Moral Imagination as an important complementary effort that can serve as one part of a portfolio approach to responsible innovation at technology companies. The paper is structured as follows. Section 2 details the specific challenge that we are concerned with: equipping teams with the skills to responsibly navigate the increasing social and ethical requirements in developing their technology and products. We argue that a comprehensive culture of responsible innovation at tech companies requires interventions that work to adjust tech team norms. Against this backdrop, section 3 then develops our Moral Imagination approach. We suggest that there are three key capabilities that should be fulfilled in order to enable robust ethical culture change among teams within technology companies: Ethical Awareness, Ethical Deliberation and Decision-making, and Ethical Commitment. We then show how our framework enhances these capabilities. Section 4 concludes. 
## 2 The Need for Norm Shift ### Overview Our argumentative strategy in this section is to show that - given a plausible assumption about the responsibility of technology companies in general - a crucial lever for enabling responsible innovation is currently not fully realized, and to then argue in the subsequent section that our approach can fill this gap. More specifically, our argument consists of the following four key claims: 1. **Tech's Practical Responsibility Requirement:** In light of the policy vacuum in which technology is developed, technology companies have a practical responsibility to consider how to produce technologies that are sensitive to the ethical and sociotechnical contexts in which those technologies will be deployed. 2. **Importance of Shaping Norms:** Shaping technologists' team culture and prevalent norms is a key element in responding to _Tech's Practical Responsibility Requirement_. 3. **Gap in Current Measures:** Typically employed hard and soft controls are necessary but insufficient to fully shape predominant team norms in the design and development stages. 4. **New Opportunity for Intervention:** Therefore, there is an opportunity to devise new interventions that can meet the specific requirements distinctive technologists team culture poses and complement existing measures. ### Elaboration of Argument 1. **Tech's Practical Responsibility Requirement:** In light of the policy vacuum in which technology is developed, technology companies have a practical responsibility to consider how to produce technologies that are sensitive to the ethical and sociotechnical contexts in which those technologies will be deployed. We take this assumption as given. Our claim here draws on what has been called the "pacing problem" for technology. The pace of technological innovation exceeds that of regulation.8 As a result, existing laws and policies do not necessarily provide guidance to technical companies about how to align their work with the needs and interests of society - what James Moor (1985) has also referred to as a "policy vacuum".9 Footnote 8: Garrett et al. (2020). Footnote 9: Moor (1985). The pacing problem presents two _ethical gray areas_. That is, ambiguities about how to proceed ethically when developing technology given that rules, laws, and policies do not straightforwardly provide guidance. The first gray area concerns the absence of appropriate regulation for new technology, in the sense that legislation is not in place to set relevant boundaries for technological systems. The second gray area concerns the _interpretation_ of existing laws. This means that when regulation does exist, it has likely been designed to be appropriately broad to address foreseeable new technology. By necessity, this level of generality requires that technologists need to be able to interpret it meaningfully for the relevant context of their own work or that its interpretation may be ambiguous in the context of novel and technological developments. Effectively navigating these ethical gray areas requires that tech companies understand and address the sociotechnical context in which their products will be deployed, in particular pre-empting various risks of harm that the technology might pose to affected stakeholders (customers, local communities, society at large, among others). 'Responsible Research and Innovation' as a concept, literature, and set of processes can be seen as a response to this practical reality. 
Methods such as anticipatory governance,10 technology assessment,11 upstream stakeholder engagement,12 and value-sensitive design,13 have arisen with the goal of embedding stakeholder values into the design of technology accordingly. Footnote 10: Guston (2014). Footnote 11: Schot and Rip (1997). Footnote 12: Wilsdon and Willis (2004). Footnote 13: Friedman (1996). 2. **Importance of Shaping Norms:** Shaping predominant technologists' team culture norms is a key element in responding to _Tech's Practical Responsibility Requirement_. (Figure 1: Pacing Problem in Technology.) We think that a key element in ensuring that technologists are adequately sensitive to the ethical and sociotechnical contexts of their deployed technology is changing the established informal rules and beliefs that govern the behavior of technologists' teams. There are several reasons for this, which all concern the structural and organizational features of technologists' teams, the deeply pervasive norms of engineering culture, and the agile nature of technology development. Firstly, large technology companies like Google are, from an organizational standpoint, best understood as networks of autonomous cross-functional teams, rather than as single, aggregate entities that act in accordance with unified sets of goals.14 These technology companies typically have 'distributed' as opposed to 'centralized' organizational structures that are organized around key research, products, and services. This means that teams typically possess a great deal of autonomy, and they often define, frame, and develop technology solutions before elevating recommendations to executives or senior managers. This mode of operation applies even in moments where technology companies undertake a concerted push toward a general goal. Footnote 14: See also Birhane et al. (2022). Secondly, there are deeply ingrained norms that govern technologist culture itself. Values that relate to technical systems, such as simplicity, efficiency, scalability, and elegance, tend to be the focus, without explicit reference to and acknowledgement of a wider set of ethical values that are also embedded in the work. Of course, there are good reasons for the cultural centrality of these values, including the ability to innovate rapidly and at scale. Technologists tend to be familiar with productive critique, optimization, and trade-offs for these types of values; however, the same know-how for negotiating value tensions does not apply straightforwardly to social and ethical values. In practice, when individuals begin to discuss a wider set of ethically significant considerations, we find that teams are less familiar with ways to facilitate discussion about them in a productive, meaningful, and actionable way. This is also not to deny that individual technologists care a great deal about the social impact of their work; in fact, we find that technologists often consider their work as part of a larger "life project of creating good in the world."15 However, existing norms are often not conducive to enabling the required awareness, information-seeking, deliberation, and follow-through on complex ethical issues. Footnote 15: Smith (2021). Thirdly, in the early stages of technology development, there are often multiple paths that teams can pursue. Plans are actively in development, and concepts for technology research and products change rapidly, moving from a state of ambiguity into concrete formation over a period of weeks or months. 
In this state, the implicit beliefs, attitudes and social norms of teams exert preeminent influence over what and how technologies are built. Here social norms can determine how deliberative conversations proceed and technologists' perceptions of which activities are necessary to shape a development process. For these reasons, the ethical gray areas underwrite the need for norm change in responding to tech's practical responsibility requirement. Since technologists inevitably develop technologies in domains where the application of existing policies and laws is ambiguous or not applicable, and because they possess a great deal of autonomy in designing technology, they must - as teams be able to rely on well-developed norms of recognizing ethical issues, identifying when more information is needed, being able to reason through the different moral considerations at stake in a situation, and then practically acting on these in a way that translates them into robust commitments. 3. **Gap in Current Measures:** Currently available hard and soft controls are necessary but insufficient to fully shape predominant team norms in the design and development stages. Like other companies, Google has a large arsenal of mechanisms at its disposal to support an ethical culture and drive responsible innovation within the unique operating context of the technology sector. Traditionally the ethical culture of companies can be distinguished along two dimensions: formal and informal elements.16 The first, so-called "hard controls", refer to the concrete and explicit plans, policies, and procedures within an organization. Of these formal systems, many attempt to influence company culture through intervention at an individual level. Ethics training programs are a good example of this, including the code of conduct, whistle-blowing and speak-up training, privacy and data security training, diversity, equity, and inclusion training, as well as training for responsible corporate citizenship among others.17 In addition to these measures, at Google there exist multiple review boards that operate at the project level, assessing research and products against Google' AI principles, security, and privacy standards, for example.18 The second element of ethical culture is informal. These so-called "soft controls" include the implicit, intangible elements, such as the values, expectations, beliefs, myths and assumptions that prevail in the organization that are not explicitly formalized through policies and processes. These informal elements also greatly matter in shaping ethical culture since the implicit norms and beliefs are key drivers of ethical conduct. Footnote 16: See Kaptein (2011). Footnote 17: See Trevino and Weaver (2003). Footnote 18: Google. These ethics and compliance mechanisms, education and training programmes, and review boards are central to fostering a culture of responsible innovation within technology companies and our aim is not to criticize them or question their importance. However, we think that such mechanisms leave a gap with regards to the sustained promotion of a strong ethical culture in which technologies are consistently created with appropriate ethical foresight. While others have highlighted some19 of the reasons for this apparent "principles to practices gap,"20 in our experience, a main reason for this gap is that existing hard controls do not sufficiently influence the organization at a technology team level. 
Technologists' team norms and culture can vary widely even within a company, and these have a substantial and direct impact on ethically significant technological design decisions. Footnote 19: Schiff et al. (2020). Footnote 20: Mittelstadt (2019). For example, while review boards, committees, and ethical commitments provide vital guardrails and checks on a product or research against critical ethical risks of harm, they mostly come into effect at later stages of development, and thus fail to encourage explicit reflection on the ethical costs and benefits of different design strategies at earlier stages of product development, including product ideation. Often, these stages require foundational help in developing a mindset and vocabulary for ethical analysis that enables teams to become aware of the concrete moral implications of their work, how design choices relate to trade-offs between important values and can have ethical implications for key stakeholders, how the team can then deliberate through these in an ethically sound manner, and how to take concrete action for the further development of their product. Doing so requires alignment with a collaborative and bottom-up approach in which teams build crucial ethical capabilities that are tailored to their specific issues and requirements at a fine-grained level. Ethics trainings that are offered at an individual level encounter a different set of limitations for influencing ethically relevant design decisions amongst technology teams. While individual education can influence individuals' beliefs, it does not guarantee that the individual can successfully convince others of those beliefs or influence the technology direction taken by a team. These require team solutioning and commitment. Individuals whose moral intuition has directed them to broach the topic of the ethical dimensions of their work are confronted with fears of analysis paralysis, lack of shared understanding of vocabulary and concepts, lack of confidence in moving through difficult conversations productively, and lack of understanding about how to integrate the moral dimensions into concrete technical design decisions. Indeed, entrenched norms often "keep opinions and behaviors in place even if individuals no longer privately support them, a phenomenon known as pluralistic ignorance."21 Footnote 21: Prentice and Paluck (2020). Footnote 22: For some key frameworks in the responsible innovation literature, see Owen et al. (2012), Stilgoe et al. (2013), and Van Oudheusden (2014). See Werhane (2008) and Werhane (1999) on moral imagination. The Fisher et al. (2006) "Midstream Modulation" approach is similar in spirit to our developed method here. 4. **New Opportunity for Intervention:** Therefore, there is an opportunity to devise new interventions that complement existing measures and can meet the specific requirements that distinctive technologist team culture poses. What is consequently lacking, in order to influence which and how technologies are built and to complement existing initiatives, are measures that directly address the culture and norms of how technology teams produce their work, in the context of that work. Formats must be flexible and able to adapt to the nature of a team's work, the various stages of their projects, and the idiosyncrasies of particular team cultures given embedded personalities and existing power dynamics. 
Addressing team norms directly in discussion about their work yields the opportunity to weaken existing norms and replace them with a new social contract that explicitly incorporates followthrough on team responsibilities in light of agreed-upon ethical commitments. ## 3 Moral Imagination In this section we propose a 'Moral Imagination' methodology that aims to promote a role-obligation shift among technologists by influencing the norms of behavior, rules, best practices, and beliefs of technologist culture at a team level. The methodology builds upon the Moral Imagination literature in business ethics, alongside ideas from the philosophy of technology and the responsible innovation literature.22 We first articulate what a Moral Imagination approach amounts to and its function in the context of technology companies (3.1). We then outline a framework that specifies three key ethical capabilities around which our approach is structured (3.2). Last, we propose a method for strengthening those capabilities based on our practitioner-experience of conducting more than 50 workshops with teams at Google (3.3). Footnote 22: Prentice and Paluck (2020). ### Moral Imagination for Technologists We define Moral Imagination as: **Moral Imagination:** The ability to i) register that one's perspective on a decision-making situation, including the available options and the normative factors relevant to adjudicating those options is limited; and to ii) creatively imagine alternative perspectives that reveal new approaches to that situation or new considerations that bear on the competing approaches. Crucial to this is "becoming aware of one's context, understanding the conceptual scheme or "script" dominating that context, and envisioning possible moral conflicts or dilemmas that might arise in that context or as outcomes of the dominating scheme."23 What developing Moral Imagination allows technologists to do is recognize the limitations of their pre-theoretic mental models about how their technology impacts the world, what the costs and benefits of that technology are, and what _their_ role is in ensuring responsible technological development. Footnote 23: Werhane (2008, p.3) Thus a central aim of our approach is to facilitate a _role obligation shift_ among technologists. It aims to shift teams' self-conception away from a mindset where ethical considerations are removed from perceived responsibilities - something that "falls outside of the job description" - toward a mindset where the consideration of the moral implications is an inherent part of the research and development process. It aims to prompt teams to realize what they do not yet understand about how their technologies impact users, and more broadly the sociotechnical dynamics and value tensions of their technologies. It further aims to empower teams to create a map for information gathering about the issues and topics the team didn't consider before.24 Footnote 24: For a detailed exploration and description of the various elements and content of the Moral Imagination workshop see Lange et al. (2023). ### Three Key Ethical Capabilities for Moral Imagination Our approach focuses on three ethical capabilities that we consider central to realizing Moral Imagination and to enhance these capabilities amongst teams to foster meaningful and productive norm change. 
What undergirds our focus on these capabilities is a conception of teams as moral group agents who have the ability to reach informed moral judgements through awareness and reasoning, act with intent, and to be held accountable for their own actions.25 These focal points also relate to our prior discussion insofar as the reality of autonomous technology teams operating in ethical gray areas requires an enablement approach that builds teams' ability to navigate complex ethical challenges and translate this into concrete actions and change along their product or research lifecycle. Footnote 25: See Rest (1986). **Ethical Awareness:** Ability to recognize normatively significant factors and implications (e.g. moral values, ethical risks of harms, constraints and rights violations) in situations, decisions, and other relevant choice scenarios. A precondition for robust ethical deliberation, decision-making, and commitment is to expand the team's perceptual paradigm beyond that of established technologists' norms, while also sensitizing the team to moral discourse.26 Developing an understanding of moral values, their normative force, action-guidingness, appropriate definitions of ethical terms for work-contexts, and how these relate to the technology and products that a team is developing are all crucial elements of this capability. In addition to shaping participants' understanding of moral values, ethical awareness also pertains to risks of harm to various stakeholder groups, especially in a sociotechnical and not just technical context. **Ethical Deliberation and Decision-Making:** Ability to engage in reasoning and deliberation in relevant choice scenarios, including tensions between value and other moral commitments, conflicts, moral dilemmas, and trade-offs. Once the team has a better understanding of the ethical dimensions of their work, alongside a grasp of key ethical vocabulary, teams can be introduced to conceptual tools which allow them to understand and negotiate situations in which the competing normative considerations come into conflict. This may encompass covering conceptual distinctions concerning 'pro-tanto' and 'all-things-considered', the gradeability of normative concepts and values, including different degrees to which conflicts can occur, the notion of weighing different moral factors that may relate to a choice situation for a team in a way that is ethically rigorous and robust, and the idea that which moral factors are apparent or significant may vary based on perspective. This point is relevant because the status quo of the technologists' culture is often primarily consequentialist and can accordingly be broadened by being introduced to different moral considerations besides outcome-oriented utility calculations. **Ethical Commitment:** Ability to derive and set concrete plans to guide further product development and/or research. Increased ethical awareness and decision-making capacities enables teams to navigate complex ethical challenges as part of their work. But building these capabilities will miss their mark if there is no commitment and accountability with respect to translating these insights into practical change. To that end, teams need to address what it means to act ethically and with integrity in their product or research context. This may mean deviating from widely accepted norms about the content, sequence, and pace of design, development, and release activities. 
What it means to operationalize ethical commitment varies depending on organizational structures. At Google, we co-develop with teams a set of actionable responsibility objectives that can inform Product Requirement Documents (PRD) and individual or team Objectives and Key Results (OKRs). ### Methodology In the previous subsection we discussed the key ethical capabilities that the Moral Imagination approach intends to influence to facilitate an ethical role obligation shift among technologist teams. In this subsection, we detail a practical four-step workshop method to strengthen these capabilities based on our experience of conducting Moral Imagination workshops with teams at Google. Our method expands upon existing responsible innovation frameworks such as those developed by Owen et al. (2012) and Stilgoe et al. (2013), as well as Fisher et al. (2006), by providing a tailored methodology for facilitating responsible innovation for product and research teams that engage in software development for at-scale technologies including artificial intelligence.27 Footnote 27: Here our aim is to sketch conceptually how the workshop format fosters the relevant capabilities. We elaborate on the practical details of the workshop in a separate practice-based piece of work. Our approach is operationalized through a series of workshops that are facilitated by a multidisciplinary team with academic backgrounds in ethics and/or practical experience with ethics in the technology industry. The workshop provides a structured engagement forum to assist teams typically at early stages of their work, for example, during the ideation, experimentation, prototyping, piloting, or re-imagining phases.28 Workshops are designed to draw attention toward the salient dimensions of technologists' work, and model and support how they can work through them together while building a shared capacity for ethical awareness, productive debate, solution finding, and planning. The workshops are specifically adapted and tailored to a team's progress and work: they are modular and involve content that is customized for relevance to the dilemmas teams face - though it is always centrally focused on the key ethical capabilities of Awareness, Deliberation and Decision-Making, and Commitment through Moral Imagination. Footnote 28: The benefits of engagement with innovation teams at an early to mid stage of their work has been outlined in Fisher et al. (2006). The workshops employ a non-didactic approach to ethics, in the sense that the aim is not to lecture participants about key moral principles and considerations. Nor is it to impose a particular ethical framework. Rather, our approach is to construct exercises that enable technologists to reframe their work through an ethical lens, and then re-envision their work and its corresponding responsible development process. In doing so, our goal is to align with the technology industry's culture of autonomy and entrepreneurship, while building momentum from many technologists' expressed desire to drive their innovations toward socially beneficial ends. At a high level, the Moral Imagination workshops involve the following four-step structure. 1. **(Reflection)** Externalization of a team's current moral intuitions, beliefs, and convictions about their work. Norms and beliefs about a team's work have to be made explicit to be challenged and altered. 
This first step therefore aims to surface and understand the particular ethical paradigm with which a team is operating in their day-to-day work by enabling teams to reflect on, articulate and clarify the values they feel are currently motivating or inherent in their work. Semi-structured discussions are used to surface the values that the team brings to bear in their work, including personal motivations, beliefs about the technological benefits, and envisioned characteristics of a world where the technology has been successfully deployed and is ubiquitous, aiming to formulate a positive vision for their technology. Building upon these discussions, facilitators introduce the concept of values in ethics, groups negotiate the most important values, and work to clarify and interpret them in the particular context of their technology. Teams also reflect on whether and in what respects current plans instantiate or fail to instantiate the stated values, and surface tensions between values that require tradeoffs. 2. **(Expansion)** Challenging a team's perspective for the purpose of reflecting on their moral intuitions. Envisioning possibilities for the acquisition of ethically relevant information to inform a team's approach, and for the work itself. Once teams' moral intuitions and beliefs have been made more explicit among the group, the next step is to _challenge_ those intuitions and facilitate the internalization of ethical considerations beyond those that were initially surfaced. As part of building ethical awareness, the focus at this stage is to challenge the teams' paradigm from an ethical point of view to help the group consider the key ethical implications that their work contains and also surface relevant knowledge gaps. The centerpiece of this section is a bespoke technomoral scenario that extends the underlying logic of each team's technology 5 or 10 years into the future. The scenario complicates the interplay of technology and society, ends on a cliffhanger, and emphasizes the importance of gaining different points of view as a means to anticipate ethical considerations. Participants role-play in small groups and argue the case against each other, putting to practice their ability to interpret values, argue for or against them in technological design, and build comfort with critical evaluation of their work. Throughout this section, further value tensions are solicited, documented, and described. An inclusion-focused exercise then aids participants in understanding the needs and interests of multiple stakeholder groups and how to include their voices to improve decision making. Participants are invited to a'veil of ignorance' scenario where they are encouraged to envision, articulate and elaborate on the issues that might arise for the stakeholder ecosystem.29 This exercise alerts participants to the possibility their team's perspective is limited, enumerates an initial set of perspectives from which the work would benefit, and emphasizes diverse perspectives collected equitably must be a high priority. Other exercises include anticipation of sociotechnical harm, in which teams are exposed to a taxonomy of harm and brainstorm a number of concrete adverse impacts their work could potentially have, alongside alternative paths for the work in light of those possible impacts. Footnote 29: See Weidinger et al. (2023) for a recent discussion of using Rawls’ veil of ignorance to align AI systems with principles of justice. 3. 
**(Evaluation)** Reasoning through a number of ethical perspectives about the team's work. _Reflection_ and _Expansion_ aim to build the ethical awareness of teams, specifically with an eye towards enabling a better grasp of ethically relevant factors including risks of harm. Once these have been surfaced and internalized in the context of the teams' own technologies, the next step focuses on helping the team learn to deliberate and reason through concrete ethical choice scenarios that are relevant to them. So, after a team's ethical paradigm has been made explicit and challenged by the team itself, ethical reasoning tools are successively introduced to enable the team to learn to reason through trade-offs and choice scenarios that they have identified as arising in the context of their work. During Moral Imagination workshops at Google, moral theories are introduced schematically, and presented as a set of reasoning tools that enable participants to approach a problem from different angles. Key notions such as the "weighing" of competing moral values, trade-offs, and gradability of moral commitments are introduced to teams. Exercises then involve participants responding to arguments, formulating responses from multiple perspectives, and discussing amongst each other to reach a consensus on how best to resolve a particular value tension. The elements described aim to provide teams with a shared foundation to have ethical conversations in a pluralistic manner that goes beyond entirely deontological or consequentialist paradigms. 4. **(Action)** Translation of insights and learnings into concrete team practices. This last step focuses on supporting teams in taking actions based on their learnings. Participants reflect on prior discussions and articulate ethical focus areas that can inform the technology concept and design in future work. Moderators work with participants during and after the workshop to shape these focus areas into responsibility objectives, which serve as actionable statements that can shape OCRs or be included in PRDs. The workshop's focus on discussion, clarification, and negotiation ensures that the responsibility objectives are broadly supported and considered as legitimate North Stars for the team. Moral Imagination workshops enable participants to challenge beliefs and begin to reshape the norms that guide decision-making and planning in their team. Importantly, the norm change at issue here is shared and co-constructed. Furthermore, in our experience, workshops render participants more aware of the value of seeking accurate information about how their work will function as a sociotechnical artifact, and also empower participants to interpret this information in the context of research and development processes. To that end, participants are able to proactively identify and mitigate ethical risks, which supports them in making better use of other ethics controls such as review boards, and in particular empowers teams with a degree of moral autonomy when engaging with these other ethical controls. This holistic approach, on which the Moral Imagination methodology complements more traditional hard controls, ultimately enables technologists teams to develop technologies in a way that is morally informed and which better meets the challenges for technology companies that we articulated in Section 2. ## 4 Conclusion In this paper, we introduced the Moral Imagination approach as a method for driving ethical culture change within technology companies. 
The approach is a "soft control" method that emphasizes externalization and multiperspectival evaluation of the norms and values that precipitate innovation within teams through semi-structured deliberation and negotiation, alongside co-development of action-oriented ethical commitments (for example, through OKRs and PRDs). The Moral Imagination approach has been executed over 50 times at Google, and is positioned alongside "hard controls" such as ethics and privacy reviews that together make up Google's portfolio approach to fostering a culture of responsible innovation in line with Google's AI Principles.30 We have argued that Moral Imagination is uniquely well-positioned to complement and address the limitations of more traditional "hard controls" in the context of technology companies, where team norms and values exhibit substantial influence over research and product decisions given the bottom-up and highly autonomous engineering culture that drives innovation within these companies. Our hope is that the Moral Imagination approach can serve as a template to foster a culture of responsible innovation across the industry. Footnote 30: Pichai (2018). While we are encouraged by the early results of the Moral Imagination approach, we continue to refine the approach and develop new tools and resources to scale the program within Google. This includes a dedicated empirical research track focused on measuring the efficacy of the approach as a method for ethical culture change, alongside the development of new workshop modules that aim to further upskill teams on topics such as ethical reasoning, critical reflection on metrics, and many other topics. Furthermore, we aim to contribute to and enrich the social conversation around ethical culture change within technology companies by publishing case studies alongside the findings of our empirical research, and also by externalizing the methodology to solicit participation and critical input from a broad range of stakeholders. ## Contributions BZ and AM designed the Moral Imagination program with input from KP, AL, and BA. BZ, AM, KP, and GK are practitioners on the program with programmatic support from SB. BL developed the normative framework presented in the paper. BL, GK, SB, BZ, and AM wrote the paper with BL as lead author. BA and AL provided feedback.
2305.06289
Learning Video-Conditioned Policies for Unseen Manipulation Tasks
The ability to specify robot commands by a non-expert user is critical for building generalist agents capable of solving a large variety of tasks. One convenient way to specify the intended robot goal is by a video of a person demonstrating the target task. While prior work typically aims to imitate human demonstrations performed in robot environments, here we focus on a more realistic and challenging setup with demonstrations recorded in natural and diverse human environments. We propose Video-conditioned Policy learning (ViP), a data-driven approach that maps human demonstrations of previously unseen tasks to robot manipulation skills. To this end, we learn our policy to generate appropriate actions given current scene observations and a video of the target task. To encourage generalization to new tasks, we avoid particular tasks during training and learn our policy from unlabelled robot trajectories and corresponding robot videos. Both robot and human videos in our framework are represented by video embeddings pre-trained for human action recognition. At test time we first translate human videos to robot videos in the common video embedding space, and then use resulting embeddings to condition our policies. Notably, our approach enables robot control by human demonstrations in a zero-shot manner, i.e., without using robot trajectories paired with human instructions during training. We validate our approach on a set of challenging multi-task robot manipulation environments and outperform state of the art. Our method also demonstrates excellent performance in a new challenging zero-shot setup where no paired data is used during training.
Elliot Chane-Sane, Cordelia Schmid, Ivan Laptev
2023-05-10T16:25:42Z
http://arxiv.org/abs/2305.06289v1
# Learning Video-Conditioned Policies for Unseen Manipulation Tasks ###### Abstract The ability to specify robot commands by a non-expert user is critical for building generalist agents capable of solving a large variety of tasks. One convenient way to specify the intended robot goal is by a video of a person demonstrating the target task. While prior work typically aims to imitate human demonstrations performed in robot environments, here we focus on a more realistic and challenging setup with demonstrations recorded in natural and diverse human environments. We propose _Video-conditioned Policy learning (ViP)_, a data-driven approach that maps human demonstrations of previously unseen tasks to robot manipulation skills. To this end, we learn our policy to generate appropriate actions given current scene observations and a video of the target task. To encourage generalization to new tasks, we avoid particular tasks during training and learn our policy from unlabelled robot trajectories and corresponding robot videos. Both robot and human videos in our framework are represented by video embeddings pre-trained for human action recognition. At test time we first translate human videos to robot videos in the common video embedding space, and then use resulting embeddings to condition our policies. Notably, our approach enables robot control by human demonstrations in a _zero-shot manner_, i.e., without using robot trajectories paired with human instructions during training. We validate our approach on a set of challenging multi-task robot manipulation environments and outperform state of the art. Our method also demonstrates excellent performance in a new challenging zero-shot setup where no paired data is used during training. ## I Introduction Significant progress has been made in recent years towards learning a generalist robot agent capable of accomplishing a wide array of skills across many environments [1, 2, 3, 4]. Central to this challenge is the ability to effectively specify tasks and rewards to the robot system in a user-friendly manner. In reinforcement learning, a task is commonly defined through a reward function [5]. However, designing good reward functions for each task is often challenging and restricting policy learning to a fixed set of tasks hinders generalization to new tasks. Goal-conditioned imitation and reinforcement learning [6, 7, 8, 9, 10, 11] can learn agents capable of performing a wide diversity of tasks with less supervision. But providing the right goal to define the task requires expert operators to come up with a suitable robot observation of the desired configuration. Other works have shown that we can learn to command generalist robots through language instructions [2, 12, 13, 14] and human videos [2, 15], which are easy to provide for non-expert operators and can generalize to unseen inputs, including behaviors beyond goal-reaching skills. Furthermore, being able to specify robot skills through language or video commands unlocks solving more complex long-horizon tasks by chaining human instructions [1, 16]. Nonetheless, these methods rely on annotated demonstrations for a large set of robot skills, which is often tedious to provide, especially since task annotation must be repeated for each new robot environment. In this work, we propose _Video-conditioned Policy learning_ (ViP), a method that learns to perform manipulation skills given a human video of the desired task in vision-based multi-task robotic environments (Figure 1). 
We demonstrate that, due to the similarity between robot manipulation and videos of humans performing manipulation skills, we can leverage existing large datasets of annotated human videos, such as the Something-Something-v2 dataset [17] (SSv2), to learn to map human videos to robot behaviors _in a zero-shot manner_, without training on paired data between human instructions and robot demonstrations. For instance, the robot may interact in an environment that includes a drawer, yet has received no supervision on what it is expected to do when commanded with a video instruction of a human closing a possibly different drawer. We do so by learning a video encoder using Supervised Contrastive Learning [18], where embeddings of videos of the same task are closer together in cosine distance, and show that video models trained on such large datasets of annotated human videos, which can be easily collected from the internet [17, 19, 20, 21], generalize to the robot video domain. In addition, we show that video encoders trained for human video action recognition readily provide relevant task embeddings for multi-task policy learning. Following recent trends in data-driven robotics, our approach learns from large collections of offline robot experience that can be collected in different ways, e.g. by expert demonstrations given by motion planners, teleoperated play data, or random data generation processes. Fig. 1: Given a human video instruction in a non-robotic scene, our video-conditioned policy ViP controls the robot to perform a similar task zero-shot, i.e. the agent never observes robot data paired with human instructions during training and figures out which manipulation skill it is expected to perform in its environment at test time. We illustrate three examples of different tasks demonstrated by people and the corresponding roll-outs generated by our method for the TableTop robotics environment. Given an offline dataset of robot demonstrations, we learn a video-conditioned policy to regress the action conditioned on both the robot state and a video embedding of the full trajectory the state-action pair belongs to. We also keep each embedding in a library of robot embeddings. At inference, we perform nearest neighbors regression on this library using the cosine distance to the embedding of the human instruction to generate an appropriate robot embedding that is both (a) relevant to the instruction and (b) executable by the policy. We can then execute the human video instruction by decoding the selected embedding into a robot trajectory using the learned policy. Overall, our approach demonstrates that large collections of human videos can enable less supervision in data-driven robotics. Our contributions can be summarized as follows: * we designed a method to train a policy conditioned on robot video embeddings given by a video encoder pretrained on a large dataset of annotated human videos; * our method can map human video instructions to robot manipulation skills without supervision from paired data to bridge the gap between the human and robot domain; * our approach outperforms prior works on a set of multi-task robotic environments. ## II Related Work Different methods have been explored in recent years towards robot learning with human videos. 
Many prior works hence consider the problem of learning to follow human demonstrations using different techniques such as pose or keypoints estimation [22, 23, 24], image translation and inpainting [25, 26, 27, 28], learning object centric representations [29, 30], simulators [31, 32] and meta learning [33]. Contrary to these prior works that often consider closely aligned human videos and robot environments e.g. humans demonstrating the task in the same lab environment as the robot, we assume that human video instructions are collected "in-the-wild" and therefore there exists a large domain gap between human videos and robot workspace. Recent works attempt to perform model pretraining from large-scale human videos in order to get good image representations for robotic control [34, 35]. Other works have shown that we can infer states and actions from diverse videos and use it for reinforcement learning [36, 37, 38, 39]. In our work, we show that encoders trained on large datasets of videos for human action recognition provide good task embeddings for robotic manipulation. Prior works have also considered leveraging large datasets of human videos to learn reward functions for robotics manipulations [40, 15]. The most relevant work to ours is Domain-agnostic Video Discriminator (DVD) [15] which also tackles the problem of commanding robots using in-the-wild human videos. DVD learns a video similarity by training a discriminator network to classify whether two videos are performing the same task on both annotated robot demonstrations and a subset of SSv2 videos. The similarity score is then used as a reward for planning with an action-conditioned video generation method [41] trained on randomly collected robot experience. In contrast, our approach does not require any annotated robot demonstration and can accommodate both randomly collected robot experience and expert demonstrations. Moreover, [15] plans several sub-trajectories as high-dimensional synthetic videos per episode whereas our approach plans a full trajectory in the embedding space of robot videos. Contrastive learning on large-scale datasets has led to significant progresses in a range of computer vision tasks [42, 43]. Contrastive learning has also been used to learn language-conditioned policies by training on large scale datasets of images [44, 45] and videos [46]. In this work, we leverage Supervised Contrastive Learning [18] for human action recognition on the SSv2 dataset to learn a mapping between human video instructions and robotic manipulation. Our work falls under outcome-conditioned action regression [47, 48, 7]. Such works cast reinforcement learning as learning policies conditioned on trajectory information such as future returns [49, 50, 51], many previous timesteps [52, 53] or a desired goal-configuration [54, 6, 7, 47] to learn multi-task policies. In contrast, our policy is conditioned on a video embedding of the full robot trajectory. [2] also showed that we can learn a video-conditioned controllers by conditioning the policy on both a video of the full robot trajectory and paired sets of human videos appropriately collected for each task. Other works [12, 13] show that pairing a small subset of robot data with language instructions enables generalizable language-conditioned task execution. Our method does not require any robot data paired with human instructions and, hence, can learn generic policies from unlabelled robot datasets. 
## III Method In Section III-A we first present an overview of our approach that enables robots to mimic new tasks demonstrated by people in natural human environments. We then detail how we learn a generic video-conditioned policy from randomly-generated demonstrations in Section III-B. We further describe how we condition our policy on human videos in Section III-C. Finally, we describe how we learn a similarity function for matching human videos to robot roll-outs without using paired human-robot data in Section III-D. ### _Method overview_ We aim to perform a robot manipulation task conditioned on a human video in a vision-based multi-task environment. At test time our system receives an input video of a previously unseen task performed by a person, such as pushing a mug or closing a drawer, and controls a robot arm with the intent of performing a similar task in the robot environment. More formally, we consider a set of Markov Decision Processes (MDP) \(\mathcal{MDP}_{i}=(\mathcal{S},\mathcal{A},\mathcal{R}_{i},p)_{i}\) sharing the same observation space \(\mathcal{S}\), action space \(\mathcal{A}\) and dynamics \(p\) but with different reward functions \(\mathcal{R}_{i}\) corresponding to different tasks we would like to solve. We do not assume that the reward functions \(\mathcal{R}_{i}\) are observed. Instead, \(\mathcal{R}_{i}\) must be inferred through a human video of the task \(x^{h}\in\mathcal{X}\). Our goal is to learn a video-conditioned controller \(\pi(.|s,x^{h})\) that predicts actions \(a\in\mathcal{A}\) given the current states \(s\in\mathcal{S}\) and human videos \(x^{h}\in\mathcal{X}\) to maximize the reward functions associated with the input human video. There may exist a large domain gap between videos of humans and robots performing similar tasks. To bridge this gap, we leverage the large-scale video dataset Something-Something-v2 (SSv2) [17] with labeled human actions \(D^{h}=\{x_{i}^{h},y_{i}^{h}\}_{i}\) where labels \(y_{i}^{h}\) for videos \(x_{i}^{h}\) correspond to different manipulation actions such as Opening Something, Moving Something Away from the Camera, etc. Following DVD [15], we train a similarity function \(d(.,.)\) that assigns high values to pairs of videos representing the same task and low values to video pairs of different tasks. Unlike DVD, however, we learn such similarity without any annotated robot videos. Given a human video, we use the learned similarity as a reward \(R(.)=d(.,x^{h})\) for our controller. To learn the policy, we assume access to a dataset of unlabelled robot demonstrations \(D^{r}=\{x_{i}^{r},(s_{i}^{t})_{t},(a_{i}^{t})_{t}\}_{i}\), where \((s_{i}^{t})_{t}\) is a sequence of robot observations, \((a_{i}^{t})_{t}\) is the corresponding sequence of executed robot actions and \(x_{i}^{r}\) is a video of the demonstration. For instance, if the observation space \(\mathcal{S}\) only contains images (without e.g. proprioceptive information), the videos can simply correspond to the whole sequence of states in the trajectory \(x_{i}^{r}=(s_{i}^{t})_{t}\). This offline dataset can be collected in many different ways: by expert demonstrators, rollouts of other policies, through teleoperation or by random data generation. Importantly, we do not assume access to any further information about these demonstrations. Figure 2 presents an overview of our approach: we leverage a video encoder \(f_{\theta}\) trained for human action recognition on SSv2 and a similarity metric between videos. 
During training, we learn a behavior cloning policy \(\pi_{\phi}\) conditioned on robot embeddings given by the video encoder while storing all embeddings of the robot training dataset in a library. At inference, we first use this library and the similarity metric to predict a robot embedding relevant to the human video instruction, then roll out the policy in the environment. ### _Video-conditioned policy learning_ We now describe how to encode a large array of behaviors into a single policy. A key to our approach is the use of the embedding space of a video encoder pretrained on human videos to condition our policy. This video embedding can be seen as a task embedding for our generic multi-task policy. We explain in more detail how we obtain meaningful video embeddings for control in Section III-D. During training, we learn a policy \(\pi_{\phi}\) to regress an action given the current state and a video embedding of a full robot trajectory. Video embeddings act as context defining the global task. The policy is trained with behavior cloning by minimizing the loss: \[\mathcal{L}_{\pi}(\phi)=-\mathbb{E}_{s,a,x^{r}\sim D^{r}}\log\pi_{\phi}(a|s,f_{\theta}(x^{r})). \tag{1}\] At test time, this policy can be commanded to reproduce a robot video input \(x^{r}\) by first encoding the video into an embedding \(e^{r}\) and then executing the actions \(a\sim\pi(.|s,e^{r})\) predicted by the policy at each state \(s\) visited during the roll-out. As we will see from experiments, however, using human videos to directly condition the policy results in poor performance due to the large domain gap between robot and human videos. We therefore propose to first translate human videos to robot video embeddings as described in the next section. ### _Inference with human instructions_ At inference, our first step is to translate the human video instruction \(x^{h}\) into a robot video embedding that both (1) corresponds to the target task and (2) is in distribution for the robot policy. While many methods can instantiate this, we choose to simply use nearest neighbors regression in the robot embedding space. Fig. 2: (Left) During training, we learn a manipulation policy conditioned on robot video embeddings of the full robot trajectories from the robot dataset. At the same time, the robot video embedding of each trajectory in the robot dataset is added to an embeddings library. (Right) At inference, we encode the human video instruction into a human video embedding. We then average the robot embeddings from the library that have highest cosine similarity to the human embedding into a selected robot embedding. Finally, we execute the policy conditioned on this selected embedding. We first encode the videos contained in the robot dataset \(D^{r}\) into a library of robot embeddings \(D^{e}=(e_{i}^{r}=f_{\theta}(x_{i}^{r}))_{i}\). We also encode the video instruction into a human embedding \(e^{h}\). We then perform \(k\) nearest neighbors regression using the distance function \(d\): we first compute all the distances between the human embedding and each embedding in the library, \(d_{i}=d(e^{h},e_{i}^{r})\), then average the top \(k\) embeddings of the library, \(e^{r}=\frac{1}{k}\sum_{i\in\mathcal{N}_{k}}e_{i}^{r}\), where \(\mathcal{N}_{k}\) denotes the indices of the \(k\) most similar robot embeddings. Finally, we perform a policy rollout conditioned on this embedding. As a result, although the policy was trained with behavior cloning, this approach allows us to maximize the similarity of the robot trajectory to the video prompt at inference. 
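The training objective in Eq. (1) and the inference procedure above are simple enough to sketch in a few lines. The following PyTorch-style sketch is illustrative only: the Gaussian policy head, the network sizes, the gym-like environment interface, and the helper names (`VideoConditionedPolicy`, `translate_instruction`, `rollout`) are our own assumptions rather than the authors' implementation, and `f_theta` stands for the frozen pretrained video encoder described in Section III-D.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoConditionedPolicy(nn.Module):
    """pi_phi(a | s, e): Gaussian action distribution conditioned on state and video embedding."""
    def __init__(self, state_dim, embed_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def dist(self, state, embed):
        h = self.net(torch.cat([state, embed], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

def bc_loss(policy, states, actions, robot_videos, f_theta):
    """Eq. (1): negative log-likelihood of demonstrated actions, conditioned on the
    embedding of the full robot trajectory video each state-action pair belongs to."""
    with torch.no_grad():                       # the video encoder is pretrained and frozen
        embeds = f_theta(robot_videos)          # (B, embed_dim)
    log_prob = policy.dist(states, embeds).log_prob(actions).sum(-1)
    return -log_prob.mean()

@torch.no_grad()
def translate_instruction(human_video, f_theta, library, k=3):
    """Section III-C: map a human video to a robot embedding by averaging the k
    library embeddings with the highest cosine similarity to the instruction."""
    e_h = F.normalize(f_theta(human_video.unsqueeze(0)), dim=-1)   # (1, D)
    lib = F.normalize(library, dim=-1)                             # (N, D) library of e_i^r
    sims = (lib @ e_h.t()).squeeze(-1)                             # cosine similarities d_i
    top_k = sims.topk(k).indices
    return library[top_k].mean(dim=0)                              # selected robot embedding e^r

@torch.no_grad()
def rollout(env, policy, e_r, horizon=50):
    """Execute the policy conditioned on the selected robot embedding (gym-like env assumed)."""
    state = env.reset()
    for _ in range(horizon):
        state_t = torch.as_tensor(state, dtype=torch.float32)
        action = policy.dist(state_t, e_r).mean                    # act greedily at test time
        state, _, done, _ = env.step(action.numpy())
        if done:
            break
```

In practice the library of robot embeddings would be computed once from \(D^{r}\) before deployment, so executing a human instruction reduces to a single similarity lookup in the embedding space followed by a standard policy rollout.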
### _Learning a task similarity from human videos_ Many choices of distance metric \(d\) between videos can be used to match a human video to an appropriate robot embedding. In this work, we consider adapting Supervised Contrastive Learning [18] to our video action recognition task on the Something-Something-v2 dataset. We learn our video encoder \(f_{\theta}(.)=f_{\theta}^{p}(f^{b}(.))\), composed of a backbone \(f^{b}(.)\), which maps videos to representation vectors, and a projection network \(f_{\theta}^{p}(.)\), which maps representation vectors to embedding vectors \(e\), such that embeddings from the same class are pulled closer together in cosine distance than embeddings from different classes. As a result, the cosine distance characterizes the similarity between two videos. In contrastive representation learning, the backbone and projection net are typically trained end-to-end from scratch and, after training, the projection net is discarded and a classifier head is learned on top of the representations. Instead, we start from available pretrained backbones for SSv2 classification and simply train the projection net using the Supervised Contrastive Learning loss. Given a batch of \(N\) video/label pairs sampled from the SSv2 dataset, \(\{x_{k}^{h},y_{k}^{h}\}_{k\in[1,..,N]}\sim D^{h}\), we build a multiview batch consisting of \(2N\) pairs, \(\{\tilde{x}_{l}^{h},\tilde{y}_{l}\}_{l\in[1,..,2N]}\), where \(\tilde{x}_{2k-1}^{h}\) and \(\tilde{x}_{2k}^{h}\) are two random augmentations of video \(x_{k}^{h}\) and \(\tilde{y}_{2k-1}^{h}=\tilde{y}_{2k}^{h}=y_{k}^{h}\). We train \(f_{\theta}\) to minimize the Supervised Contrastive Loss: \[\mathcal{L}_{SupCon}(\theta)=\sum_{i\in I}\frac{-1}{|P(i)|}\sum_{p\in P(i)}\log\frac{\exp\left(\langle f_{\theta}(\tilde{x}_{i}^{h}),f_{\theta}(\tilde{x}_{p}^{h})\rangle/\tau\right)}{\sum_{a\in A(i)}\exp\left(\langle f_{\theta}(\tilde{x}_{i}^{h}),f_{\theta}(\tilde{x}_{a}^{h})\rangle/\tau\right)} \tag{2}\] where \(\langle.,.\rangle\) is the cosine similarity, \(I=[1,..,2N]\), \(A(i)=I\backslash\{i\}\), \(P(i)=\{p\in A(i):\tilde{y}_{p}^{h}=\tilde{y}_{i}^{h}\}\), and \(\tau\) is a hyper-parameter. After training, we use the distance \(d(x^{h},x^{r})=\langle f_{\theta}(x^{h}),f_{\theta}(x^{r})\rangle\) between videos \(x^{h}\) and \(x^{r}\) as a measure of similarity that focuses on the semantic aspects of the video. As we show in the experimental section, despite being trained only on human videos, this similarity metric generalizes to robot manipulation tasks. In our experiments, we use the same video backbone as [40] and [15]. Alternative options for \(d\) include the DVD similarity network [15], which learns to classify whether two videos correspond to the same task and uses the classification score between human and robot videos as a distance metric \(d\) on top of the video encoder \(f^{b}\). While our approach implicitly corresponds to the same classification objective, supervised contrastive learning readily equips our encoder with a similarity metric. 
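To make the similarity-learning step concrete, here is a minimal PyTorch sketch of training only the projection head with the Supervised Contrastive loss of Eq. (2) on top of a frozen SSv2 backbone. The module and function names (`ProjectionHead`, `supcon_loss`, `train_projection_head`), the embedding dimension, and the data-loading and augmentation interfaces are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """f_theta^p: maps frozen backbone representations to unit-norm embeddings,
    so that dot products between embeddings equal cosine similarities."""
    def __init__(self, rep_dim, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(rep_dim, rep_dim), nn.ReLU(),
                                 nn.Linear(rep_dim, embed_dim))

    def forward(self, reps):
        return F.normalize(self.net(reps), dim=-1)

def supcon_loss(embeds, labels, tau=0.1):
    """Eq. (2): Supervised Contrastive loss over a multiview batch.
    embeds: (2N, D) unit-norm embeddings; labels: (2N,) action-class labels."""
    sims = embeds @ embeds.t() / tau                               # cosine similarity / temperature
    self_mask = torch.eye(sims.size(0), device=sims.device).bool()
    sims = sims.masked_fill(self_mask, float('-inf'))              # exclude i from A(i)
    log_prob = sims - torch.logsumexp(sims, dim=1, keepdim=True)   # log-softmax over A(i)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask   # P(i): same class, not self
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    return per_anchor.mean()

def train_projection_head(backbone, head, loader, augment, epochs=10, lr=1e-4):
    """Only the projection head is optimized; the SSv2-pretrained backbone f^b stays frozen."""
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    backbone.eval()
    for _ in range(epochs):
        for videos, labels in loader:                              # videos: (N, T, C, H, W) clips
            views = torch.cat([augment(videos), augment(videos)], dim=0)   # 2N augmented views
            with torch.no_grad():
                reps = backbone(views)                             # (2N, rep_dim) frozen representations
            loss = supcon_loss(head(reps), labels.repeat(2))
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Once this head has converged, the cosine similarity between two video embeddings plays the role of \(d(x^{h},x^{r})\) above, and the same encoder can be reused unchanged across robot environments.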
The agent controls a robot arm in four variations of a simulated environment containing a drawer, a cup in front of a coffee machine and a faucet handle. The four environments differ by object positions, camera locations and colors. Robot experience for policy training is collected by controlling the robot end effector to go through three keypoints randomly sampled in the environment. We follow the experimental setup of [15] and consider three tasks: close the drawer, push the cup and turn the faucet to the right. The image encoder representing the current state of the environment is learned end-to-end together with the policy. These environments challenge the ability of our approach to generate meaningful manipulation skills from random robot trajectories. Furthermore, we consider the Kitchen environment initially introduced in [10]. The agent controls a robot arm with a clamp in a kitchen environment containing diverse objects: a microwave, three doors, a kettle and light and knob switches. We consider three opening tasks: open the microwave, open the left door and open the sliding door. Robot demonstrations accomplishing these tasks are available from [34]. However, these demonstrations are not annotated and the agent learns jointly from all these demonstrations without knowing what each demonstration accomplishes. For the Kitchen environment we train the policy on top of R3M image representations following [34] using all demonstrations. This environment is challenging as it requires distinguishing which task the user intends among very similar opening tasks. Figure 3 illustrates the environments. Fig. 3: Illustrations of environments used in our experiments. From left to right: TableTop env1-env4 followed by Kitchen left and right views. Following [15], for each of these tasks, we consider three different human video instructions collected in diverse environments to prompt our method at inference. Note that, in both environments, our approach can use the same video encoder and similarity metric. ### _Comparison to DVD [15]_ We first validate our video-conditioned control policy and compare it to the planning method of DVD [15] on the TableTop environments in the following settings: * _Seen robot demos_: the video similarity is trained on a subset of task-related SSv2 classes and on labelled videos of robot demonstrations for the target tasks collected in env 1 and the rearranged version of env 1. * _Unseen robot demos_: the video similarity is trained on a subset of task-related SSv2 classes and on labelled videos of robot demonstrations corresponding to non-target tasks in env 1 and the rearranged version of env 1. We compare our approach to [15], which uses the DVD similarity on top of a learned action-conditioned video prediction model for control [41]. For the _Seen robot demos_ setting we take results reported in [15]. For the _Unseen robot demos_ setting, we take the best success rate reported in [15] for environment 1 and run the code of the authors to obtain results for environments 2, 3 and 4 (marked with *). For a fair comparison of our control policy, we here run ViP using the same video similarity function as in DVD, which was trained on paired data between human videos and robot demonstrations. Table I presents results for the TableTop environments where, for each environment, we report the average success rates across the three tasks and three human video instructions. The ViP control policy outperforms DVD in both settings and most environments.
We believe these improvements are due to comparing the human video prompt to full robot trajectories instead of shorter sub-trajectories and not relying on synthetic video generation for control as in [15]. To demonstrate stability of our approach, we report results over four training seeds while [15] reports results over evaluation runs. Moreover, our method is significantly faster than DVD as it operates in the space of pre-computed embeddings, while DVD requires to execute its video prediction model for each sampled trajectory. As result, in our experiments ViP requires less than 1 second to complete an episode in the TableTop environment while it takes more than 16 seconds to complete the same task with DVD. We observe that the gains obtained by ViP compared to DVD are lower in the _unseen robot demos_ setting. This can be attributed to the fact that our approach relies more on human videos of the target tasks to achieve good results. ### _ViP without paired data_ While DVD uses paired data with robot and human videos demonstrating execution of similar tasks, obtaining such data is cumbersome at scale, see discussion in Section I. In this section we demonstrate that ViP is able to cope with a more challenging setting where the training of video similarity is done using pairs of human videos only and without access to any robot data. Table II presents our results. In this setting, our approach (_ViP (Ours)_) significantly outperforms DVD, succeeding to perform intended tasks more than \(70\%\) of the time on average across environments, while DVD performs only slightly better than chance. Moreover, we can see that our results in this setting are similar to results in Table I where paired robot demonstrations are used for similarity learning. Indeed, having a global understanding of full robot trajectories makes our action recognition approach less reliant on robot demos, which are more important when planning for short horizon as in DVD. Finally, ViP with the DVD similarity metric also performs well, showing that our approach can accommodate alternative choices of video similarity. Qualitative results of ViP are presented in Figure 1 and in the supplementary video, where we show that failure cases include failing to manipulate the desired object and failure to perform the manipulation task. We further compare our approach to a baseline where we bypass our translation module and directly use human video embeddings to condition the ViP policy (_Human video as input_). Since ViP was only trained on robot video embeddings, embeddings of human videos are out of its training distribution. As result, the policy fails to perform the intended tasks. This highlights the importance of our translation procedure which matches human videos to a library of robot demonstrations. In Figure 4 we evaluate the sensitivity of our method to the number of nearest neighbor samples \(k\) used for translation of human videos in Section III-C. High values of \(k\) ensure that the generated embedding \(e^{r}\) is an average of many appropriate robot embeddings, whereas low values of \(k\) prioritize maximising the similarity score between the predicted robot embedding and the human instruction embedding. While our approach is relatively robust to this hyperparameter, too high values of \(k\) result in lower performance. On the other side, for \(k=1\), meaning that we choose a robot demonstration with the highest similarity to the human video, the performance drops. 
We hypothesise that performing a nearest-neighbor regression has a regularization effect, as naively maximizing the similarity might result in choosing an adversarial robot embedding that is misleadingly for the target task. We set \(k\) to \(1\%\) of the library size for the TableTop environments and to \(0.5\%\) for the Kitchen environment. In Table II we also ablate an alternative choice for video similarity where video embeddings obtained from the SSv2 action classification network are directly compared using a cosine distance measure _(Cosine distance in repr. space)_. Using such a naive video similarity together with ViP results in superior performance compared to DVD, however, it is outperformed by both the DVD similarity with ViP _(ViP (DVD similarity))_ and our full approach, highlighting the importance of training an explicit distance between videos for the success of our method. ### _Kitchen environment_ We evaluate our approach on the kitchen environments in the zero-shot setting. We compare our approach against a version of our method where we select a robot demonstration that solves the target task _(ViP (Oracle))_ as well as a version that randomly selects robot demonstrations from the library _(ViP (Random))_. We also compare against single-task R3M [34], where we trained specialized policies for each task using all the demonstrations available. We report the success rates for each task averaged over the 3 corresponding human video instructions across 4 training seeds in Table III. Comparison between R3M and _Oracle_ shows that our approach successfully learns from multi-task data for precise manipulation skills, achieving competitive results to R3M trained for single tasks. When prompted with a human video instruction in the left camera view (L), our approach successfully opens the correct door when presented with human videos of opening common doors and microwaves. However, the method fails to map human videos of opening sliding doors to appropriate robot skills. On the right camera view (R), our approach struggles more, often opening the wrong object when prompted with videos of sliding doors and microwaves. These results show the limit of training on SSv2, where different human videos of opening something are encouraged to be grouped together by our similarity metric disregarding manipulated objects. ## V Conclusion We propose ViP, a method that learns to map human video instructions to robot skills in a zero-shot manner. We show that by conditioning on video embeddings, we can learn from multi-task robot data without supervision and prompt our policy with an unseen human video instruction at test time. As a step further towards less supervision in data-driven robotics, we demonstrate that, by training on large datasets of diverse labelled human videos, we don't need to pair human instructions to robot data during training. Our experiments demonstrate that ViP can accommodate many different similarity metrics between human instructions and robot manipulations. Future work could explore other forms of similarities, such as mapping language instructions to robotic skills. **Acknowledgements:** This work was funded in part by the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute), the ANR project VideoPredict, reference ANR-21-FAII-0002-01, and the Louis Vuitron ENS Chair on Artificial Intelligence. 
This work was granted access to the HPC resources of IDRIS under the allocation 2021-AD011012947 made by GENCI. We would like to thank Mathis Clautrier and Minttu Alakuijala for helpful feedback. Fig. 4: Average success rate achieved by the policy on the TableTop environments for different values of \(k\). We compare \(k=1\) against values of \(k\) corresponding to different percentages of the library size.
2308.00604
A Close Examination on Inelastic Collision
This paper examines the details of an inelastic collision when a bullet shoots a block vertically upward from below. With the assumption of constant interaction force between them, we obtain quantities of interest including the displacement for the block at the end of collision, the collision time, and in particular, the bullet's depth inside the block, which are not studied under the traditional assumption of very short collision time. We identify the condition under which the collision time can indeed be considered as negligibly short. The theory is applied to a well-known demonstration. Using the data extracted from photos of the demonstration, we calculate the magnitude of interaction force and the collision time, which can in no way be easily measured otherwise.
Zhiwei Chong
2023-07-28T04:22:20Z
http://arxiv.org/abs/2308.00604v1
# A Close Examination on Inelastic Collision ###### Abstract This paper examines the details of an inelastic collision when a bullet shoots a block vertically upward from below. With the assumption of constant interaction force between them, we obtain quantities of interest including the displacement for the block at the end of collision, the collision time, and in particular, the bullet's depth inside the block, which are not studied under the traditional assumption of very short collision time. We identify the condition under which the collision time can indeed be considered as negligibly short. The theory is applied to a well-known demonstration. Using the data extracted from photos of the demonstration, we calculate the magnitude of interaction force and the collision time, which can in no way be easily measured otherwise. Introduction Some problems in college physics are treated in an over-simplified manner. One such simplification is the assumption of very short collision time in an inelastic collision when finding the maximal height of a block shot by a bullet from below in textbook [1] and the "thought-provoking" demonstration in [2; 3; 4]. This approach, which we call the standard approach, is divided into two stages. The first stage is the collision between the bullet and the block in which the total momentum of the system is regarded to be conserved even though the net external force on the system, the gravity, is obviously nonzero. In the second stage, both move up together, and the maximal height is determined by the law of mechanical energy conservation. Other examples with similar over-simplification include the ballistic pendulum [5; 6; 7; 8] and billiard collisions [9; 10]. The standard approach needs to be examined closely for the following reasons. Firstly, it does not consider the bullet's depth inside the block, which is strikingly exhibited in Fig. 4 of Ref. [2]. Secondly, it is not self-consistent; mechanical energy is lost but there is no penetration for the bullet in the block since it treats their vertical displacements as the same. In reality, the bullet's vertical displacement is greater than the block's, and the loss in mechanical energy is due to the negative work done by the force between the bullet and the block. Lastly, there arises a puzzle. The block is considered to remain stationary during the process when the bullet embeds itself in the block, meanwhile it acquires a finite velocity during exactly the same process. In other words, it is reasonable to think that the block starts moving at the very moment the bullet contacts the block. It is worth pointing out that the introduction of the assumption of very short collision time is due to the lack of information about the complicated bullet-block interaction. This is the reason that the standard approach is silent on the bullet's depth inside the block even though it admits the loss in mechanical energy. Therefore, to study the bullet's depth inside the block, it is necessary to make an assumption on the bullet-block interaction. As a first step, resorting to Occam's razor for the moment [11], we make the simplest possible assumption: constant bullet-block interaction. The aim of this paper is to clarify the condition under which the collision time can indeed be considered as short and explain the reason that the block's displacement can be neglected during collision. Moreover, the bullet's conspicuous depth inside the block exhibited in the impressive demonstration in Ref. 
[2] provides an excellent opportunity to check the theory against experiment. Exactly the same issue as in this paper is addressed to the ballistic pendulum problem in Ref. [8]. However, a rather complicated differential equation is derived and numerical algorithm has to be invoked. The simple setup of the problem in this paper has two advantages. It leads to conceptually and analytically transparent results without losing any physics of interest. More importantly, the theoretical results can be directly used to analyze experimental observations in Ref. [2]. In particular, according to the data that can be extracted from the photos in Ref. [2], the magnitude of the interaction force during collision and the collision time can be calculated, which can in no way be directly measured otherwise. ## II The standard approach: very short collision time The statement of the problem or the setup of the experiment is as follows [1; 2]. A block of mass \(M\) lies on a horizontal table. There is a small hole beneath it through which a bullet is shot vertically up at the center of the block. The mass of the bullet is \(m\) and its speed when hitting the block is \(u\). The quantities of interest include the common speed they move up together, the time to reach the highest point, the maximal vertical displacement, and the loss in mechanical energy. The solution is divided into two stages. In the first stage, based on the assumption of very short collision time, the impulse from gravity is ignored. Thereby, the total momentum is _approximately_ conserved, that is, \[m\,u\,=\,(M\,+\,m)\,v, \tag{1}\] where \(v\) is the common velocity of the bullet and the block at the end of collision. It is obtained as \[v\,=\,\frac{m}{M\,+\,m}\,u. \tag{2}\] Then, both move up with the initial speed \(v\) and constant deceleration \(g\) due to gravity. In the second stage, by the law of mechanical energy conservation or simply by kinematic formulas, the maximal height \(h\) for the block is determined by \[\frac{1}{2}\,(M\,+\,m)\,v^{2}\,=\,(M\,+\,m)\,g\,h,\quad h\,=\,\frac{v^{2}}{2 \,g}. \tag{3}\] It is worth pointing out that the bullet and the block have the same vertical displacement in this approach, which implies that the bullet does not penetrate into the block, or the penetration is negligible. The time \(t\) to reach the highest point is \[t\,=\,\frac{v}{g}. \tag{4}\] The loss in mechanical energy \(L\) is the difference between the bullet's initial and the system's total kinetic energy at the end of collision, that is, \[L\,=\,\frac{1}{2}\,m\,u^{2}\,-\,\frac{1}{2}\,(M\,+\,m)v^{2}\,=\,\frac{M}{M\,+ \,m}\,\frac{1}{2}\,m\,u^{2}, \tag{5}\] which is a fraction of the bullet's kinetic energy just before collision. The quantities \(v,h,t\), and \(L\) in Eqs. (2), (3), (4), and (5), respectively, will be compared with their counterparts in Sec. III. A few remarks are in order here. The assumption of very short collision time plays a crucial role in this approach. On the one hand, the impulse from gravity, which is the product of gravity and the collision time, can safely be ignored due to this assumption. This is the very reason that we can apply the law of momentum conservation approximately in Eq. (1). On the other hand, it helps to explain why there is a finite velocity but negligible displacement at the end of collision. 
The velocity of the block is the product of the _average_ acceleration (even though it is not explicitly considered) and the collision time, while its displacement is one half of the product of the former and the _square_ of the latter. Numerically, when the collision time is very short, the block's displacement is a second order infinitesimal, while its velocity is a first order one. Hence, the former is negligible compared with the latter. ## III Constant interaction force The reason for introducing the assumption of negligible collision time is the lack of knowledge about the interaction force. Thereby, to go beyond the assumption of very short collision time, we need a plausible assumption on the bullet-block interaction: constant interaction force. With this assumption, more details of the collision can be studied. In particular, we can calculate quantities such as the collision time, the bullet's depth inside the block, and the block's vertical displacement at the end of collision, which are not addressed in the standard approach. In addition, we pin down the condition under which the collision time can be considered as very short: the interaction force is very large compared with the block's weight. Lastly, all the results under constant but very large interaction force reduce to those in the standard approach. The constant interaction force, the block's acceleration, and the bullet's deceleration are denoted as \(F,A\), and \(a\), respectively. Applying Newton's second law to each object gives \[F\,-\,M\,g = M\,A,\qquad A\,=\,\frac{F}{M}\,-\,g, \tag{6}\] \[F\,+\,m\,g = m\,a,\qquad a\,=\,\frac{F}{m}\,+\,g. \tag{7}\] The collision time \(\tau\) or the time when the bullet stops moving inside the block is determined by equating the velocity of the block, \(A\,\tau\), with that for the bullet, \(u\,-\,a\,\tau\), that is, \[\tau\,=\,\frac{u}{A\,+\,a}\,=\,\frac{M}{F}\,\frac{m}{M\,+\,m}\,u\,=\,\frac{M} {F}\,v\,=\,\frac{v}{r\,g},\quad r\,\equiv\,\frac{F}{M\,g}, \tag{8}\] where the second equality follows from Eqs. (6) and (7), and the third from Eq. (2). The ratio \(r\) plays an important role. When it is very large, or equivalently, the force \(F\) is very large compared with the the block's weight \(M\,g\), the collision time \(\tau\) can be considered as very short. Moreover, we will soon see that in the large \(r\) limit, all the results do reproduce those in Sec. II. Under the assumption of constant interaction force, we are able to calculate the block's vertical displacement \(y^{\prime}\) at the end of collision, which is not studied in the standard approach. Applying the kinematic formula gives \[y^{\prime}\,=\,\frac{1}{2}\,A\,\tau^{2}\,=\,\frac{1}{2}\,(r\,-\,1)\,g\,\left( \frac{v}{r\,g}\right)^{2}\,=\,\frac{r\,-\,1}{r^{2}}\,\frac{v^{2}}{2\,g}\,=\, \frac{r\,-\,1}{r^{2}}\,h, \tag{9}\] where the second equality follows from \(A\) in Eq. (6) and \(r\) and \(\tau\) in Eq. (8). The last equality follows from \(h\) in Eq. (3). As expected, when \(r\) is very large, \(y^{\prime}\) goes to zero. This is the reason that the block's vertical displacement is considered to be negligible in the standard approach. The common velocity \(v^{\prime}\) at which the block and the bullet start moving up together is \[v^{\prime}\,=\,A\,\tau\,=\,(r\,-\,1)\,g\,\frac{v}{r\,g}\,=\,\left(1\,-\, \frac{1}{r}\right)\,v. \tag{10}\] We see that \(v^{\prime}<v\), and \(v^{\prime}\) goes to \(v\) in Eq. (2) when \(r\) goes to \(\infty\). A remark is in order here. 
We see that the common velocity \(v^{\prime}\,=\,A\,\tau\) is proportional to the collision time \(\tau\), meanwhile the block's displacement \(y^{\prime}\,=\,\frac{1}{2}\,A\,\tau^{2}\) at the end of collision is proportional to \(\tau^{2}\). If the collision time \(\tau\) is very short, then the displacement for the block \(y^{\prime}\) is negligible compared with its velocity \(v^{\prime}\). Mathematically speaking, the common velocity \(v^{\prime}\) is a first order infinitesimal in \(\tau\) and the displacement for the block \(y^{\prime}\) is a second order infinitesimal. This is the very mathematical reason that the block acquires a finite velocity while its displacement is negligible at the end of collision under the assumption of very short collision time. The block's maximal vertical displacement \(h^{\prime}\) is the sum of its displacement \(y^{\prime}\) at the end of collision and that covered in the upward projectile motion thereafter, that is, \[h^{\prime}\,=\,y^{\prime}\,+\,\frac{v^{\prime 2}}{2\,g}\,=\,\frac{r\,-1}{r^{2} }\,h\,+\,\left(1\,-\,\frac{1}{r}\right)^{2}\,\frac{v^{2}}{2\,g}\,=\,\left(1\,- \,\frac{1}{r}\right)\,h, \tag{11}\] where the expression for \(h\) in Eq. (3) is used to obtain the last equality. We see that \(h^{\prime}\,<\,h\), and \(h^{\prime}\,\rightarrow\,h\) as \(r\,\rightarrow\,\infty\). Figure 1 compares between the two approaches their relative positions for the block and the bullet at their highest points. The total time for the block to reach the highest point is the sum of the collision time \(\tau\) Figure 1: Comparison between two approaches at their maximal displacements. The left figure shows the situation under the assumption of constant interaction force, and the right is for the standard approach. and that for the upward projectile motion, that is, \[t^{\prime}\,=\,\tau\,+\,\frac{v^{\prime}}{g}\,=\,\frac{v}{r\,g}\,+\,\left(1\,-\, \frac{1}{r}\right)\,\frac{v}{g}\,=\,\frac{v}{g}\,=\,t, \tag{12}\] where the second equality is obtained from \(\tau\) in Eq. (8) and \(v^{\prime}\) in Eq. (10), and the last equality follows from Eq. (4). A bit surprisingly, it takes the same amount of time to reach the highest point as in the standard approach. We believe it to be a coincidence which might depend on the specific form of interaction force. The bullet's maximal depth inside the block is the product of its average speed with respect to the block, \(\frac{1}{2}\,u\), and the collision time \(\tau\), that is, \[d\,=\,\frac{1}{2}\,u\,\tau\,=\,\frac{1}{r}\,\left(1\,+\,\frac{M}{m}\right)\, \frac{v^{2}}{2\,g}\,=\,\left(1\,+\,\frac{M}{m}\right)\,\frac{h}{r}, \tag{13}\] where the second equality follows from Eqs. (2) and (8), and the third from Eq. (3). The first equality needs more explanation. The bullet moves inside the block with constant acceleration. Before the collision, its speed with respect to the block is \(u\). At the end, it becomes 0. Thereby, the average is \(\frac{1}{2}\,u\). An alternative way to obtain this result is to find the difference in displacement between the bullet and the block at the end of collision, which gives the same answer in Eq. (13). It is also of interest to compare the bullet's vertical displacement in the two approaches. In the standard approach, it is equal to the block's displacement \(h\) in Eq. (3). With constant interaction force, it is the sum of the block's displacement \(h^{\prime}\) in Eq. (11) and the bullet's depth \(d\) in Eq. 
(13), that is, \[h^{\prime}\,+\,d\,=\,\left(1\,-\,\frac{1}{r}\right)\,h\,+\,\left(1\,+\,\frac{ M}{m}\right)\,\frac{h}{r}\,=\,\left(1\,+\,\frac{M}{r\,m}\right)\,h. \tag{14}\] Clearly, it is larger than \(h\) in the standard approach. The loss in mechanical energy \(L^{\prime}\) is the product of the constant force \(F\) and the depth \(d\), that is, \[L^{\prime}\,=\,F\,d\,=\,\frac{1}{2}\,F\,u\,\tau\,=\,\frac{1}{2}\,F\,u\,\frac{ v}{r\,g}\,=\,\frac{M}{M\,+\,m}\,\frac{1}{2}\,m\,u^{2}, \tag{15}\] where the last equality follows from \(r\) in Eq. (8) and \(v\) in Eq. (2). The right hand side of Eq. (15) is exactly the same as that in Eq. (5), that is, \[L^{\prime}\,=\,L. \tag{16}\] It is worth pointing out that \(L^{\prime}\) can also be obtained by calculating the difference between the bullet's kinetic energy just before the collision and the system's gravitational potential energy at the highest point. The equality in Eq. (16) implies that the total increase in gravitational potential energy should be the same in both approaches. Indeed, it can be readily verified that \[\left(M\,+\,m\right)g\,h\,=\,M\,g\,h^{\prime}\,+\,m\,g\,(h^{\prime}\,+\,d), \tag{17}\] where the left hand side is the increase in gravitational potential energy in the standard approach, and the right is that under constant interaction force. Equation (17) can alternatively be written as \[h\,=\,\frac{M\,h^{\prime}\,+\,m\,(h^{\prime}\,+\,d)}{M\,+\,m}, \tag{18}\] where the right hand side is interpreted as the vertical displacement for the center of mass. In other words, the equality shows that the maximal displacements for the center of mass are the same in both approaches, even though the maximal displacements for the block are different. The difference in maximal displacement, \(h^{\prime}<h\), can be understood in the following way. In the standard approach, the depth for the bullet inside the block is neglected, and both climb the same distance in the vertical direction. Meanwhile, under constant interaction force, their maximal displacements are not the same anymore; the bullet covers a larger vertical displacement than the block by the amount \(d\) in Eq. (13). Since the maximal displacements for the center of mass are the same in both approaches, the maximal displacement for the block has to be less than that in the standard approach to compensate for the larger maximal displacement for the bullet. We summarize our findings as follows. 1. The total time to reach the highest point, the loss in mechanical energy, and the maximal displacement for the center of mass are the same in both approaches. 2. The maximal displacement for the block (bullet) under constant interaction force is lower (higher) than that in the standard approach. 3. In the large \(r\) limit, that is, when the force \(F\) is very large compared with the block's weight, the results reduce to those in the standard approach. In particular, both the collision time and the bullet's depth inside the block vanish in this limit. ### Further Comparisons This subsection compares the velocities and displacements for the block at time \(\tau\) in both approaches. The velocity at time \(\tau\) or at the end of collision under constant interaction force is \(v^{\prime}\) obtained in Eq. (10). On the other hand, in the standard approach, the velocity for the block at time \(\tau\) is \[v\,-\,g\,\tau\,=\,v\,-\,g\,\frac{v}{r\,g}\,=\,\left(1\,-\,\frac{1}{r}\right)\,v, \tag{19}\] which is exactly the same as \(v^{\prime}\) in Eq. (10). 
In other words, the velocities are different at the end of collision for two approaches; it is \(v\) in the standard approach and \(v^{\prime}<v\) under constant interaction force. However, they are the same at time \(\tau\). The height of the block at time \(\tau\) or at the end of collision under constant force is \(y^{\prime}\) in Eq. (9). Meanwhile, the height of the block \(y\) in the standard approach at time \(\tau\) is \[y\,=\,v\,\tau\,-\frac{1}{2}\,g\,\tau^{2}\,=\,\frac{1}{r}\,\left(2\,-\,\frac{1 }{r}\right)\,h. \tag{20}\] The difference between \(y^{\prime}\) in Eq. (9) and \(y\) in Eq. (20) is \[y^{\prime}\,-\,y\,=\,\frac{r\,-\,1}{r^{2}}\,h\,-\,\frac{1}{r}\,\left(2\,-\, \frac{1}{r}\right)\,h\,=\,-\frac{1}{r}\,h. \tag{21}\] On the other hand, the difference between \(h^{\prime}\) in Eq. (11) and \(h\) in Eq. (3) is \[h^{\prime}\,-\,h\,=\,\left(1\,-\,\frac{1}{r}\right)\,h\,-\,h\,=\,-\frac{1}{r} \,h. \tag{22}\] Thereby, \[h^{\prime}\,-\,h\,=\,y^{\prime}\,-\,y. \tag{23}\] In words, the difference in maximal vertical displacement for the block is the same as that at time \(\tau\). This can be understood in the following way. The velocities at time \(\tau\) are the same for the two approaches as shown in Eqs. (10) and (19). Thereby, the displacement that the block ascends thereafter is the same. As a result, the difference in the maximal vertical displacements must be the same as that at time \(\tau\). The above findings are illustrated in Fig. 2. ## IV Estimation of interaction force and collision time The thought-provoking demonstration in Ref. [2] provides us with an excellent opportunity to check the theory in this paper. Unfortunately, the emails of the authors are not in use anymore, and we are not able to contact with them to get the data such as the mass of the block and bullet, the bullet's depth inside the block, and the model of the pistol which may provide the bullet's muzzle velocity. In particular, if we could get the video of the experiment, we would be able to measure the time for the block to reach its maximal height. The thickness the block is about 7 cm, and the pistol's caliber is.22 which enables us to estimate the mass and muzzle velocity of the bullet [2]. Moreover, Figs. 2 and 3 in the reference make it possible to roughly measure the block's maximal height and the bullet's depth inside the block. On the upper half of Table. 1, we list the available data as input to our calculation. We will explain how they are obtained in Sec. IV.1. We are ready to calculate the quantities that are either not available to us or cannot be measured directly. From Eqs. (13), (8), and (2), the depth can be expressed as \[d\,=\,\frac{1}{2}\,u\,\tau\,=\,\frac{1}{2}\,u\,\frac{v}{r\,g}\,=\,\frac{m}{r\, (M\,+\,m)}\,\frac{u^{2}}{2\,g}. \tag{24}\] On the other hand, the height \(h^{\prime}\) in Eq. (11) can be expressed in terms of \(m\) and \(u\) as \[h^{\prime}\,=\,\left(1\,-\,\frac{1}{r}\right)\,h\,=\,\left(1\,-\,\frac{1}{r} \right)\,\frac{v^{2}}{2\,g}\,=\,\frac{(r\,-\,1)\,m^{2}}{r\,(M\,+\,m)^{2}}\, \frac{u^{2}}{2\,g}, \tag{25}\] where the second equality follows from Eq. (3) and the third from Eq. (2). Plugging in the Figure 2: Comparison between two approaches at time \(\tau\). The left figure shows the situation under the assumption of constant interaction force, and the right is for the standard approach. values for \(m,u,d\), and \(h^{\prime}\) from Table 1 into Eqs. (24) and (25) gives \[r\,\approx\,1391,\quad M\,\approx\,175.68g. 
\tag{26}\] Plugging the values for \(r\) and \(M\) into Eq. (8) gives \[F\,=\,r\,M\,g\,\approx\,2394.42\ \mathrm{N},\quad\tau\,=\,\frac{v}{r\,g}\,=\, \frac{m\,u}{r\,(M\,+\,m)\,g}\,\approx\,0.33\ \mathrm{ms}. \tag{27}\] Note the magnitude of \(F\) is rather large; it is as large as the weight of a mass 244.08 Kg! The values for \(M,\tau\), and \(F\) are listed as calculated quantities on the lower half of Table 1. Another quantity that is of our interest but cannot be measured directly is the displacement of the block at the end of collision. From Eqs. (9), (13), and the data in Table 1, we obtain \[y^{\prime}\,=\,\frac{1\,-\,\frac{1}{r}}{1\,+\,\frac{M}{m}}\,d\,\approx\,0.73 \mathrm{mm}, \tag{28}\] which is much smaller than the bullet's depth in the block. These results are interesting. By measuring the maximal height \(h^{\prime}\) for the block and the bullet's depth \(d\) in the block, we are able to calculate the rather large interaction force \(F\) on one hand and the very short collision time \(\tau\) on the other. These two quantities can in no way be directly measured with any imaginable means. Finally, the total time \(t^{\prime}\) to reach the highest point is \[t^{\prime}\,=\,t\,=\,\frac{v}{g}\,=\,\frac{m}{M\,+\,m}\,\frac{u}{g}\,\approx\, 0.46\mathrm{s}. \tag{29}\] Had we obtained the video of the demonstration, we could have been able to compare this calculation with the time extracted from video to examine the validity of our assumption. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Input & \(h^{\prime}\) & \(d\) & \(m\) & \(u\) & / \\ \hline Value & 101.5 cm & 5.7 cm & 2.268 g & 350.1 m/s & / \\ \hline \hline Calculated & \(M\) & \(\tau\) & \(F\) & \(y^{\prime}\) & \(t^{\prime}\) \\ \hline Value & 175.68 g & 0.33 ms & 2394.4 N & 0.73 mm & 0.45 s \\ \hline \end{tabular} \end{table} Table 1: Input and Calculated Quantities for the Collision. ### Method to Obtain the Data The size of the block is given in Ref. [2], and its thickness is about 7 cm. From Fig. 2 in the reference, the maximal height \(h^{\prime}\) of the block is about 14.5 multiples of the block's thickness, that is, \(h^{\prime}\,\approx\,14.5\times 7\,=\,101.5\) cm. From Fig. 4, the bullet's depth inside the block is estimated as \(d\,\approx\,5.7\) cm. The pistol used in the demonstration is of caliper.22, but the model of the pistol is not provided. However, the mass of this type of bullet and its muzzle velocity can easily be obtained from the internet [12]. The mass of the bullet ranges from 30 gr to 40 gr (\(1\) gr \(\approx\) 0.065 g). The muzzle velocity ranges from 1070 fps to 1600 fps (\(1\) fps \(\approx\) 0.305 m/s). The muzzle energy is between 100 ft.lbf and 105 ft.lbf (\(1\) ft.lbf \(\approx\) 1.356 J). The range for the muzzle energy is rather narrow, and the range for the muzzle velocity is rather wide. Thereby, we use the muzzle energy and the bullet mass to calculate the muzzle velocity. In the following, the muzzle energy is chosen as 102.5 ft.lbf which is approximately 138.971 J. The mass of the bullet is chosen as 35 gr which is about 2.268 g. With the notations in this paper, these values are \(\frac{1}{2}\,m\,u^{2}\,=\,138.971\) J and \(m\,=\,2.268\) g from which the muzzle velocity is obtained as \(u\,\approx\,350.1\) m/s. ## V Conclusion and Discussion This paper closely examines the details of inelastic collision by introducing the assumption of constant interaction force to replace the assumption of very short collision time. 
Comparisons are made for various quantities of interest between the two approaches. In the large \(r\) limit, that is, when the interaction force is very large compared with the weight of the block, all the results reduce to those obtained under the assumption of very short collision time. However, one question arises: to what extent do the results in this paper depend on the particular assumption of a constant interaction force? In other words, will it still take the same amount of time for the block to reach its highest point when the interaction force is no longer constant? Will the energy loss be the same? These are good questions for a student project.
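As a numerical companion to the estimates in Sec. IV, the short script below recovers \(r\), \(M\), \(F\), \(\tau\), \(y^{\prime}\), and \(t^{\prime}\) from the measured inputs of Table 1 by combining Eqs. (24) and (25). It is only a cross-check sketch: it assumes \(g=9.81\) m/s\(^2\), so the outputs differ from the quoted values by rounding only.

```python
import math

g = 9.81                      # m/s^2 (assumed value)
m, u = 2.268e-3, 350.1        # bullet mass (kg) and impact speed (m/s), Table 1
h_p, d = 1.015, 0.057         # block max height h' and bullet depth d (m), Table 1

K = u**2 / (2 * g)            # u^2 / 2g, which appears in Eqs. (24) and (25)
A = m * K / d                 # Eq. (24):  r (M + m)             = m K / d
B = m**2 * K / h_p            # Eq. (25):  r (M + m)^2 / (r - 1) = m^2 K / h'

# Combining the two relations gives r (r - 1) = A^2 / B, a quadratic in r.
C = A**2 / B
r = 0.5 * (1 + math.sqrt(1 + 4 * C))
M = A / r - m                 # block mass from r (M + m) = A

v = m * u / (M + m)           # Eq. (2): common velocity in the standard approach
F = r * M * g                 # Eq. (8): constant interaction force
tau = v / (r * g)             # Eq. (8): collision time
y_p = (1 - 1 / r) / (1 + M / m) * d   # Eq. (28): block displacement during collision
t_tot = v / g                 # Eq. (29): total time to reach the highest point

print(f"r = {r:.0f}, M = {M*1e3:.1f} g, F = {F:.0f} N")
print(f"tau = {tau*1e3:.2f} ms, y' = {y_p*1e3:.2f} mm, t' = {t_tot:.2f} s")
```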
2305.00820
Experimental Realization of Entangled Coherent States in Two-dimensional Harmonic Oscillators of a Trapped Ion
Entangled coherent states play pivotal roles in various fields such as quantum computation, quantum communication, and quantum sensing. We experimentally demonstrate the generation of entangled coherent states with the two-dimensional motion of a trapped ion system. Using Raman transitions with appropriate detunings, we simultaneously drive the red and blue sidebands of the two transverse axes of a single trapped ion and observe multi-periodic entanglement and disentanglement of its spin and two-dimensional motion. Then, by measuring the spin state, we herald entangled coherent states of the transverse motions of the trapped ion and observe the corresponding modulation in the parity of the phonon distribution of one of the harmonic oscillators. Lastly, we trap two ions in a linear chain and realize Molmer-Sorensen gate using two-dimensional motion.
Honggi Jeon, Jiyong Kang, Jaeun Kim, Wonhyeong Choi, Kyunghye Kim, Taehyun Kim
2023-05-01T13:39:19Z
http://arxiv.org/abs/2305.00820v1
Experimental Realization of Entangled Coherent States in Two-dimensional Harmonic Oscillators of a Trapped Ion ###### Abstract Entangled coherent states play pivotal roles in various fields such as quantum computation, quantum communication, and quantum sensing. We experimentally demonstrate the generation of entangled coherent states with the two-dimensional motion of a trapped ion system. Using Raman transitions with appropriate detunings, we simultaneously drive the red and blue sidebands of the two transverse axes of a single trapped ion and observe multi-periodic entanglement and disentanglement of its spin and two-dimensional motion. Then, by measuring the spin state, we herald entangled coherent states of the transverse motions of the trapped ion and observe the corresponding modulation in the parity of the phonon distribution of one of the harmonic oscillators. Lastly, we trap two ions in a linear chain and realize Molmer-Sorensen gate using two-dimensional motion. ## I Introduction For the last few decades, the coherent state has been the subject of intense theoretical and experimental investigation [1]. It is considered to be a quantum state with the most classical properties because its spread is the minimal allowed by the uncertainty principle and its trajectory of time evolution is identical to that of the classical harmonic oscillator [2]. Its multipartite extension, the entangled coherent state, has been a useful theoretical tool in various fields of quantum optics as it is the entangled superposition of the most "classical" quantum states. It has been used in theoretical studies concerning quantum information processing [3; 4; 5; 6; 7; 8; 9], quantum metrology [10], and fundamental tests of physics such as Bell's inequality and Leggett's inequality [11; 12]. Despite their sensitivity to decoherence, entangled coherent states have been experimentally realized in a few experiments involving photons [13] and superconducting circuits [14; 15]. The trapped ion has been an extremely valuable tool for studying the quantum world because it is highly isolated from the environment yet can be precisely controlled. The single-mode superposition of coherent states or cat states have been realized in trapped ion systems in various experiments using the motional state of the trapped ion [16; 17; 18; 19; 20]. There have been several theoretical works on the implementation of entangled coherent states in trapped ion systems [21; 22; 23; 24], but none have been experimentally implemented so far. In this work, we report on the realization of entangled coherent states with the two dimensional motion of a trapped ion. We implement the simultaneous spin-dependent force (SDF) on the ion in the two principal axes (X and Y) by making the transverse trap potential nearly isotropic so that the secular frequencies of the X and Y modes are very close. By choosing a laser detuning between the X and Y mode frequencies and driving a bichromatic transition with the blue and red sidebands, we excite the motional modes in the two radial directions concurrently with varying ratios of coupling strengths to each mode. For a single ion, we generate Lissajous-curve-like motion in two dimensions with various commensurate oscillation periods in each direction and observe corresponding periodic variation in the spin state [25]. With mid-circuit measurement, we decouple the spin from the motion and herald the entangled coherent state of motion in two transverse axes. 
This is verified by observing the modulation of phonon number parity, which results from the periodic entanglement and disentanglement of the two motional modes. Also, in an ion chain consisting of two ions, we demonstrate the successful generation of a Bell state using Molmer-Sorensen interaction where the geometric phase is accumulated via motion in two spatial dimensions, which reduces the required Rabi frequency compared to the one-dimension case. ## II Results We trap \({}^{171}\)Yb\({}^{+}\) ions in the center of a blade-type trap with dimensions specified in 1(a) [26; 27]. A radio-frequency (RF) voltage oscillating at 15.3 MHz is applied to the RF blades for radial confinement, and the other two blades are grounded. A DC offset voltage can be applied to the RF blades to rotate the radial principal axes and change the separation between the X and Y mode secular frequencies. The amplitude of the RF field is ac tively stabilized by a PID controller [28]. As can be seen in 1(b) and (c), the mean secular frequency for the radial modes is typically set to around \(\omega_{X,Y}\simeq 2\pi\times 1250\) kHz. The exact value changes by a few kHz every day due to the thermal drift of the RF voltage sampling [29]. The degeneracy of the transverse axes is lifted by \(2\pi\times 27.8\) kHz, with the Y axis frequency higher. A DC voltage of 1300 V is applied to the endcap electrodes, and the resulting axial secular frequency is \(\omega_{Z}=2\pi\times 120\pm 0.12\) kHz. The qubit states are defined as \(\left|\downarrow\right\rangle=\left|S_{1/2},F=0,m_{F}=0\right\rangle\) and \(\left|\uparrow\right\rangle=\left|S_{1/2},F=1,m_{F}=0\right\rangle\). For experiments involving a single ion, the qubit state is measured by the standard fluorescence detection method [30]. For a two-ion chain, we use histogram fitting to infer the population of the three possible classes of qubit states, \(\{\left|\downarrow\downarrow\right\rangle\}\), \(\{\left|\downarrow\uparrow\right\rangle\}\), \(\{\left|\uparrow\downarrow\right\rangle\}\) and \(\{\left|\uparrow\uparrow\right\rangle\}\)[31]. The red sideband, blue sideband, and carrier transitions are implemented by applying appropriate detunings to the stimulated Raman transition [32]. It is realized by two perpendicular 355-nm pulsed laser beams which enter the trap from the bottom (\(R_{V}\)) and the side (\(R_{H}\)). Their relative frequencies and phases are controlled by acousto-optic modulators (AOMs). The Raman transition momentum vector \(\Delta\vec{k}\) is perpendicular to the Z axis and has components in both radial axes with angles \(\theta_{Y}=24^{\circ}\) and \(\theta_{X}=66^{\circ}\) as shown in 1(a). This results in asymmetric Lamb-Dicke factors, \(\eta_{X}=0.05\) and \(\eta_{Y}=0.11\), for the modes. The beating of the pulsed laser is stabilized by a feed-forward system which shifts the driving RF frequency of the AOM that controls the (\(R_{V}\)) beam [33]. ### Entanglement of spin with multiple motional modes We will use the notation \(\left|s\right\rangle\left|a\right\rangle\left|b\right\rangle\) to specify the quantum state of the system, where \(s\) denotes the qubit state of the ion chain with possible values of \(\uparrow\) and \(\downarrow\) for a single ion and their tensor product for a chain of two ions. \(a\) and \(b\) indicate the quantum states of the X and Y modes either in Fock state or coherent state basis. We realize the SDF Hamiltonian by driving the red and blue sidebands of the motional modes simultaneously with the same strength. 
When there is a symmetric detuning from the sidebands, the position of the wave packet in phase space modulated by a frequency proportional to the detuning [17], resulting in a circular trajectory as shown in 2(e). For a single trapped ion in a two-dimensional harmonic potential subject to a symmetrically detuned bichromatic beam, we have the following interaction Hamiltonian \[\hat{H} =\frac{\hbar\Omega\eta_{X}}{2}\left(\hat{a}_{X}e^{-i(\delta_{X}t +\phi_{M})}+\hat{a}_{X}^{\dagger}e^{i(\delta_{X}t+\phi_{M})}\right)\hat{\sigma }_{\phi_{S}}+\] \[\frac{\hbar\Omega\eta_{Y}}{2}\left(\hat{a}_{Y}e^{-i(\delta_{Y}t +\phi_{M})}+\hat{a}_{Y}^{\dagger}e^{i(\delta_{Y}t+\phi_{M})}\right)\hat{\sigma }_{\phi_{S}} \tag{1}\] where \(\eta_{j}\) and \(\delta_{j}\) are the Lamb-Dicke factor for the \(j\)-th Figure 1: **Experimental setup.** (a) Schematic diagram of trap electrodes, ion, and pulsed laser beams for Raman transition. \(\Delta\vec{k}\) indicates the direction of momentum transfer, which is the difference of the two pulsed laser beams, \(R_{V}\) and \(R_{H}\). Upper right is the cross-section of the trap in the transverse plane. The angles between \(\Delta\vec{k}\) and the Y and X principal axes are \(24^{\circ}\) and \(66^{\circ}\), respectively. The ion chain is formed along the Z axis. (b) Representative spectra showing the blue sidebands for the transverse modes of a single ion and (c) of a linear chain of two ions where \(X_{cm}\) and \(Y_{cm}\) are in-phase modes and \(X_{tilt}\) and \(Y_{tilt}\) are out-of-phase modes. axis and the detuning from the center-of-mass mode of the \(j\)-th axis, respectively. \(\hat{d_{j}}(\hat{a_{j}}^{\dagger})\) is the phonon annihilation (creation) operator for the \(j\)-th axis and \(\Omega\) is the Rabi frequency of the Raman transition. The motion and spin phase of spin-dependent interaction is proportional to the difference, \(\phi_{M}=(\phi_{b}-\phi_{r})/2\), and sum, \(\phi_{S}=(\phi_{b}+\phi_{r})/2\), of the laser phases for the blue and red sidebands, \(\phi_{b}\) and \(\phi_{r}\). We will use \(\left|+\right\rangle=1/\sqrt{2}(\left|\downarrow\right\rangle+e^{i\phi_{S}} \left|\uparrow\right\rangle)\) and \(\left|-\right\rangle=1/\sqrt{2}(\left|\downarrow\right\rangle-e^{-i\phi_{S}} \left|\uparrow\right\rangle)\) to indicate the eigenstates of the \(\hat{\sigma}_{\phi_{S}}\) operator. We assume \(\phi_{S}=0\) for simplicity. The X axis terms and Y axis terms act on their respective Hilbert spaces, yielding the following time evolution operator with displacements in X and Y phase spaces defined as \(\alpha\left(t\right)=\eta_{X}\Omega/2\delta_{X}\left(1-e^{-i\delta_{X}t} \right)e^{-i\phi_{M}}\) and \(\beta\left(t\right)=\eta_{Y}\Omega/2\delta_{Y}\left(1-e^{-i\delta_{Y}t}\right) e^{-i\phi_{M}}\). 
The time evolution operator is \(\hat{U}(t)=\left|+\right\rangle\langle+\right|\hat{D}_{X}(\alpha(t))\hat{D}_{ Y}(\beta(t))+\left|-\right\rangle\left\langle-\right|\hat{D}_{X}(-\alpha(t))\hat{D}_{ Y}(-\beta(t))\) where \(\hat{D}_{X}(\alpha(t))\) and \(\hat{D}_{Y}(\beta(t))\) are the displacement operators defined as to the initial state of the ion, \(\left|\psi(t=0)\right\rangle=\left|\downarrow\right\rangle\left|0\right\rangle \left|0\right\rangle\), after sideband cooling and qubit initialization, we get the following wave function which exhibits spin-motion entanglement in both motional modes: \[\left|\psi(t)\right\rangle=\frac{1}{\sqrt{2}}(\left|+\right\rangle\left|\alpha( t)\right\rangle\left|\beta(t)\right\rangle+\left|-\right\rangle\left|-\alpha(t) \right\rangle\left|-\beta(t)\right\rangle) \tag{2}\] The time evolution of the spin state for various ratios of detunings to the X and Y modes, \(R=\delta_{X}/\delta_{Y}\), is presented in 2(a)-(d). The dashed lines are fits to the following equation [17] \[P_{\uparrow}\left(t\right)=\frac{1}{2}\left(1-e^{-\left(\bar{n}_{X}+\frac{1} {2}\right)\left|2\alpha(t)\right|^{2}-\left(\bar{n}_{Y}+\frac{1}{2}\right) \left|2\beta(t)\right|^{2}}e^{-t/\tau}\right) \tag{3}\] where \(\tau\) is an empirical decoherence rate and \(\bar{n}_{X}\) and \(\bar{n}_{Y}\) are mean phonon numbers of the X and Y modes, which in our system are \(\simeq 0.2\) and \(\simeq 0.1\), respectively. In each phase space, the wave packets periodically move in a circular trajectory whose period is defined by the inverse of the detuning of the bichromatic beam. When only one of the motional modes return to the origin in the phase space, the spin states only partially interfere and the measured spin state deviates from its original state, \(\left|\downarrow\right\rangle\). When the wave packets return to the origin in both phase spaces at the same time, the spin state fully returns to the initial state [17]. ### Generation of entangled coherent state and observation of phonon number parity modulation The tripartite entangled state of spin and the two motions can be transformed into an entangled coherent state (ECS) of the two motional degrees of freedom by projecting the spin state with mid-circuit measurement. Modifying this sequence to displace only a single motional mode will produce a single-mode cat state of motion, as experimentally shown by Kienzler _et al_ (see the Supplementary Material) [20]. We start the experimental sequence by cooling the ion to the ground state with sideband cooling pulses. Then we apply the two-mode SDF, which acts on both the X and Y motions simultaneously, for a duration of \(t_{SDF}\). In the following step, the ion is irradiated with a near-resonant 369.5-nm laser beam that serves as the detection beam. It is turned on for 500 \(\mu\)s, and the scattered photons are collected by a photomultiplier tube. We then drive a blue sideband Rabi oscillation on the Y mode for varying amounts of time and measure the spin state of the ion. This sequence is shown in 3(a). We post-select the wave function with \(\left|\downarrow\right\rangle\) spin state which is heralded by the detection of less than two photons during the mid-circuit detection phase. 
This results in the following wave function: \[\left|\psi_{ECS}(t)\right\rangle=\left|\downarrow\right\rangle\frac{\left| \alpha(t)\right\rangle\left|\beta(t)\right\rangle+\left|-\alpha(t)\right\rangle \left|-\beta(t)\right\rangle}{\sqrt{2+2e^{-2(\left|\alpha(t)\right|^{2}+\beta( t)\right|^{2})}}} \tag{4}\] The complimentary data sets which have more than or equal to two photons detected correspond to the entangled coherent state with the opposite phase \(\left|\alpha(t)\right\rangle\left|\beta(t)\right\rangle-\left|-\alpha(t) \right\rangle\left|-\beta(t)\right\rangle\), but we choose not to use them because photon scattering affects coherence of motional states via recoil [20]. The blue sideband Rabi oscillation is then fitted to the following model to retrieve phonon number distribution of the Y mode [34; 20; 35] \[P_{\uparrow}\left(t_{BSB}\right)=\sum_{n=0}^{N}\frac{p_{Y,n}(t)}{2}\left(1- \cos\left(\Omega_{n+1,n}t_{BSB}\right)e^{-t_{BSB}/\tau}\right) \tag{5}\] where \(p_{Y,n}(t)\) is the population for \(n\)-phonon state in Y-motion after applying the SDF for \(t\), and \(N\) is the maximum phonon number we consider, \(\Omega_{n+1,n}\) is the first order blue sideband Rabi frequency for \(n\)-phonon state, and \(\tau\) is the coherence time. When \(\alpha=0\), the wave function in 4 is reduced to a single-mode cat state of the Y mode \((\left|\psi_{Y}(t)\right\rangle=\left|\downarrow\right\rangle(\left|\beta(t) \right\rangle+\left|-\beta(t)\right\rangle)/\sqrt{2+2e^{-2\left|\beta(t)\right|^ {2}}})\) and the phonon population is expected to be only in the even number states. However, for a non-zero \(\alpha\), the interference between the two coherent states with opposite phases in the Y phase space, \(\left|\beta\right\rangle_{Y}\) and \(\left|-\beta\right\rangle_{Y}\), is suppressed by the motion in the X-axis, \(\left|\alpha\right\rangle_{X}\) and \(\left|-\alpha\right\rangle_{X}\). Consequently, the parity of the Y mode population is modulated as the size of the displacement in the X mode changes. The resulting time evolution of the phonon population distribution is as follows. \[p_{Y,n}(t) = Tr(\{\left|\downarrow\right\rangle\left\langle\downarrow\right| \otimes\hat{I}_{X}\otimes\left|n\right\rangle\left\langle n\right|\}\left|\psi_{ ECS}\right\rangle\left\langle\psi_{ECS}\right|) \tag{6}\] \[= \frac{e^{-\left|\beta(t)\right|^{2}}(\left|\beta\left(t\right) \left|2^{n}/n!\right|)}{1+e^{-2\left(\left|\alpha(t)\right|^{2}+\left|\beta(t) \right|^{2}\right)}}(1+\left(-1\right)^{n}e^{-2\left|\alpha(t)\right|^{2}})\] where \(\left\langle n\right|\) is the \(n\)-th number state of the Y mode. 6 results in a modulation of phonon number parity defined as \(\Pi\left(t\right)=\sum_{n}\left(-1\right)^{n}p_{Y,n}(t)\), because the interference between the odd number states is suppressed by entanglement with the X mode. We include the effect of imperfect sideband cooling in the model, which leaves some population in the 1-phonon state (see Methods). We demonstrate the generation of entangled coherent states at various values of \(R=\delta_{X}/\delta_{Y}\). 3(d) corresponds to \(R=-2\), where the ion is periodically displaced in the X axis at a frequency which is twice of the frequency of the periodic displacement in the Y axis. Therefore, according to 6, the parity of the phonon distribution of the Y motion is expected to be modulated at half the period of its periodic displacement. We repeat the same experiment with \(R=-2/3\). 
In this case, the parity modulation pattern is expected to span three periods of the Y displacement as shown in 3(e). The observed variation in phonon number parity is in good agreement with the theoretical model, and is a direct consequence of the entanglement of the two motional modes. 3(b) and (c) are the two representative phonon distributions. The Y mode displacement is maximum for both, but the phonon number parity is \(0.89\pm 0.09\) for 3(b) and \(0.22\pm 0.06\) for 3(c). Also, 3(c) shows a clear deviation from the single-mode cat state phonon distribution with a significant population in the \(\left|1\right\rangle_{Y}\) and \(\left|3\right\rangle_{Y}\) states. In 3(d) and (e), we also plotted the time evolution of the mean phonon numbers of the Y mode, which approximately corresponds to the square of the absolute value of the displacement in the Y mode phase space. The theoretical curves for the mean phonon numbers of the X and Y modes are calculated by using the Rabi frequency and 1-phonon state population of each mode inferred by fitting the phonon number parity data. ### Molmer-Sorensen gate with two-dimensional motion Next, we trap two ions in a linear chain and investigate how the two-dimensional coherent motion can be utilized in an ion chain by realizing Molmer-Sorensen interaction [36; 37; 38] involving modes from multiple principal axes. We first observe the time evolution of the two-qubit states under two-dimensional Molmer-Sorensen interaction, with the Rabi frequency and detuning calibrated to generate a Bell state (\(1/\sqrt{2}(\left|\uparrow\uparrow\right\rangle+\left|\downarrow\downarrow \right\rangle)\)). Then we measure the fidelity of the resulting state by observing qubit state parity oscillation, which is implemented by applying a \(\pi/2\)-pulse that acts on both qubits and scan Figure 2: **Entanglement of spin and the two motional modes.** In (a-d) time evolution of the spin state for various detuning ratios is observed. Partial disentanglement of spin and motion takes place when wave packets return to the origin in only one dimension. Complete disentanglement is observed when wave packets return to the origin in both dimensions. Error bars indicate quantum projection noise. Solid curves are fits to 3. \(R\) is the ratio of detunings to the radial modes, defined as \(R=\delta_{X}/\delta_{Y}\). Values of \(R\) estimated from fitting are (a) \(-0.261\pm 0.001\), (b) \(-0.653\pm 0.003\), (c) \(-1.547\pm 0.016\), and (d) \(-3.892\pm 0.071\). Times at which each motional mode is disentangled from the spin are indicated by vertical lines. Solid red lines correspond to the Y mode and dashed blue lines to the X mode. (e) A representative phase space diagram for the motional modes. In each phase space, the wave function evolves into a coherent superposition of two wave packets, corresponding to the \(\phi_{S}\) basis spin eigenstates. The trajectories are determined by Rabi frequency and detuning from each mode. ning its phase, \(\phi\)[37]. The gate fidelity is estimated at \(R=-1/3\), which in this context is defined as the ratio between the detunings to the X-cm mode and the Y-tilt mode, \(\delta_{X_{cm}}/\delta_{Ytilt}\). The measured gate fidelity is \(89.7\pm 0.6\) % which is comparable to our single axis Molmer-Sorensen gate fidelity, \(93.2\pm 0.6\) %. This indicates that Molmer-Sorensen interaction can be expanded into multiple dimensions naturally. 
Also, the gate Rabi frequency is reduced compared to the single-axis case, because more phase spaces contribute to the geometric phase (\(\Phi_{n}(t)=\eta_{n1}\eta_{n2}/(2d_{n})^{2}\left(d_{n}t-\sin\left(d_{n}t\right)\right)\Omega_{0}^{2}\), where \(\eta_{n1}\) and \(\eta_{n2}\) are the Lamb-Dicke factors of each ion for the \(n\)-th mode, \(d_{n}\) is the detuning from the \(n\)-th mode and \(\Omega_{0}\) is the Rabi frequency), as can be seen in Fig. 4(a). This effect is most pronounced at \(R=-1/3\), where the geometric phase contribution is similar for both axes; thus, the required Rabi frequency is reduced by a factor of \(\simeq 1/\sqrt{2}\). Here, the Rabi frequency needed to generate an equal superposition of \(\left|\downarrow\downarrow\right\rangle\) and \(\left|\uparrow\uparrow\right\rangle\) using both axes is \(2\pi\times 86.1\) kHz, which is in agreement with the experimentally calibrated value of \(2\pi\times 81.3\pm 0.6\) kHz. This is \(28.3\%\) lower compared to the Rabi frequency required using only the X axis, and \(30.1\%\) lower compared to using only the Y axis, assuming the same gate time and gate detuning. Figure 3: **Generation of entangled cat state and measurement of phonon state distribution.** (a) Experimental sequence used to generate the entangled coherent state and observe the modulation of its parity. (b) A representative plot of the phonon distribution of the Y mode with \(R=-2/3\) when the X mode is disentangled and (c) entangled. Orange bars are the theoretically expected phonon population for a cat state and blue bars are the population extracted by fitting the blue sideband Rabi oscillation, which is presented in the insets. Solid curves in the insets are fits to the blue sideband Rabi oscillation model. (d), (e) Evolution of parity and mean phonon numbers as functions of \(t_{SDF}\) for \(R=-2\) and \(R=-2/3\), respectively. In (d), the maximum magnitude of the displacement in the Y phase space is \(\left|\beta\right|=\sqrt{\pi_{Y}}\simeq 2\) and for the X phase space, \(\left|\alpha\right|=\sqrt{\pi_{X}}\simeq 0.7\). In (e), \(\left|\beta\right|\simeq 1.5\) and \(\left|\alpha\right|\simeq 1.0\) at maximum. The black line is a fit to the phonon number parity of the entangled coherent state. The solid red line is the mean phonon number in the Y mode derived from the Rabi frequency and temperatures of each mode obtained from the phonon number parity fitting. The dashed blue line is the mean phonon number of the X mode calculated the same way. All the error bars in this figure represent standard errors of fitted parameters. ## Discussion In this work, we have demonstrated the generation of entangled coherent states with two-dimensional motion of a trapped ion. Our scheme uses the near-degeneracy of the transverse modes of a linear Paul trap to excite the two motional modes of a single ion simultaneously with a detuned SDF, and does not require second-order interactions or two-phonon interactions as proposed in [21; 22], which is advantageous in terms of the strength of the interaction. We observed a periodic modulation in the single-mode phonon number parity which is a direct consequence of the entanglement between the two phonon modes. The loss of parity information is analogous to the decoherence of the spin state when only the spin state is directly measured in a spin-motion entangled system [17]. We have also shown that Molmer-Sorensen interaction with multiple ions can be easily realized with a lower laser power using a two-dimensional spin-dependent force.
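As a rough illustration of why sharing the geometric phase between two radial modes lowers the required drive strength, the sketch below evaluates the per-mode phase coefficient from the expression quoted above for a fixed target phase; the Lamb-Dicke factors and detunings are placeholder values, not the experimental parameters.

```python
import numpy as np

def phase_per_omega2(eta1, eta2, d, t):
    # Geometric phase accumulated per unit Omega_0^2 for one mode,
    # using the expression quoted in the text above.
    return eta1 * eta2 / (2 * d) ** 2 * (d * t - np.sin(d * t))

t_g = 182e-6                      # gate time quoted in the text
eta = 0.1                         # placeholder Lamb-Dicke factor
d_y = 2 * np.pi * 5.0e3           # placeholder detuning to the Y-tilt mode (rad/s)
d_x = 2 * np.pi * 5.0e3           # placeholder detuning to the X-cm mode (rad/s)

c_y = phase_per_omega2(eta, eta, d_y, t_g)
c_x = phase_per_omega2(eta, eta, d_x, t_g)

# For a fixed target phase Phi, Omega_0 = sqrt(Phi / c_total); when the two
# modes contribute equally, the required Rabi frequency drops by 1/sqrt(2).
print(np.sqrt(c_y / (c_x + c_y)))   # -> 1/sqrt(2) for equal contributions
```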
The size of the phase space displacements produced in our experiment is mainly limited by the large Rabi frequency required to generate a strong SDF in both X and Y modes. For a SDF with non-zero detuning, the maximum displacement is limited because the phase space trajectory forms a circle with radius proportional to the inverse of the detuning. Thus, the maximum displacement can be increased by making the trap more isotropic in the transverse directions. Alternatively, one can apply a SDF resonant to both the X and Y modes, which can be realized by a tetrachromatic laser beam [23]. In this case the size of the displacement will increase linearly with \(t_{SDF}\) and the coupling of the laser to each mode. Another limiting factor is the difficulty of characterizing motional states with large displacements via blue side-band Rabi oscillation. States with larger displacement magnitude are harder to probe because the coherence time of a cat state scales inversely with the square of magnitude of displacement [35]. The creation of an entangled coherent state with opposite phase, \(\ket{\downarrow}(\ket{\alpha}\ket{\beta}-\ket{-\alpha}\ket{-\beta})\), is possible with a \(\pi\)-pulse phase-locked to the SDF preceding the mid-circuit measurement [20]. Using our scheme, up to \(3N\) modes can be entangled for an N-ion chain when all the principal axes of the trapping potential are utilized. Especially, the generation of a tripartite entangled coherent state of the X, Y and Z modes, combined with a beam splitter interaction between the phonon modes [39; 40; 41], will enable the quantum teleportation protocol in Ref. [9] with a single trapped ion. There have been proposals and experiments of a Ramsey-type matter-wave rotation sensor [42; 43], Rabi-type sensor [44] and Rashba and Dresselhaus-type spin-orbit coupling for quantum simulation of topological insulators and Majorana fermions in which a single ion is coherently manipulated in two or more orthogonal spatial modes [25]. The coherent control of two-dimensional motion of a trapped ion demonstrated in this work can be applied to realize such experiments. Lastly, utilization of quantum motion in multiple axes for the realization of entangling gates can reduce the experimental overhead required to suppress interactions in multiple directions often employed in trapped ion quantum computing setups, such as asymmetric trap geometry [45] and trap RF voltage offset [46]. Figure 4: **Characterization of Molmer–Sørensen gate with two-dimensional motion.** (a) Normalized contributions of each mode for the geometric phase needed for the generation of the Bell state, \(1/\sqrt{2}(\ket{\downarrow\downarrow}+\ket{\uparrow\uparrow})\). \(d_{2}\) is the detuning from the \(X_{em}\) mode. At the detuning indicated by a vertical dashed line, the detuning ratio is \(R=-1/3\) and the X and Y mode contribute almost equally. (b) Time evolution of the spin states of a two-qubit system under two-dimensional Molmer–Søørensen interaction when \(R=-1/3\). The optimal gate time \(t_{g}=182\)\(\mu s\). Error bars represent quantum projection noise. (c) Qubit state population oscillation as a function of the phase of the \(\pi/2\)-pulse. Qubit state parity(not shown in the plot), \(\Pi(\phi)=P_{\uparrow\uparrow}(\phi)+P_{\downarrow\downarrow}(\phi)-(P_{ \downarrow\uparrow}(\phi)+P_{\uparrow\downarrow}(\phi))\), oscillates with an amplitude of \(\Pi_{a}=0.852\pm 0.007\). Error bars are the standard deviation calculated from five iterations of the same experiment. 
The average population of the even states at \(t_{g}\), \(P_{\uparrow\uparrow}+P_{\downarrow\downarrow}\), is \(0.942\pm 0.009\) as shown in the inset. The parity oscillation and even state population yield a gate fidelity of \(0.897\pm 0.006\). ## Methods ### Instruments The pulsed laser beams are provided by a 355-nm mode-locked laser (Coherent, Paladin Compact 355-4000). Its repetition rate fluctuates around 120.1 MHz due to the thermal and acoustic perturbations in the laser cavity. The repetition rate is monitored by an ultrafast photodetector (Alphalas, UPD-30-VSG-P). The drift is compensated by an AOM in the path of the \(R_{V}\) beam whose RF frequency is updated at a rate of 50 kHz by a field-programmable-gate-array (Digilent, Arty-S7) running a custom PID program [33]. The RF trap voltage is sampled by a capacitive divider and rectified by a diode circuit [29]. The rectified voltage is fed to a high-speed PID controller (New Focus, LB1005-S), which controls the output power of the RF source to stabilize the trap RF voltage [28]. ### Experimental protocol To drive the two-dimensional motion of a single trapped ion along a closed trajectory, we first optimize the Raman detuning to a frequency where the phase space trajectories for the X and Y axis can be closed simultaneously. The protocol is as follows: (1) The ion is ground-state cooled by sequentially applying sideband cooling pulses to the X and Y modes. (2) The spin state of the ion is initialized to \(\left|\downarrow\right\rangle\) via optical pumping. (3) With the probe time set to \(T=2\pi\times N/(\omega_{Y}-\omega_{X})\) where N is an integer, we scan the Raman detuning between \(\omega_{Y}\) and \(\omega_{X}\) and look for frequencies where the measured spin state is close to \(\left|\downarrow\right\rangle\), which indicates the simultaneous disentanglement of the spin from the motional states in the X and Y axes. (4) To balance the red and blue sideband transitions and calibrate out the differential Strak shift, the RF power and frequency for the transitions are fine-tuned individually to minimize the \(\left|\uparrow\right\rangle\) state population. In the entangled coherent state experiment, we limit the Rabi frequency of the blue sideband transition used to probe the phonon distribution of the Y mode to about 5 kHz, so as not to excite the blue sideband transition of the X mode. At this value, the expected maximum amplitude of the X mode blue sideband Rabi oscillation is 0.7%, thus we did not include the excitation of the X mode in the phonon distribution analysis. The relatively low frequency of the blue sideband Rabi oscillation means that even a small drift in the secular frequency of the trap can affect the phonon state estimation results. The trap RF power is stabilized by a PID loop, but it drifts slowly due to the temperature changes in the components of the PID circuit at a rate of 2 kHz/hr in the worst case. Thus, we interleave a blue sideband Ramsey spectroscopy experiment with the main experiment for every data point in 3(d) and (e), and monitored the change of secular frequency throughout data collection process. Data collection for each point in the figures takes about 5 minutes and for all the points in each plot about 2 hours. We stop the experiment if the secular frequency of the Y mode changed from the calibrated value by more than 300 Hz. We recalibrate the frequencies for the blue sideband transition and the spin-dependent force, and then resume the experiment. 
With a Rabi frequency of 5 kHz and detuning of 300 Hz, the amplitude of the blue sideband Rabi oscillation decreases by 0.4% and Rabi frequency increases by 0.9%, which are negligible for the purposes of our experiment. Also, we note that in the analysis of the blue sideband Rabi oscillation, we use the exact form of \(\Omega_{n+1,n}=\Omega_{0,0}\left\langle n+1\right|e^{i\eta_{Y}(\bar{a}_{Y}^{ \dagger}+\bar{a}_{Y})}\left|n\right\rangle=\Omega_{0,0}exp\left(-\eta_{Y}^{2}/ 2\right)\eta_{Y}/\sqrt{n+1}\;L_{n}^{1}(\eta_{Y}^{2})\) where \(\Omega_{0,0}\) is the carrier transition Rabi frequency with zero phonons, \(\eta\) is the Lamb-Dicke factor, and \(L_{n}^{1}\) is the generalized Laguerre polynomial of \(n\)-th order [47], since in our data the maximum value of \(\eta_{Y}\sqrt{2\bar{n}_{Y}+1}\) is about 0.33 where the Lamb-Dicke approximation becomes inaccurate. Also, to eliminate the possibility of the slow drift during the experiment affecting the observed pattern of parity modulation, we conducted the experiment in randomized orders of \(t_{SDF}\). The full randomized sequences of \(t_{SDF}\) used for the data sets in 3(d) and (e) are available in the Supplementary Material. ### Effect of finite temperature on phonon number parity The measured maximum value of parity shown in 3 does not reach unity because of imperfect sideband cooling, which in our setup typically results in \(\overline{n}_{X}\simeq 0.2\) and \(\overline{n}_{Y}\simeq 0.05\). This finite temperature effect is modelled by considering a mixed motional state with a population of \(p_{X,1}\) and \(p_{Y,1}\) in the first excited state of each mode and the rest in the motional ground states. We include the following three initial states in the model. (i) \(\left|1\right\rangle_{X}\left|0\right\rangle_{Y}\) with probability \(p_{X,1}\left(1-p_{Y,1}\right)\), (ii) \(\left|0\right\rangle_{X}\left|1\right\rangle_{Y}\) with probability \(\left(1-p_{X,1}\right)p_{Y,1}\), and (iii) \(\left|0\right\rangle_{X}\left|0\right\rangle_{Y}\) with probability \(\left(1-p_{X,1}\right)\left(1-p_{Y,1}\right)\). The \(\left|1\right\rangle_{X}\left|1\right\rangle_{Y}\) state is not considered since its probability is negligible. When the motional state is the \(n\)-th excited state, the effect of the displacement operator and the resulting phonon distribution can be calculated using number state representations of the displacement operator, \(d_{mn}^{\alpha}=\left\langle m\right|\bar{D}(\alpha)\left|n\right\rangle\). We employed the results of [48] to calculate \(d_{mn}^{\alpha}\). For (i), the modified phonon distribution of the Y mode is as follows \[p_{Y,n}\left(t\right)=\frac{1}{\left(1+e^{-2\left(\left|\alpha(t) \right|^{2}+\left|\beta(t)\right|^{2}\right)}\right)}e^{-\left|\beta(t)\right|^{ 2}}\frac{\left|\beta\left(t\right)\right|^{2n}}{n!}\] \[\times\left(1+(-1)^{n}\,\frac{\left(d_{11}^{-2\alpha}+d_{11}^{2 \alpha}\right)}{2}\right)\] For (ii), \[p_{Y,n}\left(t\right)=\frac{1}{2\left(1+e^{-2\left(\left|\alpha( t)\right|^{2}+\left|\beta(t)\right|^{2}\right)}\right)}\] \[\times\left(\left|d_{n1}^{\beta}\right|^{2}+\left|d_{n1}^{-\beta }\right|^{2}+\left(\left(d_{n1}^{\beta}\right)^{\ast}d_{n1}^{-\beta}+d_{n1}^{ \beta}(d_{n1}^{-\beta})^{\ast}\right)e^{-2\left|\alpha\right|^{2}})\] The weighted sum of the phonon distributions corresponding to the above three cases were used to fit the parity modulation data and extract the Rabi frequency, \(p_{X,1}\) and \(p_{Y,1}\). 
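Since the analysis above relies on the exact sideband coupling rather than the Lamb-Dicke approximation, a short numerical check of the quoted matrix element may be useful; the Lamb-Dicke factor below is a placeholder, not the calibrated experimental value.

```python
import numpy as np
from scipy.special import eval_genlaguerre

def omega_bsb(n, eta, omega00=1.0):
    # Exact first blue-sideband Rabi frequency (in units of the carrier Rabi
    # frequency Omega_{0,0}), using the generalized Laguerre polynomial L_n^1.
    return omega00 * np.exp(-eta ** 2 / 2) * eta / np.sqrt(n + 1) * eval_genlaguerre(n, 1, eta ** 2)

eta = 0.15   # placeholder Lamb-Dicke factor
for n in range(6):
    exact = omega_bsb(n, eta)
    lamb_dicke = eta * np.sqrt(n + 1)   # lowest-order (Lamb-Dicke) approximation
    print(n, round(exact, 4), round(lamb_dicke, 4))
```

The deviation between the two columns grows with \(n\), which is why the exact expression is used once \(\eta_{Y}\sqrt{2\bar{n}_{Y}+1}\) becomes non-negligible.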
## IV Data availability The data supporting the findings in this study are available from the corresponding author upon reasonable request. ## V Acknowledgments This work was supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant (No.2022-0-01040), the National Research Foundation of Korea (NRF) grant (No. 2020R1A2C3005689, No. 2022M3E4A1083521), and the Samsung Research Funding & Incubation Center of Samsung Electronics (No. SRFC-IT1901-09). ## VI Author contributions H.J. designed, carried out and analyzed the experiments. J.Y.K. carried out the experiments and implemented the experimental control system and the feed-forward system. H.J., J.Y.K., J.E.K., W.H.C., and K.K. contributed to the setup of the experimental apparatus. H.J. wrote and edited the manuscript. T.K. supervised the experiment. All authors participated in the discussion of the experimental results.
2302.02048
Energy Efficiency of MIMO Massive Unsourced Random Access with Finite Blocklength
This paper investigates the energy efficiency of massive unsourced random access~(URA) in multiple-input multiple-output quasi-static Rayleigh fading channels. Specifically, we derive achievability and converse bounds on the minimum required energy-per-bit under the per-user probability of error constraint, where the converse bounds contain two parts: one is general and the other is a weaker ensemble bound. Numerical evaluation shows that the gap between our achievability and converse bounds is less than $5$~dB in the considered regime. Some practical schemes are energy-inefficient compared with our bounds especially when there are many users. Moreover, we observe that in contrast to the sourced random access paradigm, the URA paradigm achieves higher spectral efficiency.
Junyuan Gao, Yongpeng Wu, Tianya Li, Wenjun Zhang
2023-02-04T01:11:18Z
http://arxiv.org/abs/2302.02048v1
# Energy Efficiency of MIMO Massive Unsourced Random Access with Finite Blocklength ###### Abstract This paper investigates the energy efficiency of massive unsourced random access (URA) in multiple-input multiple-output quasi-static Rayleigh fading channels. Specifically, we derive achievability and converse bounds on the minimum required energy-per-bit under the per-user probability of error constraint, where the converse bounds contain two parts: one is general and the other is a weaker ensemble bound. Numerical evaluation shows that the gap between our achievability and converse bounds is less than \(5\) dB in the considered regime. Some practical schemes are energy-inefficient compared with our bounds especially when there are many users. Moreover, we observe that in contrast to the sourced random access paradigm, the URA paradigm achieves higher spectral efficiency. Energy efficiency, finite blocklength regime, massive unsourced random access, MIMO channel. ## I Introduction As a typical use case in future wireless networks, massive machine-type communication has two distinct features different from traditional human-type communication [1]. First, there are a large number of users, while only a fraction of them are active at any given time. Second, active users transmit small data payloads to the base station (BS) with stringent latency and energy constraints. Massive random access technology has been proposed for this scenario, which includes sourced and unsourced random access (SRA and URA) paradigms. For SRA, the BS requires to identify active users and decode their messages. For URA, the BS is only interested in the transmitted messages but not users' identities. The URA paradigm was introduced in [2] and has attracted great attention, calling for new information-theoretic analysis. On this topic, finite-blocklength (FBL) bounds on the minimum required energy-per-bit were derived in [2] and [3] for URA in Gaussian and Rayleigh fading channels, respectively, under the per-user probability of error (PUPE) constraint and the assumption of knowing the number \(K_{a}\) of active users. Further, in [4], the result in [2] was extended to the setting with unknown \(K_{a}\). Notably, the above-mentioned FBL results are established for the setting with a single BS antenna. Indeed, the use of large antenna arrays has great benefits. Specifically, it was proved in [5] that with \(n\) channel uses and \(L\) BS antennas satisfying \(K_{a}/L=o(1)\), up to \(K_{a}\!=\!\mathcal{O}(n^{2})\) active users can be identified from \(K\!=\!\Theta(K_{a})\) potential users, but it reduces to \(K_{a}\!=\!\mathcal{O}(n)\) when there is a single BS antenna. In this paper, we investigate the energy efficiency of URA in multiple-input multiple-output (MIMO) quasi-static Rayleigh fading channels. Assuming all users share a common codebook, we derive achievability and converse bounds on the minimum required energy-per-bit under a PUPE constraint. Specifically, we utilize random coding and maximum likelihood (ML) decoding to derive the achievability bound, where Fano's "good region" technique [6] is applied since the error event is the union of many events. Our converse bounds contain two parts, namely the single-user bound and multi-user Fano-type bound. The former is general and the latter is limited to Gaussian codebooks. Numerical results verify the tightness of our bounds and indicate their importance to benchmark practical schemes. 
Some schemes are shown to be suboptimal, especially in the large-\(K_{a}\) regime. Moreover, in contrast to SRA, the URA paradigm achieves higher spectral efficiency. _Notation:_ Throughout this paper, uppercase and lowercase boldface letters denote matrices and vectors, respectively. We use \([\mathbf{x}]_{m}\) to denote the \(m\)-th element of \(\mathbf{x}\) and \([\mathbf{X}]_{m,n}\) to denote the \((m,n)\)-th element of \(\mathbf{X}\). We use \((\cdot)^{T}\), \((\cdot)^{H}\), \(|\mathbf{X}|\), \(\left\|\mathbf{x}\right\|_{p}\), and \(\left\|\mathbf{X}\right\|_{F}\) to denote transpose, conjugate transpose, determinant, \(\ell_{p}\)-norm, and Frobenius norm, respectively. Let \(\operatorname{diag}\left\{\mathbf{x}\right\}\) denote a diagonal matrix with \(\mathbf{x}\) comprising its diagonal elements and \(\operatorname{diag}\left\{\mathbf{A},\mathbf{B}\right\}\) denote a block diagonal matrix. We use \(\cdot\backslash\cdot\) to denote set subtraction and \(|\mathcal{A}|\) to denote the cardinality of a set \(\mathcal{A}\). For an integer \(k>0\), we denote \([k]=\{1,\ldots,k\}\). We use \(\mathcal{C}\mathcal{N}(\cdot,\cdot)\) and \(\chi^{2}(\cdot)\) to denote the circularly symmetric complex Gaussian distribution and central chi-square distribution, respectively. We use \(\gamma\left(\cdot,\cdot\right)\) and \(\Gamma\left(\cdot\right)\) to denote the lower incomplete gamma function and gamma function, respectively. The complement of event \(\mathcal{G}\) is denoted as \(\mathcal{G}^{c}\). For \(0\leq p\leq 1\), we denote \(h_{2}(p)=-p\log_{2}p-(1-p)\log_{2}(1-p)\). ## II System Model We consider an uplink single-cell system consisting of a BS equipped with \(L\) antennas and \(K\) single-antenna users, where only \(K_{a}\) users are active due to sporadic traffic. The active user set is denoted as \(\mathcal{K}_{a}\). Each active user has a message of \(J=\log_{2}M\) bits to transmit with \(n\) channel uses. We consider a quasi-static Rayleigh fading channel model. The \(l\)-th antenna of the BS observes \(\mathbf{y}_{l}\) given by \[\mathbf{y}_{l}=\sum\nolimits_{k\in\mathcal{K}_{a}}h_{k,l}\mathbf{x}_{W_{k}}+\mathbf{z}_{l}\in\mathbb{C}^{n}, \tag{1}\] where \(h_{k,l}\overset{\text{i.i.d.}}{\sim}\mathcal{CN}(0,1)\) denotes the fading coefficient between the \(k\)-th user and the \(l\)-th BS antenna; the noise vector is \(\mathbf{z}_{l}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{n})\); \(W_{k}\) denotes the message of active user \(k\), which is chosen uniformly at random from \([M]\); the transmitted codeword of active user \(k\) is denoted as \(\mathbf{x}_{W_{k}}\). Here, we assume all users share a common codebook, and the matrix \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{M}]\in\mathbb{C}^{n\times M}\) is obtained by concatenating all codewords. The received signal \(\mathbf{Y}=[\mathbf{y}_{1},\ldots,\mathbf{y}_{L}]\) is given by \[\mathbf{Y}=\mathbf{X}\mathbf{\Phi}\mathbf{H}+\mathbf{Z}\in\mathbb{C}^{n\times L}, \tag{2}\] where the binary selection matrix \(\mathbf{\Phi}\!\in\!\{0,1\}^{M\times K}\) satisfies that \(\left[\mathbf{\Phi}\right]_{W_{k},k}=1\) if the \(k\)-th user is active and transmits the \(W_{k}\)-th codeword, and \(\left[\mathbf{\Phi}\right]_{W_{k},k}=0\) otherwise; \(\mathbf{H}=[\mathbf{h}_{1},\ldots,\mathbf{h}_{L}]\in\mathbb{C}^{K\times L}\) with \(\mathbf{h}_{l}=\left[h_{1,l},\ldots,h_{K,l}\right]^{T}\in\mathbb{C}^{K}\) for \(l\in[L]\).
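The signal model of Eqs. (1)–(2) is straightforward to simulate. The sketch below draws a Gaussian codebook purely for illustration (the codewords in the paper satisfy a maximum power constraint rather than being Gaussian), and the toy dimensions are placeholders much smaller than the settings evaluated later.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, K, Ka, L, P = 100, 2 ** 10, 500, 20, 8, 0.05   # toy sizes, not the paper's

# i.i.d. CN(0, P) codebook entries (illustrative stand-in for the common codebook)
X = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) * np.sqrt(P / 2)

active = rng.choice(K, size=Ka, replace=False)       # sporadically active users
W = rng.integers(M, size=Ka)                          # messages drawn uniformly (collisions allowed)
Phi = np.zeros((M, K))
Phi[W, active] = 1.0                                  # binary selection matrix of Eq. (2)

H = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)  # Rayleigh fading
Z = (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))) / np.sqrt(2)  # CN(0, 1) noise

Y = X @ Phi @ H + Z                                   # received signal, Eq. (2)
```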
We assume neither the BS nor users know the instantaneous channel state information (CSI) in advance, but they both know the distribution. As in [2], \(K_{a}\) is assumed to be known to the BS in this work due to space constraints, and our results can be extended to the scenario without known \(K_{a}\) applying similar ideas in [4]. Next, we introduce the notion of the URA code: **Definition 1** ([2]): _Let \(\mathcal{X}\), \(\mathcal{H}_{k}\), and \(\mathcal{Y}\) denote the input alphabet of active users, the channel fading coefficient alphabet of active user \(k\), and the output alphabet for the channel (2), respectively. An \((n,M,\epsilon,P)\) massive URA code consists of_ 1. _An encoder_ \(f_{\text{en}}:[M]\mapsto\mathcal{X}\) _that maps the message_ \(W_{k}\) _to a codeword_ \(\mathbf{x}_{W_{k}}\in\mathcal{X}\) _for_ \(k\in\mathcal{K}_{a}\)_, where_ \(W_{k}\) _is chosen independently and uniformly from_ \([M]\) _for_ \(k\in\mathcal{K}_{a}\)_. The codewords in_ \(\mathcal{X}\) _satisfy the maximum power constraint_ \[\left\|\mathbf{x}_{m}\right\|_{2}^{2}\leq nP,\quad m\in[M].\] (3) 2. _A decoder_ \(g_{\text{de}}:\mathcal{Y}\!\mapsto\!\binom{[M]}{K_{a}}\) _satisfying the PUPE constraint_ \[P_{e}=\frac{1}{K_{a}}\!\!\sum\nolimits_{k\in\mathcal{K}_{a}} \!\!\mathbb{P}\left[\left\{W_{k}\notin\hat{\mathcal{W}}\right\}\right.\] \[\left.\cup\left\{W_{k}=W_{i}\text{ for some }i\neq k\right\} \right]\leq\epsilon.\] (4) _Here,_ \(\hat{\mathcal{W}}\) _denotes the set of decoded messages of size_ \(K_{a}\)_._ Let \(S_{e}\!=\!\frac{K_{a}J}{n}\) denote the spectral efficiency and \(E_{b}\!=\!\frac{nP}{J}\) denote the energy-per-bit. The minimum required energy-per-bit is denoted as \(E_{b}^{*}(n,M,\epsilon)=\inf\left\{E_{b}:\exists(n,M,\epsilon,P)\text{ code }\right\}\). ## III Main results An achievability bound on the minimum required energy-per-bit for URA in MIMO channels is given in Theorem 1. **Theorem 1**: _The minimum required energy-per-bit for the URA model described in Section II can be upper-bounded as_ \[E_{b}^{*}(n,M,\epsilon)\leq\inf\frac{nP}{J}. \tag{5}\] _Here, the \(\inf\) is taken over all \(P>0\) satisfying that_ \[\epsilon\geq\min_{0<P^{\prime}<P}\left\{p_{0}+\sum\nolimits_{t=1}^{K_{a}} \frac{t}{K_{a}}\min\left\{1,p_{t}\right\}\right\}, \tag{6}\] _where_ \[p_{0}=\frac{\binom{K_{a}}{2}}{M}+K_{a}\left(1-\frac{\gamma\left(n,\frac{nP}{P ^{\prime}}\right)}{\Gamma\left(n\right)}\right), \tag{7}\] \[p_{t}=\min_{0\leq\omega\leq 1,0\leq\nu}\left\{q_{1,t}\left(\omega,\nu\right)+q_{2, t}\left(\omega,\nu\right)\right\}, \tag{8}\] \[q_{1,t}(\omega,\nu)\] \[=\!\binom{K_{a}}{t}\!\!\left(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
where \(\hat{\mathbf{\Gamma}}=\operatorname{diag}\left\{\hat{\mathbf{\gamma}}\right\}\in\{0,1\}^{M \times M}\) with \([\hat{\mathbf{\gamma}}]_{i}=1\) if \(\mathbf{c}_{i}\in\hat{\mathcal{C}}_{\mathcal{K}_{a}}\) and the log-likelihood cost function \(g\left(\hat{\mathbf{\Gamma}}\right)\) is given by \[g(\hat{\mathbf{\Gamma}})\!=\!L\ln\left|\mathbf{I}_{n}\!+\!\mathbf{C} \hat{\mathbf{\Gamma}}\mathbf{C}^{H}\right|\!+\!\operatorname{tr}\!\left(\mathbf{Y }^{H}\!\left(\mathbf{I}_{n}\!+\!\mathbf{C}\hat{\mathbf{\Gamma}}\mathbf{C}^{H} \right)^{-1}\!\mathbf{Y}\right)\!. \tag{19}\] Let \(S_{1}\!\subset\!S_{\mathcal{K}_{a}}\) denote the set of misdecoded messages. Let \(S_{2}\!\subset\![M]\!\setminus\!S_{\mathcal{K}_{a}}\) denote the set of false-alarm messages. We rewrite "\(\cup_{S_{1}\subset S_{\mathcal{K}_{a}},|S_{1}|=t}\)" to "\(\cup_{S_{1}}\)" and "\(\cup_{S_{2}\subset[M]\setminus S_{\mathcal{K}_{a}},|S_{2}|=t}\)" to "\(\cup_{S_{2}}\)" for brevity; and similarly for \(\sum\)and \(\cap\). Then, we have \[\mathbb{P}\left[\mathcal{F}_{t}\right]\!\leq\!\mathbb{P}\left[ \mathcal{G}_{e}\right]\!\leq\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! where \(r\) is the solution of \[\mathbb{P}\left[\chi^{2}(2L)\leq r\right]=\epsilon. \tag{34}\] In [9, Theorem 3], a converse bound was provided for the single-user case in MIMO channels with unknown CSI. In this part, we assume both the transmitted codewords of \(K_{a}-1\) active users and their CSI are known to derive the converse bound, which can be obtained based on [9, Theorem 3] with the following changes: 1) we choose the auxiliary distribution \(\prod_{l=1}^{L}\mathcal{CN}(0,\mathbf{I}_{n+1})\) for simplicity; 2) we change \(J\) in [9, Theorem 3] to \(J-\log_{2}K_{a}\) and the reasons are as follows. Since all users share a common codebook, there may exist message collisions; thus, the number \(B\) of different messages among the known transmitted messages of \(K_{a}-1\) active users satisfies \(1\leq B\leq K_{a}-1\). Therefore, the decoder aims to output another \(K_{a}-B\) possible messages to recover the message transmitted by the remaining active user. To derive a converse bound, we loosen the list size from \(K_{a}-B\) to \(K_{a}\) and the PUPE becomes \(\mathbb{P}[W_{1}\notin\hat{\mathcal{W}}]\). Using the meta-converse variation for list decoding [3, Theorem 4.1], we modify the result in [9, Theorem 3] by changing \(J\) to \(\log_{2}M/K_{a}\). Theorem 2 holds for all codes but is derived based on the knowledge of the transmitted messages of \(K_{a}-1\) active users and their CSI. 
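For the single-user ingredient of this converse, the threshold \(r\) defined implicitly by Eq. (34) is simply a quantile of the central chi-square distribution and can be obtained numerically, for example:

```python
from scipy.stats import chi2

L_antennas, eps = 50, 0.1
r = chi2.ppf(eps, df=2 * L_antennas)   # solves P[chi-square(2L) <= r] = eps, cf. Eq. (34)
print(r)
```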
It can be loose when \(K_{a}\) is large because multi-user interference (MUI) is a significant bottleneck and CSI is difficult to obtain in this case. In contrast, Theorem 3 considers the case with unknown CSI and unknown transmitted messages. Since it is tricky to analyse, we make a stronger assumption of Gaussian codebook, which reduces the converse result to a weaker ensemble converse and raises an open question about whether a tight converse bound can be derived under more general assumptions on the codebook. **Theorem 3**: _For the URA model described in Section II, assuming that the codebook has i.i.d. Gaussian entries with \(M>2K_{a}\) and \(\epsilon\leq 1-K_{a}/M\), the minimum required energy-per-bit can be lower-bounded as_ \[E_{b}^{*}(n,M,\epsilon)\geq\inf\frac{nP}{J}. \tag{35}\] _Here, the \(\inf\) is taken over all \(P>0\) satisfying that_ \[nL\log_{2}(1+K_{a}P)-Lp_{no}\mathbb{E}\!\left[\log_{2}\left| \mathbf{I}_{n}+\mathbf{X}_{K_{a}}\mathbf{X}_{K_{a}}^{H}\right|\right]\] \[\geq(1-\epsilon)\,K_{a}\left(J-\log_{2}K_{a}\right)-K_{a}h_{2} \left(\epsilon\right), \tag{36}\] _where \(p_{no}=1-\binom{K_{a}}{2}/M\) and \(\mathbf{X}_{K_{a}}\in\mathbb{C}^{n\times K_{a}}\) has i.i.d. \(\mathcal{CN}(0,P)\) entries._ We assume w.l.o.g. that the active user set is \([K_{a}]\). Let \(\mathbf{X}\in\mathbb{C}^{n\times M}\) be a codebook matrix. Denote \(\bar{\mathbf{X}}_{K_{a}M}=[\mathbf{X},\ldots,\mathbf{X}]\!\in\!\mathbb{C}^{n \times K_{a}M}\) and \(\bar{\mathbf{X}}\!=\!\mathrm{diag}\{\bar{\mathbf{X}}_{K_{a}M},\ldots,\bar{ \mathbf{X}}_{K_{a}M}\}\in\mathbb{C}^{nL\times K_{a}ML}\). Let \(\bar{\mathbf{H}}_{l}\) be a \(K_{a}M\times K_{a}M\) block diagonal matrix, whose block \(k\) is a diagonal \(M\!\times\!M\) matrix with diagonal entries equal to \(h_{k,l}\). Let \(\bar{\mathbf{H}}=\left[\bar{\mathbf{H}}_{1},\ldots,\bar{\mathbf{H}}_{L}\right]^ {T}\). The vector \(\bar{\boldsymbol{\beta}}\!\in\!\{0,1\}^{K_{a}M}\) has \(K_{a}\) blocks, whose block \(k\) denoted as \(\bar{\boldsymbol{\beta}}_{k}\) is of size \(M\) and includes one \(1\). Then, we have \[\bar{\mathbf{y}}=\bar{\mathbf{X}}\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}+ \bar{\mathbf{z}}\in\mathbb{C}^{nL\times 1}, \tag{37}\] where \(\bar{\mathbf{z}}\in\mathbb{C}^{nL\times 1}\) with each entry i.i.d. from \(\mathcal{CN}(0,1)\). Let \(P_{e,k}^{de}=\mathbb{P}[W_{k}\!\notin\!\hat{\mathcal{W}}]\) and \(P_{e}^{de}\!=\!\frac{1}{K_{a}}\sum_{k\in[K_{a}]}\!P_{e,k}^{de}\). Since \(P_{e}^{de}\!\leq\!P_{e}\), a converse bound based on the constraint \(P_{e}^{de}\!\leq\!\epsilon\) is also converse for the constraint (4). Applying [3, Eq. (61) and Eq. (63)] (where Eq. (61) follows from Fano's inequality [7]) and allowing \(S_{2}\) therein to be \([K_{a}]\) with \(M>2K_{a}\), we have \[\log_{2}\!\frac{M}{K_{a}}\!-\!h_{2}\!\left(P_{e}^{de}\right)-P_{e}^{de}\log_{ 2}\!\left(\!\frac{M}{K_{a}}\!-\!1\!\right)\!\leq\!\frac{I\!\left(\mathcal{W} _{[K_{a}]};\hat{\mathcal{W}}\left|\bar{\mathbf{X}}\right.\right)}{K_{a}}. \tag{38}\] Assuming \(P_{e}^{de}\leq\epsilon\leq 1-K_{a}/M\), we have \[P_{e}^{de}\log_{2}\!\left(\!\frac{M}{K_{a}}-1\right)+h_{2}\left(P_{e}^{de} \right)\leq\epsilon\log_{2}\!\frac{M}{K_{a}}+h_{2}\left(\epsilon\right). \tag{39}\] The mutual information in (38) is bounded as [10, Eq. (149)] \[I\left(\mathcal{W}_{[K_{a}]};\hat{\mathcal{W}}\left|\bar{\mathbf{X}}\right. 
\right)\leq I\!\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\bar{\mathbf{y}} \right|\bar{\mathbf{X}}\right)-I\!\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}}; \bar{\mathbf{y}}\left|\bar{\boldsymbol{\beta}},\bar{\mathbf{X}}\right.\right). \tag{40}\] Then, we can obtain \[I\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\bar{\mathbf{y}} \left|\bar{\mathbf{X}}\right.\right) \leq L\mathbb{E}\left[\log_{2}\left|\mathbf{I}_{n}+\frac{1}{M} \bar{\mathbf{X}}_{K_{a}M}\bar{\mathbf{X}}_{K_{a}M}^{H}\right|\right] \tag{41}\] \[\leq nL\log_{2}\left(1+K_{a}P\right), \tag{42}\] where (41) follows from the inequality in [10, Eq. (151)] and (42) follows from Jensen's inequality assuming that the codebook has i.i.d. entries with mean \(0\) and variance \(P\). The term \(I\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\bar{\mathbf{y}}\right|\bar{ \boldsymbol{\beta}},\bar{\mathbf{X}}\right)\) can be lower-bounded as follows: \[I\!\left(\bar{\mathbf{H}}\bar{\boldsymbol{\beta}};\bar{\mathbf{y}} \left|\bar{\boldsymbol{\beta}},\bar{\mathbf{X}}\right) \geq L\mathbb{E}\!\left[\log_{2}\left|\mathbf{I}_{n}+\mathbf{X} \sum\nolimits_{k=1}^{K_{a}}\mathrm{diag}(\bar{\boldsymbol{\beta}}_{k}) \mathbf{X}^{H}\right|\right] \tag{43}\] \[\geq p_{no}\,L\,\mathbb{E}\left[\log_{2}\left|\mathbf{I}_{n}+ \mathbf{X}_{K_{a}}\mathbf{X}_{K_{a}}^{H}\right|\right], \tag{44}\] where \(p_{no}\) denotes the probability of no collision and \(\mathbf{X}_{K_{a}}\) includes codewords transmitted by \(K_{a}\) active users without collision. Here, (44) holds because \(\mathbb{E}\left[f\left(\mathbf{X},\bar{\boldsymbol{\beta}}\right)\right]=p_{no} \mathbb{E}\left[f_{\text{no-col}}\left(\mathbf{X},\bar{\boldsymbol{\beta}} \right)\right]+\left(1-p_{no}\right)\mathbb{E}\left[f_{\text{col}}\left(\mathbf{X},\bar{\boldsymbol{\beta}}\right)\right]\), where the expectation of \(f_{\text{no-col}}\left(\mathbf{X},\bar{\boldsymbol{\beta}}\right)=\log_{2} \left|\mathbf{I}_{n}+\mathbf{X}_{K_{a}}\mathbf{X}_{K_{a}}^{H}\right|\) can be evaluated under the assumption of a Gaussian codebook. This completes the proof of Theorem 3. ## IV Numerical Results In this section, we provide numerical evaluation of the derived bounds. In Fig. 1, we consider the scenario with \(n=3200\), \(J=100\) bits, \(L=50\), and \(\epsilon\in\{0.025,0.1\}\). In this case, we compare our achievability and converse bounds, as well as the schemes proposed in [11, 12, 5, 13], in terms of the minimum required energy-per-bit for different numbers of active users. Next, we explain how each curve is obtained: 1. For the achievability bound in Theorem 1, we generate \(10000\) samples to evaluate the expectations therein using the Monte Carlo method. 2. The converse bounds in Theorem 2 and Theorem 3 are plotted as separate curves. The expectations therein are evaluated by the Monte Carlo method using \(500\) samples. 3. To evaluate the covariance-based scheme proposed in [5], we take the error probability to be the average of the misdetection and false-alarm probabilities, i.e. \(P_{e}=\left(p^{md}+p^{fa}\right)/2\), and plot the minimum required energy-per-bit to satisfy \(P_{e}\leq\epsilon\). 4. The pilot-based scheme is evaluated as in [11, Fig. 7]. The data is split into two parts with \(16\) bits and \(84\) bits, respectively. The first part is coded as a "pilot" of length \(1152\); the second one is coded by a polar code of length \(2048\). The error requirement is the same as that for the covariance-based scheme. 5. The FASURA scheme is evaluated as in [12, Fig. 4]. Similar to the pilot-based scheme in [11], it divides messages into two parts.
Departures from [11] include the use of spreading sequences, the detection of active sequences, and channel/symbol estimation techniques. 6. The tensor-based scheme proposed in [13] is evaluated with tensor signature \((8,5,5,4,4)\), an outer BCH code, and a higher error requirement \(\epsilon=0.1\). As anticipated, we can observe from Fig. 1 that the converse bound in Theorem 2 dominates in the small-\(K_{a}\) regime and the weaker ensemble bound in Theorem 3 dominates otherwise. Numerical results verify the tightness of our bounds, with the gap between the achievability and (dominating) converse bounds less than \(5\) dB in the considered regime. In MIMO channels, the required \(E_{b}\) is almost constant when \(K_{a}\) is small, in line with the almost perfect MUI cancellation in the single-receive-antenna setting [2, 3]. Moreover, our bounds provide theoretical benchmarks to evaluate practical schemes. Specifically, among the schemes proposed in [5, 11, 12, 13], the FASURA scheme in [12] performs the best, but it still exhibits a large gap to our bounds in the large-\(K_{a}\) regime. How to reduce this gap is an interesting topic for future work. In Fig. 2, we compare the achievability bound (in Theorem 1) and the converse bound (in Theorem 3) for URA in terms of the maximum spectral efficiency per antenna for different numbers of BS antennas with \(n=1000\), \(J=100\) bits, \(E_{b}=16\) dB, and \(\epsilon=0.1\). Since both \(n\) and \(J\) are fixed, Fig. 2 indicates the bounds on the number of reliably served active users versus the number of BS antennas. We also show the theoretical bounds for SRA provided in [14]. Compared with SRA, the URA paradigm achieves higher spectral efficiency because all users share a common codebook and the search space to find the transmitted messages is reduced in this case. For both SRA and URA paradigms, the total spectral efficiency \(S_{e}\) increases with \(L\), whereas the spectral efficiency per antenna \(S_{e}/L\) gradually reduces due to the increased channel uncertainty. ## V Conclusion In this paper, we considered the massive URA problem in MIMO quasi-static Rayleigh fading channels with stringent latency and energy constraints. Specifically, an achievability bound, a general converse bound, and a weaker ensemble converse bound were derived under a PUPE constraint. Numerical evaluation verified the tightness of our results and indicated that some existing schemes exhibit a large gap to our theoretical bounds, especially when there are many users. Moreover, we observed that compared with the SRA paradigm, the URA paradigm achieves higher spectral efficiency.
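To illustrate how the ensemble converse condition (36) can be checked numerically, the sketch below estimates the log-determinant expectation by Monte Carlo (using Sylvester's identity to work with a \(K_{a}\times K_{a}\) matrix instead of an \(n\times n\) one) and compares the two sides of (36); the parameters, grid, and trial count are placeholders, not the exact settings used for the figures.

```python
import numpy as np

def h2(p):
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def converse_gap(P, n, J, Ka, L, M, eps, trials=200, seed=0):
    """LHS minus RHS of condition (36); a nonnegative value means this P
    satisfies the ensemble converse condition of Theorem 3."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(trials):
        X = (rng.standard_normal((n, Ka)) + 1j * rng.standard_normal((n, Ka))) * np.sqrt(P / 2)
        # |I_n + X X^H| = |I_Ka + X^H X| by Sylvester's determinant identity
        G = np.eye(Ka) + X.conj().T @ X
        acc += np.linalg.slogdet(G)[1] / np.log(2)
    e_logdet = acc / trials
    p_no = 1 - Ka * (Ka - 1) / 2 / M
    lhs = n * L * np.log2(1 + Ka * P) - L * p_no * e_logdet
    rhs = (1 - eps) * Ka * (J - np.log2(Ka)) - Ka * h2(eps)
    return lhs - rhs

# Toy sweep: report the smallest grid point P satisfying (36) and its energy-per-bit.
n, J, Ka, L, M, eps = 400, 20, 10, 8, 2 ** 20, 0.1
for P in np.logspace(-4, -1, 13):
    if converse_gap(P, n, J, Ka, L, M, eps) >= 0:
        print("feasible P:", P, " E_b (linear):", n * P / J)
        break
```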
2303.02407
Local Path Planning among Pushable Objects based on Reinforcement Learning
In this paper, we introduce a method to deal with the problem of robot local path planning among pushable objects -- an open problem in robotics. In particular, we achieve that by training multiple agents simultaneously in a physics-based simulation environment, utilizing an Advantage Actor-Critic algorithm coupled with a deep neural network. The developed online policy enables these agents to push obstacles in ways that are not limited to axial alignments, adapt to unforeseen changes in obstacle dynamics instantaneously, and effectively tackle local path planning in confined areas. We tested the method in various simulated environments to prove the adaptation effectiveness to various unseen scenarios in unfamiliar settings. Moreover, we have successfully applied this policy on an actual quadruped robot, confirming its capability to handle the unpredictability and noise associated with real-world sensors and the inherent uncertainties present in unexplored object pushing tasks.
Linghong Yao, Valerio Modugno, Andromachi Maria Delfaki, Yuanchang Liu, Danail Stoyanov, Dimitrios Kanoulas
2023-03-04T12:56:15Z
http://arxiv.org/abs/2303.02407v3
# Local Navigation Among Movable Obstacles ###### Abstract Autonomous robots would benefit a lot by gaining the ability to manipulate their environment to solve path planning tasks, known as the Navigation Among Movable Obstacle (NAMO) problem. In this paper, we present a deep reinforcement learning approach for solving NAMO locally, near narrow passages. We train parallel agents in physics simulation using an Advantage Actor-Critic based algorithm with a multi-modal neural network. We present an online policy that is able to push obstacles in a non-axial-aligned fashion, react to unexpected obstacle dynamics in real-time, and solve the local NAMO problem. Experimental validation in simulation shows that the presented approach generalises to unseen NAMO problems in unknown environments. We further demonstrate the implementation of the policy on a real quadrupedal robot, showing that the policy can deal with real-world sensor noises and uncertainties in unseen NAMO tasks. ## I Introduction Studied extensively by Mike Stilman [1], the Navigation Among Movable Obstacle (NAMO) problem tackles planning problems where obstacles can be manipulated via pushing, pulling, or lifting to aid a robot's navigation in cluttered environments. Just like how humans intuitively nudge and move furniture out of their way when walking through a tightly packed room, robots can also become much more useful if they can manipulate their surroundings to help them reach their target location. Practical use cases for NAMO solutions include robots that perform routine maintenance of factories where idle containers and boxes may obstruct doorways; personal service robots in households where rooms are packed with furniture and items; and industrial inspection robots in underground caves and tunnels where rocks and debris may block the walkway. In such environments, the ability to reliably manipulate obstacles will significantly increase the efficiency of autonomous robots. However, even simplified versions of the NAMO problem have been proven to be NP-hard [2, 3]. Past literature has tackled the NAMO problem using algorithms based on iterative and recursive methods [4, 5, 6], but usually with simplifications to the problem setting, such as prior knowledge of the environment, offline planning, and axial-aligned manipulation. Only a few studies have tackled the online NAMO problem and considered the implications of sensor errors and unexpected object dynamics [7, 8]. Furthermore, methods utilizing recursive and iterative search-based techniques usually result in a computational complexity that is exponential to the number of local obstacles [5], and the search time in practice can sometimes be too long when many obstacles are present. In this paper, we aim to address some of these shortcomings using a deep reinforcement learning approach (Fig. 1). By utilizing a neural network for policy-based RL algorithms, we develop an online agent that can solve unseen NAMO tasks without the constraint of axis-aligned pushing, and is able to handle uncertainties in sensor inputs and obstacle dynamics. Specifically, we focus on solving key-hole problems1 near one or two narrow passages. We also constrain the problem to use only pushing actions, as this is the most universal manipulation method amongst mobile robots - pulling and lifting obstacles might need extra robotic equipment such as arms. 
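For readers unfamiliar with the learning machinery referenced above, the following is a generic sketch of an advantage actor-critic update in PyTorch. The network sizes, the discrete-action interface, and the loss coefficients are placeholders; this does not reproduce the multi-modal architecture or the parallel training setup used in this work.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Illustrative two-head policy/value network (placeholder architecture)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_actions)   # policy logits
        self.v = nn.Linear(hidden, 1)            # state-value estimate

    def forward(self, obs):
        z = self.body(obs)
        return self.pi(z), self.v(z).squeeze(-1)

def a2c_loss(model, obs, actions, returns, value_coef=0.5, entropy_coef=0.01):
    """Standard A2C objective: policy gradient weighted by the advantage,
    plus a value-regression term and an entropy bonus."""
    logits, values = model(obs)
    dist = torch.distributions.Categorical(logits=logits)
    advantage = (returns - values).detach()
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    value_loss = (returns - values).pow(2).mean()
    entropy = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```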
Our proposed method is suitable to be implemented within workflows where global path planning algorithms such as A* [9] would be unable to resolve local problems that require NAMO solutions. Therefore, we formulate the environment in a small region near narrow passages, with the objective of the robot reaching from one disjointed space to the other via obstacle manipulation. Such Fig. 1: The environment observation is obtained via visual sensors and preprocessed into the desired state format. The input is fed into a trained policy network that computes a high-level action output for the robot to follow in the environment, in order to solve the local NAMO task.
2305.11093
Noise-adapted recovery circuits for quantum error correction
Implementing quantum error correction (QEC) protocols is a challenging task in today's era of noisy intermediate-scale quantum devices. We present quantum circuits for a universal, noise-adapted recovery map, often referred to as the Petz map, which is known to achieve close-to-optimal fidelity for arbitrary codes and noise channels. While two of our circuit constructions draw upon algebraic techniques such as isometric extension and block encoding, the third approach breaks down the recovery map into a sequence of two-outcome POVMs. In each of the three cases we improve upon the resource requirements that currently exist in the literature. Apart from Petz recovery circuits, we also present circuits that can directly estimate the fidelity between the encoded state and the recovered state. As a concrete example of our circuit constructions, we implement Petz recovery circuits corresponding to the $4$-qubit QEC code tailored to protect against amplitude-damping noise. The efficacy of our noise-adapted recovery circuits is then demonstrated through ideal and noisy simulations.
Debjyoti Biswas, Gaurav M. Vaidya, Prabha Mandayam
2023-05-18T16:29:49Z
http://arxiv.org/abs/2305.11093v2
# Noise-adapted recovery circuits for quantum error correction ###### Abstract Implementing quantum error correction (QEC) protocols is a challenging task in today's era of noisy intermediate-scale quantum devices. We present quantum circuits for a universal, noise-adapted recovery map, often referred to as the Petz map, which is known to achieve close-to-optimal fidelity for arbitrary codes and noise channels. While two of our circuit constructions draw upon algebraic techniques such as isometric extension and block encoding, the third approach breaks down the recovery map into a sequence of two-outcome POVMs. In each of the three cases we improve upon the resource requirements that currently exist in the literature. Apart from Petz recovery circuits, we also present circuits that can directly estimate the fidelity between the encoded state and the recovered state. As a concrete example of our circuit constructions, we implement Petz recovery circuits corresponding to the 4-qubit QEC code tailored to protect against amplitude-damping noise. The efficacy of our noise-adapted recovery circuits is then demonstrated through ideal and noisy simulations on the IBM quantum processors. ## I Introduction Quantum states are inherently fragile and error-prone, rendering the current generation of quantum processors extremely noisy [1]. Quantum error correction (QEC) [2] provides a means by which quantum information maybe protected from noise, paving the way for robust and scaleable quantum processors. The standard approach to QEC which relies on general purpose codes that correct for _arbitrary_ single-qubit noise is however rather resource-intensive. The smallest surface codes that correct for arbitrary single-qubit errors, for example, require at least 13 physical qubits for a single round of QEC [3]. Noise-adapted QEC, on the other hand, aims to identify the dominant noise affecting the quantum system and tailor the quantum code and recovery to correct for that specific noise model. It has been shown [4; 5; 6] that such noise-adapted QEC schemes often require fewer resources while achieving comparable levels of error mitigation as general-purpose QEC. While there have been several analytical and numerical studies to identify noise-adapted quantum codes and adaptive recovery maps for specific noise models (see [7] for a recent review), proposals to implement such noise-adapted QEC schemes have been few and far in between. One key challenge in realising noise-adapted QEC is that of implementing the recovery map associated with such schemes. Since standard QEC works by breaking down arbitrary noise in terms of Pauli errors, the recovery operation that follows syndrome detection is simply a Pauli frame change [8]. Adaptive QEC, on the other hand, does not view the noise in terms of a Pauli error basis and hence typically requires non-trivial recovery circuits corresponding to different error syndromes [9; 10]. Constructing such non-Pauli recovery circuits is often hard - indeed, there are very few explicit adaptive recovery schemes discussed in the literature, even for the well-studied case of amplitude-damping noise. An important universal prescription for noise-adapted recovery is the so-called _Petz map_[11; 12]. This is a completely positive trace-preserving (CPTP) map which has served as an important analytical tool in the context of approximate QEC [13; 5] and noise-adapted QEC [6]. 
Variants of the Petz map also play an important role in quantum Shannon theory [14] and more recently, in the context of operator reconstruction in the AdS/CFT correspondence [15]. A quantum algorithm to implement the Petz map was proposed recently [16], providing for the first time, a systematic procedure for circuit realizations of the map. However, this algorithm is geared towards implementing the _state-specific_ Petz map [12] rather than the _code-specific_ Petz map [5]. The latter is arguably of greater relevance in the context of quantum error correction, since it has been shown to correct with near-optimal fidelity for the action of a given noise channel \(\mathcal{E}\) on a quantum code \(\mathcal{C}\)[5] In the present work, we demonstrate three different circuit constructions for implementing the code-specific Petz map and estimate the resource requirements in each case. Two of these approaches are based on known algebraic techniques for implementing quantum dynamical maps [17; 18]. The third approach combines newly developed algorithmic techniques such as block encoding [19] with the well known isometric extension. As a concrete use case, we obtain Petz recovery circuits for the specific case of a 4-qubit code [4] tailored to protect against single-qubit amplitude-damping noise. Finally, we simulate these circuits on noisy superconducting processors available on the IBMQ platform and benchmark their performance in terms of the fidelity of the recovered state. The rest of the paper is organized as follows. In Sec. II we introduce the code-specific Petz map and briefly discuss its role as a near-optimal, noise-adapted recovery map. In Sec. III, we describe our first approach to implementing the Petz map, using its isometric extension. In Sec. IV we describe a POVM-based approach to implement the Petz map. Finally, in Sec. V, we describe our circuit construction that combines the block encoding technique with isometric extension. We discuss the example of the 4-qubit code subject to amplitude-damping noise in Sec. VI and obtain fidelities corresponding to the different implementations of the Petz map in this case. We conclude with a summary and future outlook in Sec. VII. ## II Preliminaries We begin with a brief review of the Petz recovery map. Recall that a quantum noise channel is modelled as a completely positive trace-preserving (CPTP) map \(\mathcal{E}\) acting on the set of states \(\mathcal{S}(\mathcal{H})\) of a Hilbert space \(\mathcal{H}\). Such a map is characterized by a set of Kraus operators \(\{E_{i}\}\), satisfying \(\sum_{i=1}^{N}E_{i}^{\dagger}E_{i}=I\). ### The Petz map The Petz map was first defined in the context of reversing noisy dynamics on a specific system state [12]. The action of the _state-specific_ Petz map corresponding to a state \(\sigma\in\mathcal{S}(\mathcal{H})\) and noise map \(\mathcal{E}\), denoted as \(\mathcal{R}_{\sigma,\mathcal{E}}\), is given by, \[\mathcal{R}_{\sigma,\mathcal{E}}(.) =\sqrt{\sigma}\,\mathcal{E}^{\dagger}\,\left(\mathcal{E}(\sigma) ^{-1/2}(.)\mathcal{E}(\sigma)^{-1/2}\right)\,\sqrt{\sigma}, \tag{1}\] \[=\sqrt{\sigma}\sum_{i=1}^{N}E_{i}^{\dagger}\left(\mathcal{E}( \sigma)^{-1/2}(.)\mathcal{E}(\sigma)^{-1/2}\right)E_{i}\sqrt{\sigma}.\] Here, \(\mathcal{E}(\sigma)=\sum_{i}E_{i}\rho E_{i}^{\dagger}\) denotes the action of the channel on the state \(\sigma\) and \(\mathcal{E}^{\dagger}\sim\{E_{i}^{\dagger}\}\) is the adjoint map corresponding to the map \(\mathcal{E}\). 
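As a quick numerical illustration of Eq. (1), the sketch below builds the Kraus operators \(R_{i}=\sqrt{\sigma}\,E_{i}^{\dagger}\,\mathcal{E}(\sigma)^{-1/2}\) of the state-specific Petz map for single-qubit amplitude-damping noise with a maximally mixed reference state (both chosen as placeholders), and checks trace preservation and the exact reversal of \(\mathcal{E}\) on \(\sigma\).

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def petz_kraus(kraus, sigma):
    # Kraus operators R_i = sigma^{1/2} E_i^dagger E(sigma)^{-1/2}, cf. Eq. (1)
    Esigma = sum(E @ sigma @ E.conj().T for E in kraus)
    Esigma_mhalf = fractional_matrix_power(Esigma, -0.5)
    sqrt_sigma = sqrtm(sigma)
    return [sqrt_sigma @ E.conj().T @ Esigma_mhalf for E in kraus]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

g = 0.3                                   # placeholder damping strength
E = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
     np.array([[0, np.sqrt(g)], [0, 0]])]
sigma = np.eye(2) / 2                     # reference state (full rank)
R = petz_kraus(E, sigma)

print(np.allclose(sum(K.conj().T @ K for K in R), np.eye(2)))      # trace preserving
print(np.allclose(apply_channel(R, apply_channel(E, sigma)), sigma))  # reverses E on sigma
```

For a full-rank reference state the map constructed this way is exactly trace preserving, consistent with the normalisation discussed next.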
The map \(\mathcal{R}_{\sigma,\mathcal{E}}\) is clearly completely positive (CP) and trace non-increasing; it can be normalised to make it trace preserving (TP) as well. From the definition in Eq. (1), it is clear that the state-specific Petz map _perfectly_ reverses the action of the noise \(\mathcal{E}\) on the state \(\rho\). It was shown in [12] that for an ensemble of states \(\{p_{i},\rho_{i}\}\), affected by noise \(\mathcal{E}\), the Petz map \(\mathcal{R}_{\rho,\mathcal{E}}\) corresponding to the state \(\rho=\sum_{i}p_{i}\rho_{i}\) is a near-optimal recovery map. Here, optimality was defined in terms of the average entanglement fidelity between the ideal and noisy states. In our work, we focus on the _code-specific_ Petz map defined in [5]. Recall that an \([n,k]\) quantum code \(\mathcal{C}\) encoding \(k\) qubits in \(n\), is a \(2^{k}\)-dimensional subspace of the \(n\)-qubit space which has \(2^{n}\) dimensions. Corresponding to a quantum code \(\mathcal{C}\) and noise map \(\mathcal{E}\), the Petz recovery channel is defined as, \[\mathcal{R}_{P,\mathcal{E}}(.) =P\,\mathcal{E}^{\dagger}\,(\mathcal{E}(P)^{-1/2}(.)\mathcal{E}(P )^{-1/2})\,P \tag{2}\] \[=P\sum_{i=1}^{N}E_{i}^{\dagger}\left(\mathcal{E}(P)^{-1/2}(.) \mathcal{E}(P)^{-1/2}\right)E_{i}P,\] where \(P\) is the projector onto the codespace \(\mathcal{C}\) and \(\mathcal{E}(P)=\sum_{i}E_{i}PE_{i}^{\dagger}\) denotes the action of the noise map on the codespace. The map \(\mathcal{R}_{P,\mathcal{E}}\) is also clearly completely positive (CP) and can be normalised to make it trace preserving (TP) as well. Furthermore, the composite map \(\mathcal{R}_{P,\mathcal{E}}\circ\mathcal{E}\), which denotes the action of noise followed by Petz recovery, is unital and preserves the identity operator on the codespace. It was shown in [5] that \(\mathcal{R}_{P,\mathcal{E}}\) is a near-optimal recovery map for the codespace \(\mathcal{C}\) under the action of noise \(\mathcal{E}\), where optimality is defined in terms of the _worst-case fidelity_ on the codespace. Recall that the fidelity measure \(F\), based on the Bures metric, between a pure state \(|\psi\rangle\) and a mixed state \(\rho\) is defined as [8], \[F^{2}(|\psi\rangle,\rho)=\langle\psi|\rho|\psi\rangle. \tag{3}\] The worst-case fidelity corresponding to codesapce \(\mathcal{C}\) and the composite map \(\mathcal{R}_{P,\mathcal{E}}\circ\mathcal{E}\) is then obtained by minimizing the fidelity over all states in the codespace, namely, \[F^{2}_{\min}(\,\mathcal{C},\mathcal{R}_{P,\mathcal{E}}\circ\mathcal{E}\,)= \min_{|\psi\rangle\in\mathcal{C}}\langle\psi|\mathcal{R}_{P,\mathcal{E}}\circ \mathcal{E}(|\psi\rangle\langle\psi|)|\psi\rangle. \tag{4}\] The worst-case fidelity measure thus guarantees that the _all_ states in the codespace are preserved to a high degree. The near-optimality result in [5] states that the Petz map defined in Eq. (2) achieves a worst-case fidelity that is close to that of the optimal recovery map corresponding to code \(\mathcal{C}\) and noise \(\mathcal{E}\). ### Quantum algorithm for the Petz map It was recently shown in [16] that a quantum circuit that _approximately_ implements the state-specific Petz map in Eq. (1) can be realised using the techniques of block encoding [20] and the quantum singular value transform (QSVT) [19], by decomposing the Petz map as sequence of completely positive (CP) maps. Note that the maps in Eq. (1) and Eq. (2) can be thought of as being composed of three (CP) maps. 
For example, the state-specific map \(\mathcal{R}_{\sigma,\mathcal{E}}\) can be decomposed in terms of (a) the adjoint map \(\mathcal{E}^{\dagger}(.)\) with Kraus operators \(\{E_{i}^{\dagger}\}\), (b) the normalization or amplification map \(\mathcal{E}(\sigma)^{-1/2}(.)\mathcal{E}(\sigma)^{-1/2}\) and (c) the projection map \(\mathcal{P}_{\sigma}(.)=\sqrt{\sigma}\,(.)\sqrt{\sigma}\). These maps are trace increasing in general, but the overall map obtained by composing these three maps is indeed trace-preserving. The quantum algorithm to implement the state-specific Petz recovery map \(\mathcal{R}_{\sigma,\mathcal{E}}\) described in [16], proceeds via the following steps. First, it assumes that there exist quantum circuits that realise approximate _block encodings_ of the state \(\sigma\) and the operator \(\mathcal{E}(\sigma)\). Furthermore, it also assumes the existence of a quantum circuit that implements an _isometric extension_ of the adjoint map \(\mathcal{E}^{\dagger}(.)\). Finally the algorithm uses the QSVT and the technique of amplitude amplification [21] to realise the \(\mathcal{E}(\sigma)^{-1/2}\) and the \(\sqrt{\sigma}\) operations. In what follows, we will briefly review the concepts of isometric extension and block encoding here since they are used in one of our circuit constructions as well. #### ii.1.1 Isometric Extension of a CPTP map Consider a CPTP map \(\mathcal{E}:\mathcal{S}(\mathcal{H}_{A})\rightarrow\mathcal{S}(\mathcal{H}_{B})\) mapping states on Hilbert space \(\mathcal{H}_{A}\) to states on \(\mathcal{H}_{B}\). Let \(\mathcal{H}_{E}\) denote the Hilbert space of an ancillary system \(E\) and \(\rho_{B}\in\mathcal{S}(\mathcal{H}_{E})\) denote some fixed state of the ancilla. The isometric extension of \(\mathcal{E}\) is defined to be the unitary \(U^{\mathcal{E}}_{AE\to B}\) mapping states on \(\mathcal{H}_{A}\otimes\mathcal{H}_{E}\) to states on \(\mathcal{H}_{B}\), such that, \[\mathrm{Tr}_{E}[U^{\mathcal{E}}_{AE\to B}(\rho_{A}\otimes\rho_{E})(U^{ \mathcal{E}}_{AE\to B})^{\dagger}]=\mathcal{E}(\rho_{A}). \tag{5}\] The isometric extension of a map \(\mathcal{E}\) can be implemented via a quantum circuit as shown in Fig. 1. However, the adjoint map \(\mathcal{E}^{\dagger}\) is not a physical map in general, and hence does not admit an isometric extension. However, it was shown in [16] that the adjoint map can be implemented via a quantum circuit, starting with the isometric extension for the forward channel \(\mathcal{E}\) and using the procedure of block encoding described below. #### ii.1.2 Block Encoding The idea of block encoding is to represent any arbitrary matrix \(A\) as the top-left block of a unitary matrix. If the unitary \(U^{A}\) is an _exact_ block encoding of \(A\), then, it has the following structure. \[U^{A}=\begin{bmatrix}A&.\\.&.\end{bmatrix}. \tag{6}\] We formally define the block encoding for an \(s\)-qubit operator here and note that this can be generalized to arbitrary dimensions as well [19]. An \((s+a)\)-qubit unitary \(U\) is said to be an \((\alpha,a,\epsilon)\)-block encoding of an \(s\)-qubit operator \(A\), if, \[\parallel A-\alpha(\langle 0|^{\otimes a}\otimes I_{2^{*}}\rangle U(|0\rangle^{ \otimes a}\otimes I_{2^{*}})\parallel\,\leq\,\epsilon,\] where \(\parallel(.)\parallel\) denotes the operator norm, \(\alpha,\epsilon\in\mathbb{R}_{+}\) and \(a\in\mathbb{N}\). \(U\) is said to be an exact block encoding of the matrix \((A/\alpha)\) if \(\epsilon=0\). 
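A block encoding can also be checked numerically. The sketch below uses a standard unitary-dilation construction for an arbitrary contraction — a generic textbook construction, not the specific circuits of [19] or of this work — and verifies that the rescaled operator sits in the top-left block.

```python
import numpy as np
from scipy.linalg import sqrtm

def dilation_block_encoding(A):
    """Unitary dilation of a contraction A (spectral norm <= 1):
    A appears as the top-left block of the returned unitary."""
    d = A.shape[0]
    I = np.eye(d)
    B = sqrtm(I - A @ A.conj().T)
    C = sqrtm(I - A.conj().T @ A)
    return np.block([[A, B], [C, -A.conj().T]])

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
alpha = 2 * np.linalg.norm(M, 2)
A = M / alpha                                  # rescale so that ||A||_2 <= 1
U = dilation_block_encoding(A)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # U is unitary
print(np.allclose(U[:2, :2], A))               # top-left block encodes M / alpha
```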
The unitary \(U\) is thus an exact block encoding of \(A\), if \(\alpha=1\) and \(\epsilon=0\). In order to get the action of the \(s\)-qubit operator \(A\), we measure the \(a\)-qubit ancilla and keep the \(s\)-qubit output state only if the \(a\)-qubit ancilla returns the all-zero state after the measurement. This procedure is depicted in Fig. 2.

Figure 1: Isometric extension circuit of a map \(\mathcal{E}\)

Figure 2: The \((a+s)\)-qubit block encoding of \(A\). \(U^{A}\) returns \(A\left|\psi\right\rangle\) upon measuring the \(a\)-qubit ancilla, whenever the outcome is the all-zero state.

Of particular interest here is the existence of unitaries that provide block encodings of density operators. In this context, we note the following result from [19] which shows that an exact block encoding for any density operator \(\rho\) can be constructed via a _purifying_ isometry on an extended space. **Lemma 1**.: _Suppose that \(\rho\) is an \(s\)-qubit density operator and \(G\) is an \((a+s)\)-qubit unitary that, when acting on the \(\left|0\right\rangle^{\otimes a}\left|0\right\rangle^{\otimes s}\) input state, gives a purification \(\left|0\right\rangle^{\otimes a}\left|0\right\rangle^{\otimes s}\rightarrow\left|\rho\right\rangle\) on the \((a+s)\)-qubit system, such that \(\mathrm{Tr}_{a}\left|\rho\right\rangle\left\langle\rho\right|=\rho\). Then the operator \(U\) given by_ \[U=(G^{\dagger}\otimes I_{2^{s}})(I_{2^{a}}\otimes SWAP_{s})(G\otimes I_{2^{s}}) \tag{7}\] _is an exact block encoding of the density matrix \(\rho\)._ ## III Petz Recovery Circuit via Isometric Extension Arguably, the most direct approach to implementing the code-specific Petz recovery map \(\mathcal{R}_{P,\mathcal{E}}\) is via its isometric extension. However, this is rather resource intensive in general, requiring, for example, of the order of \(dN\) two-level unitaries [22] for a quantum channel with \(N\) Kraus operators acting on a \(d\)-dimensional space. In what follows, we first present a resource-efficient implementation of the isometric extension of any CPTP map. We then construct a circuit that can directly estimate the fidelity between an arbitrary initial state and the final state after the action of a quantum channel. Both these results are then applied to the specific case of the Petz map \(\mathcal{R}_{P,\mathcal{E}}\). ### Isometric extension of the Petz map using two-level unitaries Consider a quantum channel \(\mathcal{R}\) with \(N\) Kraus operators acting on a \(d\)-dimensional Hilbert space \(\mathcal{H}_{S}\). We consider a specific isometric extension of \(\mathcal{R}\), denoted as \(V_{\mathcal{R}}\), where the ancillary system starts in a fixed pure state \(\left|0\right\rangle\), so that, \[\mathcal{R}(\rho)=\mathrm{Tr}_{E}[V_{\mathcal{R}}(\rho_{S}\otimes\left|0_{E}\right\rangle\left\langle 0_{E}\right|)V_{\mathcal{R}}^{\dagger}] \tag{8}\] If \(\{R_{i},\;i\in[0,\ldots,N-1]\}\) denote the Kraus operators of the quantum channel \(\mathcal{R}\), then the isometry \(V_{\mathcal{R}}\) has the following block matrix structure on the extended space \(\mathcal{H}_{S}\otimes\mathcal{H}_{E}\). \[V_{\mathcal{R}}=\begin{bmatrix}R_{0}&\ldots&\ldots\\ \vdots&\ddots&\vdots\\ R_{N-1}&\ldots&\ldots\end{bmatrix}. \tag{9}\] Note that the unitary \(V_{\mathcal{R}}\) acts on a space of dimension \(dN\). The first \(N\) blocks of the first block column of \(V_{\mathcal{R}}\) contain the Kraus operators, and the rest of the unitary does not affect the system of interest.
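Before turning to the circuit decomposition, it is useful to see the structure of Eqs. (8) and (9) numerically. The following minimal numpy sketch is our own illustration (not part of the circuit constructions above; the amplitude-damping channel and the test state are arbitrary choices): it stacks the Kraus operators into the first block column of \(V_{\mathcal{R}}\), completes the remaining columns to a unitary, and checks that tracing out the environment reproduces the channel action.

```python
import numpy as np
from scipy.linalg import null_space

def isometric_extension(kraus):
    """Stack Kraus operators into an isometry (first block column of Eq. (9)) and complete it to a unitary."""
    d = kraus[0].shape[0]
    N = len(kraus)
    V_iso = np.vstack(kraus)                                   # (dN x d) isometry
    assert np.allclose(V_iso.conj().T @ V_iso, np.eye(d))      # isometry <=> trace preservation
    comp = null_space(V_iso.conj().T)                          # orthonormal completion of the columns
    U = np.hstack([V_iso, comp])
    assert np.allclose(U.conj().T @ U, np.eye(d * N))
    return U

def apply_via_dilation(U, rho, d, N):
    """Tr_E[ U (|0><0|_E  x  rho) U^dag ]; the environment index is the block (major) index."""
    in_state = np.kron(np.outer(np.eye(N)[0], np.eye(N)[0]), rho)
    out = (U @ in_state @ U.conj().T).reshape(N, d, N, d)
    return np.einsum('iaib->ab', out)                          # partial trace over the environment

# illustrative single-qubit amplitude-damping channel (gamma chosen arbitrarily here)
g = 0.3
A0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
A1 = np.array([[0, np.sqrt(g)], [0, 0]])
U = isometric_extension([A0, A1])
rho = np.array([[0.25, 0.1], [0.1, 0.75]])
direct = A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T
assert np.allclose(apply_via_dilation(U, rho, 2, 2), direct)
```

In the circuit constructions that follow, such a completed unitary is not applied as a single matrix but is further decomposed into two-level unitaries.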
To construct a quantum circuit to implement \(V_{\mathcal{R}}\) in Eq. (9), we will follow the well-known procedure of decomposing the unitary into a product of two-level unitaries [8]. According to this prescription, the unitary operator \(V_{\mathcal{R}}\) can be realised as a product of \(k\) two-level unitaries \(\{V_{1},\ldots,V_{k}\}\) as follows. \[V_{\mathcal{R}} =V_{1}V_{2}\ldots V_{k}\] \[(V_{1}V_{2}\ldots V_{k})^{\dagger}V_{\mathcal{R}} =I_{dN}\] \[\text{where}\quad k =\frac{dN(dN-1)}{2}. \tag{10}\] Here, \(I_{dN}\) denotes the identity operator on a \(dN\)-dimensional space. Consider now the Petz recovery map \(\mathcal{R}_{P,\mathcal{E}}\) corresponding to a noise channel \(\mathcal{E}\) with \(N\) Kraus operators and an \(n\)-qu\(d\)it code \(\mathcal{C}\) with projector \(P\). Note that \(\mathcal{E}\) here refers to the single-qudit noise channel with \(N\) Kraus operators, which acts in an i.i.d. fashion on the \(n\) qudits of the codespace. The Petz recovery map in this case contains \(N^{n}\) Kraus operators, each acting on a \(d^{n}\)-dimensional space. As per the upper bound in Eq. (10), the isometric extension circuit for this recovery map could require as many as \(d^{n}N^{n}(d^{n}N^{n}-1)/2\) two-level unitaries. As we argue below, for our purposes, we do not need to decompose the complete unitary \(V_{\mathcal{R}}\). Rather, it suffices to decompose \(V_{\mathcal{R}}\) partially in such a way that the isometric extension circuit can be realised with only \(d^{2n}N^{n}\) two-level unitaries. **Lemma 2**.: _The Petz map \(\mathcal{R}_{P,\mathcal{E}}\) corresponding to a noise channel \(\mathcal{E}\) with \(N\) Kraus operators and an \(n\)-qudit code \(\mathcal{C}\) with projector \(P\), can be implemented exactly using a quantum circuit comprising \(d^{2n}N^{n}\) two-level unitaries and \(n\) ancillary qudits._ Proof.: According to the two-level unitary decomposition procedure in [8], we note that for any unitary matrix \(U_{D\times D}\) on a \(D\)-dimensional system we can find \(m\) two-level unitaries such that, \[\prod\limits_{i=1}^{m}V_{i}^{\dagger}\,U_{D\times D}=\begin{pmatrix}I_{m}&0\\ 0&M_{D-m\times D-m}\end{pmatrix} \tag{11}\] For a given \(n\)-qu\(d\)it code and a noise channel \(\mathcal{E}\) with \(N\) Kraus operators, we know all the \(N^{n}\) Kraus operators of the Petz recovery map \(\mathcal{R}_{P,\mathcal{E}}\), as defined in Eq. (2). Let \(\{R_{i}\}\) denote the Kraus operators of \(\mathcal{R}_{P,\mathcal{E}}\). Since these are operators of dimension \(d^{n}\times d^{n}\), we essentially know the first \(d^{n}\) columns of the isometric extension \(V_{\mathcal{R}_{P,\mathcal{E}}}\). Therefore, according to Eq. (11), there exist \(N^{n}d^{2n}\) two-level unitaries \(\{V_{1},V_{2},\ldots,V_{N^{n}d^{2n}}\}\) such that, \[(V_{1}V_{2}\ldots V_{N^{n}d^{2n}})^{\dagger}V_{\mathcal{R}_{P,\mathcal{E}}}= \begin{pmatrix}\mathbb{I}_{d^{n}\times d^{n}}&0\\ 0&*\end{pmatrix} \tag{12}\] Let the product \((V_{1}V_{2}\ldots V_{N^{n}d^{2n}})\) be denoted \(\tilde{V}_{\mathcal{R}_{P,\mathcal{E}}}\).
Then, Eq. (12) implies, \[\left(\tilde{V}_{\mathcal{R}_{P,\mathcal{E}}}\right)^{\dagger}= \begin{pmatrix}R_{0}^{\dagger}&R_{1}^{\dagger}&\ldots&R_{N-1}^{\dagger}\\ \vdots&\vdots&\ddots&\vdots\end{pmatrix} \tag{13}\] Since the Petz map is a CPTP map, its Kraus operators \(\{R_{i}\}\) satisfy \(\sum\limits_{i=0}^{N-1}R_{i}^{\dagger}R_{i}=\mathbb{I}_{d^{n}\times d^{n}}\), thus showing that the action of the unitary \(\tilde{V}_{\mathcal{R}_{P,\mathcal{E}}}\) is equivalent to the action of the unitary \(V_{\mathcal{R}_{P,\mathcal{E}}}\) on the first \(d^{n}\times d^{n}\) block, which is the domain of the Petz map in this case. \[(\tilde{V}_{\mathcal{R}_{P,\mathcal{E}}})^{\dagger}V_{\mathcal{R}_{P,\mathcal{E}}} =\begin{pmatrix}R_{0}^{\dagger}&R_{1}^{\dagger}&\ldots&R_{N-1}^{\dagger}\\ \vdots&\vdots&\ddots&\vdots\end{pmatrix}\begin{bmatrix}R_{0}&\ldots&\ldots\\ \vdots&\ddots&\vdots\\ R_{N-1}&\ldots&\ldots\end{bmatrix}\] \[=\begin{pmatrix}\sum_{i=0}^{N-1}R_{i}^{\dagger}R_{i}&0\\ 0&*\end{pmatrix}\] \[=\begin{pmatrix}I_{d^{n}}&0\\ 0&*\end{pmatrix} \tag{14}\] Putting together Eqs. (12) and (14), it follows that the Petz map can be exactly realised using \(d^{2n}N^{n}\) two-level unitaries. Note that the reduction in the number of two-level unitaries shown in Lemma 2 holds more generally for a quantum channel whose isometric extension is defined using a fixed pure state of the ancilla, as in Eq. (8). The final step in implementing the isometric extension \(V_{\mathcal{R}_{P,\mathcal{E}}}\) is to identify the set of two-level unitaries \(\{V_{1},V_{2},\ldots,V_{N^{n}d^{2n}}\}\), which can be done following, for example, the standard procedure described in [8]. Fig. 3 shows a complete circuit that combines the isometric extension of the noise channel \(\mathcal{E}\) and that of the recovery channel \(\mathcal{R}_{P,\mathcal{E}}\), along with the encoding unitary \(U_{\text{en}}\). Note that the Petz recovery circuit shown here for an \(n\)-qu\(d\)it code requires \(n\) ancillary qudits initialized to the \(\left|0\right\rangle\) state. ### Fidelity Calculation Via Block encoding We next demonstrate how the technique of block encoding can be used to construct a quantum circuit that directly estimates the fidelity between a given (initial) state and the final state after the action of noise and recovery. This provides a way to circumvent the need for full-state tomography, which is often expensive to implement, and instead estimate the fidelity in a more direct, resource-efficient way. Our approach is distinct from other works in the literature which have proposed quantum algorithms for computing the fidelity between a pair of arbitrary density operators [23; 24; 25]. These works either assume access to multiple copies of the density operators [23; 24] or assume access to purifications of these operators via quantum circuits [24; 25]. Here, we address the question of estimating the fidelity between a given \(n\)-qubit (initial) state \(\ket{\psi}\) and the density operator that results after the action of a CPTP map \(\mathcal{N}\) on the state \(\ket{\psi}\). Our circuit construction makes use of the isometric extension of the map \(\mathcal{N}\) and a unitary matrix \(U\) that prepares the initial state \(\ket{\psi}\) from \(\ket{0}^{\otimes n}\). Recall that the fidelity between an initial state \(\ket{\psi}\) and the corresponding final state \(\mathcal{N}(\ket{\psi}\bra{\psi})\) is calculated as defined in Eq. (3).
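As a classical point of reference for the circuit-based estimate developed below, the fidelity of Eq. (3) and a sampled estimate of the worst-case fidelity of Eq. (4) can be computed directly from the Kraus operators. The sketch below is our own illustration; the channel, code basis and sample size are arbitrary choices, and random sampling only upper-bounds the true minimum in Eq. (4).

```python
import numpy as np

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def fidelity_sq(psi, kraus):
    """F^2(|psi>, N(|psi><psi|)) = <psi| N(|psi><psi|) |psi>, as in Eq. (3)."""
    rho_out = apply_channel(kraus, np.outer(psi, psi.conj()))
    return np.real(np.vdot(psi, rho_out @ psi))

def sampled_worst_case(code_basis, kraus, samples=2000, seed=0):
    """Estimate of Eq. (4): minimise F^2 over randomly sampled codespace states."""
    rng = np.random.default_rng(seed)
    worst = 1.0
    for _ in range(samples):
        c = rng.normal(size=len(code_basis)) + 1j * rng.normal(size=len(code_basis))
        psi = sum(ci * v for ci, v in zip(c, code_basis))
        psi /= np.linalg.norm(psi)
        worst = min(worst, fidelity_sq(psi, kraus))
    return worst

# single-qubit amplitude damping acting on an arbitrary test state
g = 0.1
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]]), np.array([[0, np.sqrt(g)], [0, 0]])]
psi = np.array([1, 1]) / np.sqrt(2)
print(fidelity_sq(psi, kraus))
print(sampled_worst_case([np.array([1.0, 0.0]), np.array([0.0, 1.0])], kraus))
```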
Instead of calculating the fidelity by performing full state tomography on the output state \(\rho=\mathcal{N}(\ket{\psi}\bra{\psi})\) and then using Eq. (3), we propose a direct method for estimating \(F^{2}\). Specifically, we construct a circuit whose output gives us the fidelity \(F^{2}\) as the probability of getting the all-zero state, when measured in the computational basis. **Theorem 1**.: _Consider the action of a CPTP map \(\mathcal{N}\) on an \(n\)-qubit pure state \(\ket{\psi}\). Let \(V_{\mathcal{N}}\) denote the isometric extension of \(\mathcal{N}\) and \(U\) be a unitary such that \(U|0\rangle^{\otimes n}=\ket{\psi}\). Then, the circuit in Fig. 5 can be used to estimate the fidelity \(F^{2}=\bra{\psi}\mathcal{N}(\ket{\psi}\bra{\psi})\ket{\psi}\) as follows. Upon measuring all the registers in Fig. 5 in the computational basis, the probability of getting the all-zero state as the outcome is given by \(F^{4}\)._ Proof.: To calculate the fidelity, we first encode the \(n\)-qubit density matrix \(\rho=\mathcal{N}(\ket{\psi}\bra{\psi})\) as the top-left block of a unitary matrix, following the procedure in Lemma 1. Recall that the exact block encoding of an \(n\)-qubit density matrix \(\mathcal{N}(\ket{\psi}\bra{\psi})\) requires an \((n+s)\)-qubit _purification unitary_ \(G\) for the state \(\mathcal{N}(\ket{\psi}\bra{\psi})\), such that, \[G\ket{0}^{\otimes n}\ket{0}^{\otimes s}=\ket{\mathcal{N}(\ket{\psi}\bra{\psi})}, \tag{15}\] where \(\ket{\mathcal{N}(\ket{\psi}\bra{\psi})}\) denotes a purification of the density operator \(\mathcal{N}(\ket{\psi}\bra{\psi})\). This purification circuit is schematically depicted in Fig. 4. Note that the purification circuit in Fig. 4 is similar to the isometric extension circuit described in Fig. 1, with the only difference being in the specific choice of input states. We can make these two circuits effectively identical simply by adding another unitary \(U\) which acts on \(\ket{0}^{\otimes n}\) to give the state \(\ket{\psi}\), along with identifying that \(s=n\). Therefore the purification \(G\) is nothing but a matrix product of the isometric extension \(V_{\mathcal{N}}\) of the channel \(\mathcal{N}\) and \(I_{2^{n}\times 2^{n}}\otimes U\), as depicted by the boxed circuit in Fig. 5. Now we analyse the rest of the circuit in Fig. 5. As explained above, the output of the boxed circuit is the block encoding, \[U^{\mathcal{N}(\ket{\psi}\bra{\psi})}=\begin{pmatrix}\mathcal{N}(\ket{\psi}\bra{\psi})&*\\ *&*\end{pmatrix} \tag{16}\] Therefore, the unitary \((\mathbb{I}\otimes U^{\dagger})U^{\mathcal{N}(\ket{\psi}\bra{\psi})}(\mathbb{I}\otimes U)\) has the form, \[U_{\text{Fid}}=\begin{pmatrix}U^{\dagger}\mathcal{N}(\ket{\psi}\bra{\psi})U&*\\ *&*\end{pmatrix} \tag{17}\] The action of the unitary \(U_{\text{Fid}}\) on the input state is, \[U_{\text{Fid}}\ket{0}^{\otimes 3n}=\big{(}U^{\dagger}\mathcal{N}(\ket{\psi}\bra{\psi})\ket{\psi}\big{)}\otimes\ket{0}^{\otimes 2n}+\ket{*}\] Thus the first matrix element of \(U_{\text{Fid}}\) in the computational basis evaluates to, \[\bra{0}^{\otimes 3n}U_{\text{Fid}}\ket{0}^{\otimes 3n}=\bra{\psi}\mathcal{N}(\ket{\psi}\bra{\psi})\ket{\psi}. \tag{18}\] Therefore the probability of getting the all-zero state after performing a measurement in the computational basis is \[\text{Prob}(\ket{0}^{\otimes 3n})=\big{|}\bra{\psi}\mathcal{N}(\ket{\psi}\bra{\psi})\ket{\psi}\big{|}^{2}=F^{4}.\]

Figure 5: Circuit for the fidelity calculation. The boxed part is the purification unitary \(G\) for the state \(\mathcal{N}(\ket{\psi}\bra{\psi})\).
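The essential ingredient in the above proof is the purification-based block encoding of Lemma 1. A minimal numpy check of that lemma, for a single-qubit density operator with a single purifying qubit (\(a=s=1\)), is given below; this is our own illustration with an arbitrary test state, not the circuit used in our simulations.

```python
import numpy as np
from scipy.linalg import null_space

def purifying_unitary(rho):
    """4x4 unitary G with G|00> = |rho>, a two-qubit purification of the one-qubit state rho."""
    p, v = np.linalg.eigh(rho)
    purif = sum(np.sqrt(max(pk, 0.0)) * np.kron(np.eye(2)[:, k], v[:, k])
                for k, pk in enumerate(p))
    # complete the first column |rho> to a unitary
    return np.hstack([purif.reshape(-1, 1), null_space(purif.reshape(1, -1).conj())])

rho = np.array([[0.7, 0.2], [0.2, 0.3]])          # arbitrary test state
G = purifying_unitary(rho)
SWAP = np.eye(4)[[0, 2, 1, 3]]                    # swap of the two s-qubit registers
U = (np.kron(G.conj().T, np.eye(2))
     @ np.kron(np.eye(2), SWAP)
     @ np.kron(G, np.eye(2)))                     # Eq. (7) with a = s = 1
assert np.allclose(U[:2, :2], rho)                # top-left block is rho, as claimed in Lemma 1
```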
We now use the circuit construction outlined in Theorem 1 to obtain a circuit that evaluates the fidelity between any encoded state and the state after the action of noise followed by Petz recovery. Consider a noise channel \(\mathcal{E}\) acting on an \(n\)-qubit code, followed by the Petz recovery map \(\mathcal{R}_{P,\mathcal{E}}\). Let \(\{E_{i}\}\) denote the \(N\) Kraus operators of the \(n\)-qubit noise channel. Correspondingly, let \(\{R_{i}\}\) denote the \(N\) Kraus operators of the \(n\)-qubit Petz recovery map. Note that both sets of Kraus operators \(\{E_{i}\}\) and \(\{R_{i}\}\) are of dimension \(2^{n}\times 2^{n}\). Given the Kraus operators, the individual isometric extensions \(V_{\mathcal{E}}\) and \(V_{\mathcal{R}_{P,\mathcal{E}}}\) are known. We can then construct the isometric extension for the composite channel \((\mathcal{R}_{P,\mathcal{E}})\circ\mathcal{E}\) by combining these individual isometric extensions as shown in Fig. 6. Here, the unitary \(V_{\mathcal{E}}\) is applied only when the control block is set to \(\ket{0}^{n}\). Now, let \(\ket{\psi_{en}}\) be a state in the codespace of an \(n\)-qubit code, which is prepared by the action of the encoding unitary \(U_{en}\) on \(\ket{\psi}\otimes\ket{0}^{n-1}\). Since our circuit requires the input to be the all-zero state, we construct another unitary \(\tilde{U}_{\text{en}}\) such that, \[\tilde{U}_{en}\ket{0}^{n}=\ket{\psi_{en}}.\] We demonstrate an example of such a construction of \(\tilde{U}_{en}\) for a specific QEC code in Sec. VI. Then, it follows from Theorem 1 that the circuit in Fig. 7 evaluates the fidelity between the density matrix \((\mathcal{R}\circ\mathcal{E})(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})\) and the initial encoded state \(\ket{\psi_{en}}\), with all the inputs initialized to the zero state. The boxed part of the circuit in Fig. 7 is the purification of the density matrix \((\mathcal{R}\circ\mathcal{E})(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})\). ## IV Petz map circuit via polar decomposition Our second approach to construct a circuit for the Petz map invokes general quantum measurements corresponding to positive operator valued measures (POVMs). Our approach is based on the fact that the Kraus operators of any channel can be _approximately_ realised via a sequence of binary outcome POVMs and unitary operations, as shown in Lemma 3 below. The fact that quantum channels can be realised using unitary operators and POVMs is well known [17; 8]. The original proposal in [17] uses an \(N\)-outcome quantum measurement to implement a CPTP map with \(N\) Kraus operators, but this implementation typically requires \(\lceil\log_{2}N\rceil\) ancillary qubits [26]. Alternate approaches to implementing \(N\)-outcome POVMs that require only a single ancillary qubit are known, but these approaches either involve non-Hermitian quantum measurements [17] or require extending the system Hilbert space via Naimark dilation [27]. Here, we adapt some of these existing approaches to approximate a quantum channel via a sequence of two-outcome POVMs and unitary operations, requiring only one additional ancillary system. A similar approximate approach to simulating open quantum system dynamics has been outlined in [28], but this approach relies on Stinespring dilation rather than POVMs. Finally, we note that a related approach to implementing quantum channels using a sequence of POVMs and a single ancillary qubit has also been proposed in [29].
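Lemma 3 below rests on the polar decomposition \(K_{i}=U_{i}P_{i}\) of each Kraus operator (Eq. (20)) and on the complementary positive operator \(Q_{i}=\sqrt{I-K_{i}^{\dagger}K_{i}}\). As a quick numerical illustration (our own sketch; the amplitude-damping "jump" operator and the value of \(\gamma\) are arbitrary choices), these objects can be obtained directly with scipy:

```python
import numpy as np
from scipy.linalg import polar, sqrtm

g = 0.3
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])     # "jump" Kraus operator of amplitude damping
U1, P1 = polar(K1, side='right')                   # K1 = U1 @ P1, with P1 = sqrt(K1^dag K1) >= 0
Q1 = sqrtm(np.eye(2) - K1.conj().T @ K1)           # complementary positive operator of Lemma 3

assert np.allclose(U1 @ P1, K1)
assert np.allclose(P1 @ P1 + Q1 @ Q1, np.eye(2))   # the pair {P1^2, Q1^2} resolves the identity
```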
Figure 7: Circuit for the fidelity calculation

**Lemma 3**.: _A CPTP map \(\mathcal{K}\) of rank \(N\), described by \(N\) Kraus operators \(\{K_{i}\}_{i=1}^{N}\), can be decomposed using a sequence of quantum operations \(\{\mathcal{R}_{j}\}\), as,_ \[\mathcal{K}(\rho)=(\mathcal{R}_{N}\circ\mathcal{R}_{N-1}\cdots\mathcal{R}_{1})(\rho)+\mathcal{O}(\parallel K_{i}^{\dagger}K_{i}\parallel^{2}), \tag{19}\] _where each operation \(\mathcal{R}_{j}\) is a composition of a \(2\)-outcome POVM and a unitary gate that is controlled by the outcome of the POVM._ Proof.: We start with the polar decomposition of the Kraus operators. Let \(\{U_{i}\}\) be a set of unitary matrices and \(\{P_{i}\}\) be a set of positive semi-definite operators, such that, \[K_{i}=U_{i}P_{i}=U_{i}\sqrt{K_{i}^{\dagger}K_{i}},\;\forall\;i\in[1,N]. \tag{20}\] Since \(\mathcal{K}\) is a trace-preserving map, the set \(\{P_{i}^{2}\}\) defines an \(N\)-outcome POVM. \[\sum_{i=1}^{N}P_{i}^{2}=\sum_{i=1}^{N}K_{i}^{\dagger}K_{i}=I. \tag{21}\] We now define the sequence of operations \(\{\mathcal{R}_{i}\}\) as follows. We first construct \(N\) two-outcome POVM measurements \(\mathcal{M}_{i}\), each defined by the pair of positive operators \(P_{i}=\sqrt{K_{i}^{\dagger}K_{i}}\) and \(Q_{i}=\sqrt{I-K_{i}^{\dagger}K_{i}}\). Suppose the positive operator \(P_{i}\) corresponds to outcome \(0\) and \(Q_{i}\) to outcome \(1\) for each \(\mathcal{M}_{i}\). If the outcome of the measurement \(\mathcal{M}_{i}\) is \(0\), we simply apply the unitary operation \(U_{i}\). If the outcome of the measurement is \(1\), we would ideally need to perform an inverse operation \((Q_{i})^{-1}\) followed by the \((i+1)^{\text{th}}\) POVM \(\mathcal{M}_{i+1}\). Here, we assume that the inverse operations given by \((Q_{i})^{-1}\) only _weakly perturb_ the state and hence can be ignored. Thus the \(i^{\text{th}}\) operation \(\mathcal{R}_{i}\) is characterized by the pair of operators \[R_{i}^{0}=U_{i}P_{i}\equiv K_{i},\ R_{i}^{1}=Q_{i}\equiv\sqrt{I-K_{i}^{\dagger}K_{i}}. \tag{22}\] We now formally show that the sequence of operations \(\mathcal{R}_{i}\) does implement the map \(\mathcal{K}\) up to \(\mathcal{O}(\parallel K_{i}^{\dagger}K_{i}\parallel^{2})\). Using a Taylor series expansion and ignoring all terms of second order and above in \(\parallel K_{i}^{\dagger}K_{i}\parallel\), we have, \[\sqrt{I-K_{i}^{\dagger}K_{i}}\approx I-\frac{1}{2}K_{i}^{\dagger}K_{i}.\] The action of the operator \(Q_{i}\) is therefore, \[Q_{i}\rho Q_{i}^{\dagger}=\rho-\frac{1}{2}K_{i}^{\dagger}K_{i}\rho-\frac{1}{2}\rho K_{i}^{\dagger}K_{i}+\mathcal{O}(\parallel K_{i}^{\dagger}K_{i}\parallel^{2}). \tag{23}\] The action of \(\mathcal{R}_{j}\) on Eq. (23), taken up to first order in \(\parallel K_{i}^{\dagger}K_{i}\parallel\), gives us, \[\mathcal{R}_{j}(Q_{i}\rho Q_{i})= K_{j}\rho K_{j}^{\dagger}+\rho-\frac{1}{2}K_{i}^{\dagger}K_{i}\rho-\frac{1}{2}\rho K_{i}^{\dagger}K_{i}\] \[-\frac{1}{2}K_{j}^{\dagger}K_{j}\rho-\frac{1}{2}\rho K_{j}^{\dagger}K_{j}+\mathcal{O}(\parallel K_{i}^{\dagger}K_{i}\parallel^{2}).\] Thus, composing the entire sequence of \(N\) maps as in Eq. (19), we obtain the sum, \[(\mathcal{R}_{N}\circ\mathcal{R}_{N-1}\cdots\mathcal{R}_{1})(\rho)=\sum_{i=1}^{N}K_{i}\rho K_{i}^{\dagger}+\rho \tag{24}\] \[-\frac{1}{2}\sum_{i=1}^{N}K_{i}^{\dagger}K_{i}\rho-\frac{1}{2}\sum_{i=1}^{N}\rho K_{i}^{\dagger}K_{i}+\mathcal{O}(\parallel K_{i}^{\dagger}K_{i}\parallel^{2}).\] Using the trace-preserving nature of our CPTP map and Eq. (21), we see that Eq.
(24) simplifies to give us the desired result. ### Quantum circuit for an arbitrary channel via polar decomposition We further elucidate the construction outlined in Lemma 3 by describing the initial steps of the quantum circuit to realise an arbitrary quantum channel, as shown in Fig. 8. Each step in our circuit involves implementing a CPTP map (of rank 2) with a pair of Kraus operators given in Eq. (22). For the first step, we implement the operation \(\mathcal{R}_{1}\) whose Kraus operators are given by \(K_{1}\) and \(\sqrt{I-K_{1}^{\dagger}K_{1}}\). These may be implemented using a single ancilla qubit \(A_{1}\), initialised to the \(\ket{0}\) state, by first performing an operation \(U_{\mathcal{M}_{1}}\) given by, \[U_{\mathcal{M}_{1}}=\begin{bmatrix}P_{1}&-Q_{1}\\ Q_{1}&P_{1}\end{bmatrix}. \tag{25}\] Since the operator acts on the combined ancilla-target system given by \(\ket{0}\otimes\ket{\psi}\), where \(\ket{\psi}\) is the target state, it is sufficient to define only the first column of the block matrix. This is then followed by conditionally applying either of two unitary operations, \(U_{1}\) or \(\tilde{U}_{1}\), on the target state; these correspond to the polar decompositions of the Kraus operators \(K_{1}\) and \(\tilde{K_{1}}=\sqrt{I-K_{1}^{\dagger}K_{1}}\). We can see that for a rank \(N=2\) CPTP map, this gives a complete implementation of the map, as follows. Consider the combined state of the ancilla-target system after the operation of \(U_{\mathcal{M}_{1}}\) and the corresponding unitaries. \[\ket{\phi}=\ket{0}\otimes U_{1}\sqrt{K_{1}^{\dagger}K_{1}}\ket{\psi}+\ket{1}\otimes\tilde{U}_{1}\sqrt{\tilde{K_{1}}^{\dagger}\tilde{K_{1}}}\ket{\psi}. \tag{26}\] Using Eq. (20) we see that \(U_{1}\sqrt{K_{1}^{\dagger}K_{1}}=K_{1}\) and \(\tilde{U}_{1}\sqrt{\tilde{K_{1}}^{\dagger}\tilde{K_{1}}}=\tilde{K_{1}}\). Therefore, we may rewrite \(\ket{\phi}\) as \[\ket{\phi}=\ket{0}\otimes K_{1}\ket{\psi}+\ket{1}\otimes\tilde{K_{1}}\ket{\psi}. \tag{27}\]

Figure 8: The first two steps of the quantum circuit used to implement any CPTP map as described in Lemma 3. The unitary gates labeled \(U_{\mathcal{M}_{i}}\) implement the POVMs \(\mathcal{M}_{i}\) at each step. The two conditional unitaries following \(U_{\mathcal{M}_{i}}\) implement the operations \(U_{i}\) and \(\tilde{U_{i}}\) as described in Eq. (20). The second component of the circuit (boxed) is repeated \(N-2\) times in order to implement a rank \(N\) CPTP map.

For channels with \(N>2\), we now wish to apply the subsequent set of operations only on the part of the state corresponding to \(\tilde{K_{1}}\). To achieve this, we introduce a second ancilla qubit (\(A_{2}\)) and apply a cnot gate with \(A_{1}\) as the control. \(A_{1}\) can now be reinitialised to the \(\ket{0}\) state for further operations. The state of the system \((\rho_{A_{2}}\otimes\rho_{A_{1}}\otimes\rho_{T})\) after the cnot and re-initialization is given as, \[\begin{split}\rho_{A_{2}}\otimes\rho_{A_{1}}\otimes\rho_{T}&=\ket{0}\bra{0}\otimes\ket{0}\bra{0}\otimes K_{1}\ket{\psi}\bra{\psi}K_{1}^{\dagger}\\ &+\ket{1}\bra{1}\otimes\ket{0}\bra{0}\otimes\tilde{K_{1}}\ket{\psi}\bra{\psi}\tilde{K_{1}}^{\dagger}\end{split} \tag{28}\] Now, we can use the ancilla system \(A_{1}\) as a control to apply the second set of Kraus operators corresponding to \(\mathcal{R}_{2}\).
We redefine the \(U_{\mathcal{M}_{i}}\) operators for \(i>1\) as follows \[U_{\mathcal{M}_{i}}=\begin{bmatrix}Q_{i}&-P_{i}\\ P_{i}&Q_{i}\end{bmatrix}\quad\forall i>1 \tag{29}\] Using this definition of \(U_{\mathcal{M}_{2}}\), the state of the combined system post the re-initialization step is given by \[\begin{split}\rho_{a_{2}}\otimes&\rho_{a_{1}}\otimes\rho_{s}=\ket{0}\bra{0}\otimes\ket{0}\bra{0}\otimes K_{1}\ket{\psi}\bra{\psi}K_{1}^{\dagger}\\ &+\ket{0}\bra{0}\otimes\ket{0}\bra{0}\otimes K_{2}\tilde{K_{1}}\ket{\psi}\bra{\psi}\tilde{K_{1}}^{\dagger}K_{2}^{\dagger}\\ &+\ket{1}\bra{1}\otimes\ket{0}\bra{0}\otimes\tilde{K_{2}}\tilde{K_{1}}\ket{\psi}\bra{\psi}\tilde{K_{1}}^{\dagger}\tilde{K_{2}}^{\dagger}\end{split} \tag{30}\] Now, the subsequent steps can be carried out in the same way, and the second control ancilla is traced over at the very end. Finally, it is now straightforward to apply this circuit construction to obtain an _approximate_ circuit implementation of the Petz map, as described in Sec. VI. ## V Petz recovery circuit via block encoding In this section, we present an alternate approach to realizing the Petz map recovery circuit for an \(n\)-qubit code, using both the techniques of block encoding and isometric extension. We first visualize the code-specific Petz map as a sequence of three completely positive maps, namely, \[\mathcal{E}(P)^{-1/2}(.)\mathcal{E}(P)^{-1/2} \tag{31}\] \[\mathcal{E}^{\dagger}(.) \tag{32}\] \[P\left(.\right)P \tag{33}\] In contrast to the procedure outlined in [16], which requires the block encoding of both the maps in Eqs. (31) and (33), we only employ the block encoding technique to implement the _normalization_ map in Eq. (31). In a further departure from [16], where the quantum singular value transform (QSVT) is used to implement the block encoding of the pseudo-inverse operation \(\mathcal{E}(P)^{-1/2}\), here we use a more direct approach to construct the block encoding of this operator, as discussed in Appendix A. The adjoint map in Eq. (32) and the action of the projection map in Eq. (33) are both realised via isometric extensions. If \(\mathcal{V}_{\mathcal{E}}\) denotes the isometric extension of the quantum channel \(\mathcal{E}\), the action of the adjoint map \(\mathcal{E}^{\dagger}\) on a state \(\rho\) can be realised via the unitary (\(I_{2}\otimes\mathcal{V}_{\mathcal{E}}^{\dagger}\)) as [30], \[\mathcal{E}^{\dagger}(\rho)=\bra{0}(I_{2}\otimes\mathcal{V}_{\mathcal{E}}^{\dagger})(I_{2}\otimes\rho)(I_{2}\otimes\mathcal{V}_{\mathcal{E}})\ket{0}. \tag{34}\] Here, \(I_{2}\) is the \(2\times 2\) identity matrix. Finally, we realise the projection onto the codespace via the isometric extension \(U_{\mathcal{P}}\) of the channel \(\mathcal{P}\), with Kraus operators \[E_{P}^{(1)}=\begin{pmatrix}P&0\\ 0&0\end{pmatrix},\;E_{P}^{(2)}=\begin{pmatrix}P_{\perp}&0\\ 0&I_{2^{n+1}}\end{pmatrix}, \tag{35}\] where \(P_{\perp}\) is the projector orthogonal to the codespace projector \(P\). Applying the isometric extension of the channel \(\mathcal{P}\) allows us to post-select for the recovered state in the codespace. The above discussion is formally stated and proved in Theorem 2. **Theorem 2**.: _The circuit in Fig. 9 implements the code-specific Petz recovery map with probability \(\frac{1}{N^{n}||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}}\) for an \(n\)-qubit code with projector \(P\), where each qubit is subject to a noise channel \(\mathcal{E}\) with \(N\leq 4\) Kraus operators in an i.i.d. fashion._
_For a Petz channel with \(N^{n}\) Kraus operators, the ancillary inputs are initialized to the state \((\ket{0}\bra{0}\otimes\frac{I_{N^{n}}}{N^{n}})\). The final recovered state is obtained by post-selecting the all-zero state on the first \((n\log_{2}N+2)\) qubits._ Proof.: For an \(n\)-qubit code and an error channel with \(N\) Kraus operators, the Petz recovery map \(\mathcal{R}_{P,\mathcal{E}}\) has \(N^{n}\) Kraus operators. Let \(W\) denote the unitary operation implemented by the first part of the circuit in Fig. 9, up to the dotted line. We first note that the unitary \(W\) has the following algebraic form. \[W=(I_{2}\otimes SWAP)(U^{\tilde{\mathcal{E}}(P)^{-1/2}}\otimes I_{N^{n}})(I_{2}\otimes SWAP)(I_{2}\otimes\mathcal{V}_{\mathcal{E}}^{\dagger}) \tag{36}\] where \(U^{\tilde{\mathcal{E}}(P)^{-1/2}}\) denotes the block encoding of the operator \(\frac{\mathcal{E}(\mathcal{P})^{-1/2}}{||\mathcal{E}(\mathcal{P})^{-1/2}||}\) and \(\mathcal{V}_{\mathcal{E}}\) denotes the isometric extension of the noise channel \(\mathcal{E}\). \(I_{N^{n}}\) denotes the \(N^{n}\times N^{n}\) identity matrix. Next, we note that if \(U^{A}\) gives the block encoding of an operator \(A\), then sandwiching \(U^{A}\) with a pair of swap operations effectively gives a block encoding of the block diagonal operator \(A_{D}\), written as, \[A_{D}=\begin{pmatrix}A&0&\ldots&0\\ 0&A&\ldots&0\\ 0&0&\ddots&0\\ 0&0&\ldots&A\end{pmatrix}.\] We refer to Lemma 5 in Appendix A for a proof. Thus, the sequence of gates \((I_{2}\otimes SWAP)(U^{\tilde{\mathcal{E}}(P)^{-1/2}}\otimes I_{N^{n}})(I_{2}\otimes SWAP)\) in Eq. (36) gives a block encoding of the block diagonal operator \((\tilde{\mathcal{E}}(P)^{-1/2})_{D}\). Note that the number of diagonal blocks in the matrix \((\tilde{\mathcal{E}}(P)^{-1/2})_{D}\) depends on the number of Kraus operators characterizing the Petz map. For an \(n\)-qubit code, there exist \(N^{n}\) Kraus operators, each of dimension \(2^{n}\times 2^{n}\). Thus, \((\tilde{\mathcal{E}}(P)^{-1/2})_{D}\) contains \(N^{n}\) diagonal blocks of the same dimension. The full sequence of operations in Eq. (36) thus corresponds to sequentially applying the unitaries \(U^{\tilde{\mathcal{E}}(P)^{-1/2}}\) and \((I_{2}\otimes\mathcal{V}_{\mathcal{E}}^{\dagger})\). Multiplying these unitaries, we then obtain the elements of the \(i^{\text{th}}\) block of the first row of the unitary \(W\) as, \[\frac{E_{i}^{\dagger}\mathcal{E}(P)^{-1/2}}{||\mathcal{E}(P)^{-1/2}||} \tag{37}\] It follows that the action of the unitary operation \(W\) on the state \((\ket{0}\bra{0}\otimes\frac{I_{N^{n}}}{N^{n}}\otimes\mathcal{E}(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}}))\) gives, \[W\left(\ket{0}\bra{0}\otimes\frac{I_{N^{n}}}{N^{n}}\otimes\mathcal{E}(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})\right)W^{\dagger}\] \[=\ket{0}\bra{0}^{2n+1}\otimes\Bigg{(}\frac{(\mathcal{E}^{\dagger}\circ\mathcal{E}(P)^{-1/2}\circ\mathcal{E})(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})}{N^{n}\ ||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}}\Bigg{)}+\ldots \tag{38}\] The final step (beyond the dotted line in Fig. 9) is simply a projection onto the codespace. To implement this, the isometric extension \(U_{\mathcal{P}}\) of the channel \(\mathcal{P}\) defined in Eq. (35) is applied to the state \[\ket{0}\bra{0}\otimes W\left(\ket{0}\bra{0}\otimes\frac{I_{N^{n}}}{N^{n}}\otimes\mathcal{E}(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})\right)W^{\dagger},\] evaluated in Eq. (38).
The action of \(U_{\mathcal{P}}\) leads to, \[U_{\mathcal{P}}(I_{2}\otimes W)\Bigg{(}\ket{0}\bra{0}^{\otimes 2}\otimes\frac{I_{N^{n}}}{N^{n}}\otimes\mathcal{E}(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})\Bigg{)}\] \[(I_{2}\otimes W^{\dagger})U_{\mathcal{P}}^{\dagger}\] \[=\ket{0}\bra{0}\otimes\ket{0}\bra{0}^{2n+1}\otimes\Bigg{(}\frac{\left(\mathcal{R}_{\mathcal{P},\mathcal{E}}\circ\mathcal{E}\right)(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})}{N^{n}\ ||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}}\Bigg{)}+\] \[\ket{1}\bra{1}\otimes\ket{0}\bra{0}^{2n+1}\otimes P_{\perp}(*)P_{\perp}. \tag{39}\] Therefore, measuring the first \((n\log_{2}N+2)\) qubits (which constitute the ancilla qubits for the circuit shown in Fig. 9) in the computational basis and keeping only the all-zero outcome gives the recovered state as \(\frac{(\mathcal{R}_{\mathcal{P},\mathcal{E}}\circ\mathcal{E})(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})}{N^{n}\ ||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}}\). It is also easy to see from Eq. (39) that this post-selection on the \(\ket{0}^{\otimes(n\log_{2}N+2)}\) state succeeds with the probability, \[\text{Prob}(0) =\frac{\text{Tr}(\mathcal{R}_{\mathcal{P},\mathcal{E}}\circ\mathcal{E})(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})}{N^{n}\ ||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}}\] \[=\frac{1}{N^{n}||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}} \tag{40}\] On the other hand, the success probability of the _QSVT_ based approach is \(\frac{1}{16N^{n}||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}}\), which is \(16\) times smaller than the success probability of our approach. As a simple corollary to the above result, we show how we can also estimate the fidelity between the recovered state \(\mathcal{R}_{\mathcal{P},\mathcal{E}}\circ\mathcal{E}(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})\) and the encoded state \(\ket{\psi_{\text{en}}}\) using our approach. **Corollary 2.1**.: _The circuit in Fig. 10 gives the fidelity between the encoded state \(\ket{\psi_{en}}\) and the recovered state \((\mathcal{R}_{\mathcal{P},\mathcal{E}}\circ\mathcal{E})(\ket{\psi_{en}}\bra{\psi_{\text{en}}})\) as the probability of getting the all-zero state. The final step (shown beyond the dashed line) involves the unitary \(\tilde{U}_{\text{en}}\), which transforms \(\ket{0}^{\otimes n}\) to the encoded state \(\ket{\psi_{\text{en}}}\), followed by a measurement in the computational basis._ Proof.: Note that the first part (until the dashed line) of the circuit in Fig. 10 is the same as that of the circuit in Fig. 9. To analyze the last part of the circuit (beyond the dashed line), recall that \(\tilde{U}_{\text{en}}\) is such that \(\tilde{U}_{\text{en}}\ket{0}^{\otimes n}=\ket{\psi_{\text{en}}}\). Thus, the action of the unitary operator \((I_{2}\otimes I_{N^{n}}\otimes\tilde{U}_{\text{en}})U_{\mathcal{P}}(I_{2}\otimes W)\) on the state \(\left(\ket{0}\bra{0}\otimes\frac{I_{N^{n}}}{N^{n}}\otimes\mathcal{E}(\ket{\psi_{\text{en}}}\bra{\psi_{\text{en}}})\right)\) results in the following state \[\left(\frac{\bra{\psi_{en}}(\mathcal{R}_{\mathcal{P},\mathcal{E}}\circ\mathcal{E})(\ket{\psi_{en}}\bra{\psi_{en}})\ket{\psi_{en}}}{N^{n}\ ||(\mathcal{E}(\mathcal{P})^{-1/2})||^{2}}\right)\ket{0}\bra{0}^{2n+1}+\ket{*}\bra{*}. \tag{41}\] Measuring the state in Eq. (41) in the computational basis, we see that the probability of obtaining the all-zero state gives the desired fidelity.
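To make the normalization map of Eq. (31) and the resulting Petz Kraus operators concrete, the following numpy sketch is our own illustration, using a toy two-qubit codespace spanned by \(\ket{00}\) and \(\ket{11}\) under i.i.d. amplitude damping rather than the 4-qubit code of Sec. VI. It builds \(\mathcal{E}(P)\), takes its inverse square root on its support, assembles the Kraus operators \(PE_{i}^{\dagger}\mathcal{E}(P)^{-1/2}\) of Eq. (2), and checks that the composed map \(\mathcal{R}_{P,\mathcal{E}}\circ\mathcal{E}\) fixes the codespace projector, as noted in Sec. II.

```python
import numpy as np

def inv_sqrt_on_support(M, tol=1e-12):
    """Pseudo-inverse square root of a PSD matrix, i.e. M^{-1/2} restricted to supp(M)."""
    w, v = np.linalg.eigh(M)
    w_inv = np.array([1 / np.sqrt(x) if x > tol else 0.0 for x in w])
    return v @ np.diag(w_inv) @ v.conj().T

g = 0.1
A0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
A1 = np.array([[0, np.sqrt(g)], [0, 0]])
E = [np.kron(a, b) for a in (A0, A1) for b in (A0, A1)]       # i.i.d. two-qubit noise channel

ket00, ket11 = np.eye(4)[:, 0], np.eye(4)[:, 3]
P = np.outer(ket00, ket00) + np.outer(ket11, ket11)           # toy codespace projector
EP = sum(Ei @ P @ Ei.conj().T for Ei in E)                    # E(P)
S = inv_sqrt_on_support(EP)                                   # E(P)^{-1/2} on its support
R = [P @ Ei.conj().T @ S for Ei in E]                         # Petz Kraus operators, Eq. (2)

# R o E maps E(P) back to P, and sum_i R_i^dag R_i is the projector onto supp(E(P))
assert np.allclose(sum(Ri @ EP @ Ri.conj().T for Ri in R), P)
Pi = sum(Ri.conj().T @ Ri for Ri in R)
assert np.allclose(Pi @ Pi, Pi)
```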
Figure 10: Petz recovery circuit followed by the circuit for estimating the fidelity between the encoded state and the recovered state. The third register in the circuit is an \((n\log_{2}N)\)-qubit state for a noise channel \(\mathcal{E}\) with \(N\) Kraus operators.

Finally, in terms of resource requirements, we note that while the QSVT procedure in [16] requires \(2(n\log_{2}N+2)\) ancilla qubits, our approach requires only \((n\log_{2}N+2)\) ancilla qubits. On the other hand, the direct implementation of the Petz map by its isometric extension costs only \(n\) ancilla qubits. ## VI Petz recovery circuits for amplitude-damping noise In this section, we provide concrete examples of our constructions for the case of the 4-qubit code [4] subject to amplitude-damping noise. The amplitude-damping channel models energy dissipation in quantum systems [8] and is known to be one of the dominant noise processes in several physical realizations of qubits [31], making it a natural candidate for our study. A single-qubit amplitude-damping channel is represented by a pair of Kraus operators, given by, \[A_{0}=\begin{pmatrix}1&0\\ 0&\sqrt{1-\gamma}\end{pmatrix}\qquad A_{1}=\begin{pmatrix}0&\sqrt{\gamma}\\ 0&0\end{pmatrix}, \tag{42}\] where \(\gamma\) is the probability for the system to decay from the (excited) state \(\ket{1}\) to the (ground) state \(\ket{0}\). In our simulations, we implement amplitude-damping noise in two different ways. First, we use the circuit model for the amplitude-damping channel discussed in [8]. Specifically, the isometric extension of a single-qubit amplitude-damping channel can be implemented via a controlled rotation operator followed by a cnot, with an ancilla that is initialised to the fixed state \(\ket{0}\) [8]. Alternatively, we can also realise amplitude-damping noise using a sequence of identity gates, where the duration and number of gates are decided based on the \(T_{1}\) and \(T_{2}\) times of the specific qubits, as described in Appendix B. The 4-qubit code proposed by Leung et al. [4] is one of the earliest examples of a channel-adapted quantum error correcting code. It encodes a single qubit into the two-dimensional subspace spanned by, \[\ket{0}_{L} =\frac{1}{\sqrt{2}}\left(\ket{0000}+\ket{1111}\right)\] \[\ket{1}_{L} =\frac{1}{\sqrt{2}}\left(\ket{0011}+\ket{1100}\right), \tag{43}\] and was shown to correct for amplitude-damping noise with fidelity \(1-O(\gamma^{2})\). Subsequently, it was shown that the code-specific Petz recovery map tailored to this 4-qubit code and amplitude-damping noise improves upon this fidelity bound [5]. In our implementation, we prepare the encoded states corresponding to the codewords in Eq. (43) using the circuit in Fig. 11. Note that in place of the original encoding circuit in [4], we have used a modified form proposed in [32], so as to obtain a simpler circuit for the encoding unitary \(U_{\text{en}}\). To calculate the fidelity using the circuits in Figs. 7 and 10, we also need a circuit implementation of the unitary \(\tilde{U}_{en}\). The unitary \(\tilde{U}_{en}\) can be constructed once we know the encoding unitary \(U_{en}\), as illustrated in Fig. 12, for the case of the 4-qubit code. ### Results We now present the results based on our implementations of the Petz recovery circuits for the amplitude-damping channel and the four-qubit code, on a noisy quantum simulator. Our simulations were carried out on the qiskit platform of IBMQ, simulating noisy superconducting qubits.
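For reference, the Kraus operators of Eq. (42) and the codewords of Eq. (43) are simple to write down numerically; the sketch below (our own, independent of the qiskit circuits used in our simulations) constructs the code projector and verifies the \(\gamma=0.5\) property exploited below for preparing maximally mixed ancillas.

```python
import numpy as np

def ad_kraus(gamma):
    """Single-qubit amplitude-damping Kraus operators of Eq. (42)."""
    A0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    A1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return A0, A1

def ket(bits):
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, np.eye(2)[:, int(b)])
    return v

# codewords of the 4-qubit code, Eq. (43)
zero_L = (ket('0000') + ket('1111')) / np.sqrt(2)
one_L  = (ket('0011') + ket('1100')) / np.sqrt(2)
P = np.outer(zero_L, zero_L) + np.outer(one_L, one_L)
assert np.allclose(P @ P, P)                       # projector onto the codespace

# gamma = 0.5 damping maps |1><1| to the maximally mixed single-qubit state
A0, A1 = ad_kraus(0.5)
rho1 = np.diag([0.0, 1.0])
out = A0 @ rho1 @ A0.conj().T + A1 @ rho1 @ A1.conj().T
assert np.allclose(out, np.eye(2) / 2)
```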
Figure 11: Encoding circuit for the \([[4,1]]\) code.

We first plot the fidelities obtained for different input states for a fixed noise strength, corresponding to a value of the damping parameter \(\gamma=0.2\), in Fig. 13. Here, we have used the circuit model to introduce amplitude-damping noise for the POVM-based and isometric extension-based Petz circuits. For the Petz circuit using the block encoding approach, we followed the method outlined in Appendix B to introduce the noise. Furthermore, rather than use the block encoding based circuit in Fig. 9, which yields the recovered state with a certain probability, we instead use the circuit in Fig. 10 that directly estimates the fidelity of the recovered state. Here, we note a further challenge in implementing the block encoding based Petz recovery circuits described in Sec. V, namely, preparing the ancillas in the maximally mixed state \(\frac{\mathbb{I}_{16\times 16}}{2^{4}}\). Our approach to preparing the maximally mixed state makes use of the noise channel itself. A single-qubit amplitude-damping channel with noise parameter \(\gamma=0.5\) acting on the state \(\ket{1}\) results in the maximally mixed state \(\frac{I_{2\times 2}}{2}\). So for the 4-qubit maximally mixed state, we simply apply the \(4\)-qubit amplitude-damping channel with \(\gamma=0.5\) to the state \(\ket{1}^{\otimes 4}\). On the IBMQ quantum simulator, we again follow the procedure outlined in Appendix B to realize the maximally mixed state \(\frac{\mathbb{I}_{16\times 16}}{2^{4}}\). Next, we plot the fidelities for a fixed state, namely, the \(\ket{1}\) state, for various values of the damping parameter \(\gamma\) in Fig. 14. Here we have introduced the noise using the sequence of identity gates as explained in Appendix B, for all three Petz circuits. In this context, it is important to note that the fidelity for the POVM method is slightly lower than that for the other two constructions that use the isometric extension and sequential block encoding, since the former is an _approximate_ realisation of the Petz recovery, rather than an exact implementation.

Figure 14: Fidelity (\(F^{2}\)) of \(\ket{1}\) plotted against the damping parameter \(\gamma\) for the 4-qubit code under amplitude-damping noise, simulated using a sequence of identity gates as explained in Appendix B.

Finally, to emphasize the efficacy of using the Petz recovery channel even in the presence of noisy gates, we simulate our recovery circuits using noisy quantum gates. Unlike the simulation results depicted in Figs. 13 and 14, where the gates are all assumed to be ideal (noiseless), for the simulation results depicted in Fig. 15, we use noisy gates. Specifically, we assume that the gate-noise in the cnot and single-qubit gates can be modeled as depolarizing noise with noise parameter \(\mu\). Figure 15 depicts three fidelity maps that allow us to compare the fidelities of the unencoded qubits initialised as \(\ket{\psi}=\cos\left(\frac{\theta}{2}\right)\ket{0}+\sin\left(\frac{\theta}{2}\right)\ket{1}\) with the fidelities of the error-corrected qubits, for different values of the damping parameter \(\gamma\). The values of \(\mu\) for the two different Petz circuits, one based on isometric extension and the other based on two-outcome POVMs, have been chosen based on the corresponding threshold values identified in Appendix C. ### Resource Requirements for Petz recovery circuits With noisy intermediate-scale quantum devices in mind, the objective of any error correcting protocol is to try and optimize the computing resources used.
Here, we compare and contrast the resource requirements for the three different recovery circuits outlined in this article. Specifically, we quantify the gate complexity and the number of ancillary qubits required to implement the code-specific Petz map for an \(n\)-qubit code subject to a noise channel with \(N=4\) Kraus operators. We first quantify the resource requirements for the Petz circuit implementation in Sec. III.1 based on the isometric extension. It is well known that an \(n\)-qubit two-level unitary requires \(\mathcal{O}(n^{2})\) single-qubit gates and cnot gates [8]. From Lemma 2, we note that to implement the \(n\)-qubit Petz recovery circuit specific to an \(n\)-qubit code and a noise channel with \(N\) Kraus operators via the isometric extension, we require \(4^{2n}\) two-level unitaries. Therefore, to implement an \(n\)-qubit Petz recovery channel, we need at most \(\mathcal{O}(n^{2}4^{2n})\) single-qubit gates and cnot gates. In addition to this, we also require \(n\log_{2}N\) ancillary qubits to implement an \(n\)-qubit Petz recovery channel. The POVM-based implementation discussed in Sec. IV requires only 2 ancillary qubits for any CPTP map of rank greater than two. For a rank-2 map, however, it is easy to see that the second ancillary qubit can be dropped, and the circuit can be constructed using just one ancillary qubit. We note that the number of the \(U_{\mathcal{M}_{i}}\) matrices is \(N^{n}\), and for each \((n+1)\)-qubit \(U_{\mathcal{M}_{i}}\), we need two \(n\)-qubit unitaries \(U_{i}\) and \(\tilde{U}_{i}\). For implementing each \(n\)-qubit unitary, we need \(\mathcal{O}(n^{2}4^{n})\) single-qubit and two-qubit cnot gates [8]. Therefore, for any \((n+1)\)-qubit unitary, we need \(\mathcal{O}((n+1)^{2}4^{(n+1)})\) cnot and single-qubit gates. Therefore the total number of gates needed for the implementation of the POVM-based method scales as \(\mathcal{O}(N^{n}4^{n}(n^{2}+4(n+1)^{2}))\), where \(N\) is the rank of the noise channel. Finally, for the block encoding-based approach in Sec. V, note that we need to perform the singular value decomposition for \(\mathcal{E}(P)^{-1/2}\). Therefore, for an \(n\)-qubit \(\mathcal{E}(P)^{-1/2}\), we need to implement the \(n\)-qubit unitaries \(V_{1}\) and \(V_{2}\) (Appendix A). Therefore, the total number of single-qubit and two-qubit cnot gates scales as \(\mathcal{O}(n^{2}4^{n})\). To realize the unitary \(W\), we need to implement the block encoding for \(\mathcal{E}(P)_{D}^{-1/2}\), which needs \(6n\log_{2}N\) cnot gates. The implementation of the isometric extension for the adjoint channel needs \(\mathcal{O}(n^{2}4^{n}N^{n})\) cnot and single-qubit gates. Therefore the total number of gates scales as \(\mathcal{O}(6n\log_{2}N+n^{2}4^{n}(1+N^{n}))\). To construct the block encoding for any \(n\)-qubit operator, we need just one extra ancilla. However, to implement the unitary \(W\) in Eq. (36) along with the isometric extension of the channel \(\mathcal{P}\), we need \(n\log_{2}N+2\) ancillary qubits. We summarize the resource requirements quantified here in Table 1. Restricting our attention to qubit codes only, we present a worst-case resource estimation for the different methods, assuming that the noise channel has the maximum possible number of Kraus operators, namely, \(N=4\).
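As a rough worked comparison, the leading-order gate-count expressions quoted above can be tabulated for small code sizes; the snippet below is our own bookkeeping, dropping the constants hidden in the \(\mathcal{O}(.)\) notation, evaluated for the worst case \(N=4\).

```python
import math

def leading_gate_counts(n, N=4):
    """Leading-order single-qubit + CNOT gate counts as quoted above (O(.) constants dropped)."""
    return {
        "isometric extension": n**2 * 4**(2 * n),
        "POVM-based":          N**n * 4**n * (n**2 + 4 * (n + 1)**2),
        "block encoding":      6 * n * math.log2(N) + n**2 * 4**n * (1 + N**n),
    }

for n in (2, 4):       # n = 4 matches the code used in Sec. VI
    print(n, leading_gate_counts(n))
```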
## VII Concluding remarks In this work, we describe three different approaches to obtain circuit implementations of a noise-adapted recovery map, namely the Petz map. Since we focus on the code-specific Petz map here, the resulting recovery circuits can be tailored both to the underlying quantum code as well as the noise channel that the individual qubits are subject to. Apart from the recovery circuits, we also present circuits that can directly estimate the fidelity between the original encoded state and the recovered state. Of the three approaches discussed here, the circuit construction based on the isometric extension proves to be rather advantageous, both in terms of resource efficiency and faithfulness of implementation. Moving away from the standard approach to constructing the isometric extension, we observe here that it suffices to partially decompose the unitary on the extended space, leading to a reduced gate complexity. Furthermore, this approach leads to an _exact_ implementation of the Petz recovery map, in contrast to the QSVT-based approach [16], which is only approximate.

Figure 15: Fidelity maps for (a) an unencoded qubit, (b) error corrected qubit using the 4-qubit code and Petz recovery circuit implemented via the POVM method and (c) error corrected qubit using the 4-qubit code and Petz recovery circuit implemented via isometric extension. In all three cases, the qubits undergo amplitude-damping noise, simulated using a sequence of identity gates as explained in Appendix B. For the implementations in (b) and (c), the cnot and single-qubit gates are assumed to be noisy with noise parameters \(\mu=10^{-6}\) and \(\mu=10^{-5}\) respectively.

In contrast, the POVM-based approach that we present here is indeed an approximate method to implement the Petz map. However, this approach requires the least number of ancillas, just one in the case of a noise channel with two Kraus operators. We also obtain a third circuit implementation that uses both the block encoding technique and the isometric extension. This implementation is comparable to both the isometric extension and the recently obtained QSVT-based circuit implementation [16] in terms of its resource requirements. Like the QSVT-based approach, it is a _probabilistic_ implementation, albeit with a higher success probability than that obtained via QSVT. Finally, we simulate our circuits on noisy quantum processors to benchmark their performance under both ideal and noisy conditions. For the specific case of amplitude-damping noise, we hence obtain a threshold for cnot gate-noise below which our recovery circuits can indeed be effective in protecting against the native damping noise of the qubits. The techniques outlined in our work are indeed quite general, and can be used to obtain resource efficient circuits for any CPTP map. The fidelity estimation circuits described here might also be of independent interest and find uses beyond the context of QEC. Going forward, it will be interesting to study if the noise-adapted recovery circuits described here can be optimised to take into account gate-noise as well as hardware constraints such as qubit connectivity. ###### Acknowledgements. We acknowledge the use of IBM Quantum for this work. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum team. This research was supported in part by a grant from the Mphasis F1 Foundation to the Centre for Quantum Information, Communication, and Computing (CQuICC).
We also acknowledge financial support from the Department of Science and Technology, Government of India, under Grant No. DST/ICPS/QuST/Theme-3/2019/Q59.
2305.18909
Password-Based Authentication and The Experiences of End Users
Passwords are used majorly for end-user authentication in information and communication technology (ICT) systems due to its perceived ease of use. The use for end-user authentication extends through mobile, computers and network-based products and services. But with the attendant issues relating to password hacks, leakages, and theft largely due to weak, reuse and poor password habits of end-users, the call for passwordless authentication as alternative intensifies. All the same, there are missing knowledge of whether these password-based experiences are associated with societal economic status, educational qualification of citizens, their age and gender, technological advancements, and depth of penetration. In line with the above, understanding the experience of end-users in developing economy to ascertain their password-based experience has become of interest to the researchers. This paper aims at measuring the experience of staff and students in University communities within southeastern Nigeria on password-based authentication systems. These communities have population whose age brackets are majorly within the ages of 16 and 60 years; have people with requisite educational qualifications ranging from Diploma to Doctorate degrees and constitutes good number of ICT tools consumers. The survey had 291 respondents, and collected data about age, educational qualifications, and gender from these respondents. It also collected information about their password experience in social media network, online shopping, electronic health care services, and internet banking. Our analysis using SPSS and report by means of descriptive statistics, frequency distribution, and Chi-Square tests showed that account compromise in the geographical area is not common with the respondents reporting good experience with passwords usage.
Assumpta Ezugwu, Elochukwu Ukwandu, Celestine Ugwu, Modesta Ezema, Comfort Olebara, Juliana Ndunagu, Lizzy Ofusori, Uchenna Ome
2023-05-30T10:05:46Z
http://arxiv.org/abs/2305.18909v1
# Password-Based Authentication and The Experiences of End Users ###### Abstract Passwords are used majorly for end-user authentication in information and communication technology (ICT) systems due to its perceived ease of use. The use for end-user authentication extends through mobile, computers and network-based products and services. But with the attendant issues relating to password hacks, leakages, and theft largely due to weak, reuse and poor password habits of end-users, the call for passwordless authentication as alternative intensifies. All the same, there are missing knowledge of whether these password-based experiences are associated with societal economic status, educational qualification of citizens, their age and gender, technological advancements, and depth of penetration. In line with the above, understanding the experience of end-users in developing economy to ascertain their password-based experience has become of interest to the researchers. This paper aims at measuring the experience of staff and students in University communities within southeastern Nigeria on password-based authentication systems. These communities have population whose age brackets are majorly within the ages of 16 and 60 years; have people with requisite educational qualifications ranging from Diploma to Doctorate degrees and constitutes good number of ICT tools consumers. The survey had 291 respondents, and collected data about age, educational qualifications, and gender from these respondents. It also collected information about their password experience in social media network, online shopping, electronic health care services, and internet banking. Our analysis using SPSS and report by means of descriptive statistics, frequency distribution, and Chi-Square tests showed that account compromise in the geographical area is not common with the respondents reporting good experience with passwords usage. Furthermore, this experience is not in any way related to their age (under 60), and educational qualification. Our experiment did not measure the entropy of end-users' passwords, their password hygiene culture and so cannot relate this experience with the strengths of their passwords nor that of their password hygiene culture. The outcome and recommendations of this research will help inform policy and research direction towards password hygiene culture, management, and the potentials or otherwise of passwordless authentication systems in developing economies. **Keywords:** Password-based authentication, cyber-hygiene culture, End-User Experience, Cyber-attack, educational qualification, age, gender. ## 1 Introduction Good number of literatures exists on passwords such as that of Renaud, Otondo, & Warkentin, (2019) on effect of endowment on password strength; AlSabah, Oligeri, & Riley, (2018) on culture and password, and Furnell (2022) on assessing website password practices. Majority of these have shown that passwords, especially text-based are the most popular used authentication method for end-users of information system in computers and network-based products and services (Shay _et al._, 2010). However, due to unhygigenic practices such as choosing weak passwords and insecure management of passwords, they are highly vulnerable to exploitation by both internal and external threat actors. In the other hand, the attraction to the use of passwords as argued by Sharma _et al.,_ (2010) lies mostly in its simplicity, practicality, ease of use and low cost rather than the security. 
But are passwords simple to use, of low cost, easy to use and insecure? Authors of this paper wants to submit that the simplicity of passwords use could depend on so many factors such as age, health condition of the user, number of user's application that requires password protections as well as use of password utilities like password managers. Older generations, people living with dementia, end users with multiple online accounts will have challenges managing multiple passwords. Having multiple passwords can be complicated, prone to have issues related to remembering multiple passwords, temptation of reuse of single login credential on multiple accounts and so on. The security of assets secured using passwords largely depends on the ability of the user to cultivate good password habits such as regular change of passwords, non-reuse of login credentials on multiple systems, and use of strong passwords derived by using multiple characters with special symbols and numbers as well as the length of it. All the same, cultivating some of these habits have been made easier using password managers. Although several studies have explored consumers experiences with password-based authentication system in developed economies (Butler and Butler, 2015; Bilgian _et al._, 2016; Morrison _et al._, 2021) and the call and rollout of passwordless authentication use cases as alternative intensifies (Jakkal, 2021). For instance, in 2021, Microsoft posits that the development and roll-out of passwordless authentication system is the future of user access management in computer-based systems (Jakkal, 2021). This involves any method that helps to identify a user without the use of password. However, there is a paucity of research focusing on measuring password experience of end-users in developing economies such as Nigeria and relating same with the existing knowledge. This represents a gap in the literature and gives an opportunity for this study to address. Recent studies by Ugwu _et al._, (2022) in Nigeria suggest that age and educational qualification have no effect on personal cyber hygiene culture of information and communication technologies (ICTs) users. Further studies by Ugwu _et al._, (2023) in Nigeria on the relationship between cyber hygiene culture of Internet users with gender, employment status and academic discipline did not establish any significant relationship between these dependent and independent variables. These studies by Ugwu _et al.,_ (2022) and Ugwu _et al.,_ (2023) having been done in Nigeria provided more basis for further study - a study that will establish the likelihood or otherwise of a relationship between cyber hygiene culture and password use experience in developing economies. In line with these, this study focuses on finding: 1. What are the end-user experiences in password-based authentication in Southeastern Nigeria? 2. Does age have impact on end-user experience in password-based authentication of this populace? 3. Does gender have impact on end-user experience in password-based authentication of this populace? 4. Does educational level have impact on end-user experience in password-based authentication of this populace? To answer these questions, descriptive statistics will be deployed, and hypotheses will be raised and tested using appropriate statistical tests. 
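As an illustration of the kind of test used to address these questions, a chi-square test of independence between an end-user attribute and reported password experience can be run on a contingency table. The snippet below is a generic Python sketch with made-up counts, not the survey data analysed in this study (the actual analysis was carried out in SPSS).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = age bands, columns = "account ever compromised?" (yes / no).
# These counts are illustrative only and are not the survey responses reported in this paper.
table = np.array([[12, 88],    # ages 16-25
                  [ 9, 81],    # ages 26-40
                  [ 7, 94]])   # ages 41-60
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# A p-value above the chosen significance level (e.g. 0.05) would indicate no significant
# association between age band and reported compromise experience.
```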
The experience of end-users of ICT in this instance is the measure of whether a user has had an incident of unauthorised access to his/her ICT tool, as well as the overall experience, such as difficulties managing account passwords or accessing ICT tools using passwords. The results, like those of Ugwu _et al._ (2022) and Ugwu _et al._ (2023), did not establish any significant relationship between educational qualification or gender and the password usage experience of respondents under the age of 60 in Southeastern Nigeria. To the best of our knowledge, this is the first research effort towards measuring the experience of end-users with password-based authentication in developing economies alongside relevant factors such as age, educational qualification, gender, and cyber hygiene culture, and aligning it with the experiences of end-users in developed economies.

### End-User Experiences With Password-Based Authentication Systems

The daily use of computers and the Internet has changed the way people conduct their lives and businesses (Butler and Butler, 2015; Shaikh and Karjaluoto, 2015). People can now work on their computer systems from anywhere they can access the Internet (Brynjolfsson _et al._, 2020). These systems are used for a variety of activities such as entertainment (Butler and Butler, 2015), shopping (Vasic _et al._, 2019), banking (Singh and Srivastava, 2020), healthcare services (Liu _et al._, 2019), and maintaining communication on social media (Huerta-Alvarez _et al._, 2020). Both young and older adults acknowledge the benefits and importance of using technology to accomplish everyday tasks (Morrison _et al._, 2021). One of the significant benefits is convenience, as it allows people to live and work independently, and many are keen to continue using technology well into older age. However, there have been concerns about the escalation of cybercrime incidents, with consumers prioritising a safe Internet experience: 55% cite security as the essential aspect of their online experience (BusinessWire, 2021). Similarly, a consumer experience report by Beyond Identity (2021) reveals that 46% of United States (US) consumers failed to complete transactions due to authentication failure. Lance (2021) describes the circumstances in which people forget their passwords: 67% of the respondents said it happens when they are trying to complete an online banking transaction, 56% when trying to get travel information, 55% when attempting to buy something, and 43% when trying to access a document.

For decades, user identification and authentication, especially password systems, have been regarded as the foundation of frontline defense against intruders within a computer security environment (Conklin _et al._, 2004: 1; Guven, Boyaci, & Aydin, 2022). End-user identification and authentication in ICT tools through a unique username and password have become accepted, understood, and even expected as a way to ensure a secure environment (Butler and Butler, 2015). Yang (2019) and Guven, Boyaci, & Aydin (2022) affirm that user identification and authentication are essential to ensure computer security and control access while maintaining users' integrity and confidentiality.
Although other user authentication systems, such as one-time Personal Identification Numbers (PINs) (based on device ownership), biometrics (based on physical characteristics), and multi-factor and two-factor authentication, are evolving, password-based authentication remains one of the most cost-effective and efficient methods in use (Butler and Butler, 2015; Yang, 2019; Tam _et al._, 2010). However, while the authentication of users is critical to controlling access, the authentication process remains problematic (Chiasson and Biddle, 2007; Butler and Butler, 2015). In addition, Butler and Butler (2015) reveal that because users now have unlimited online options, they are impatient with time-consuming and inconvenient login experiences. These experiences include being prompted to reset a password, being asked to create an account with a long form, and managing passwords (measures related to the safekeeping of passwords).

### Password Experiences of Social Media Users

In recent years, social media have gained substantial popularity as they allow individuals to connect, share, and network all over the world (Huerta-Alvarez _et al._, 2020). The emergence of Twitter in 2006, YouTube in 2005, Facebook in 2004, and LinkedIn in 2003 has been unprecedented, making them a global phenomenon (Rozaimee _et al._, 2013; Tess, 2013). Facebook appears to be the largest and most populous social networking site for all ages, with over one billion users worldwide (Zeevi, 2013). According to Dunphy _et al._ (2015: 142), approximately 18% of all online American adults are Twitter users, compared to 71% who use Facebook, the largest online social platform. While Facebook is popular across a range of demographic groups, 31% of Twitter users are drawn from the 18-29 age range (19% of Twitter users are in the 30-49 age range, 9% are 50-64 years, and 5% are 65+ years), with a particular presence of urban dwellers, African-Americans, and Hispanics. Moreover, Twitter users are split equally by gender, and 46% of all Twitter users visit the site daily (29% multiple times per day). A recent survey by Vogels, Gelles-Watnick, and Massarat (2022) of the Pew Research Center on American teens between the ages of 13 and 17 shows that TikTok is now the most popular social media network, while the use of Facebook by this age group has fallen sharply (Figure 1).

Figure 1: American Teens Social Media and Technology Survey (Vogels _et al._, 2022)

As social networking sites increasingly become an integral part of our everyday lives, it is essential to understand individual experiences with password-based authentication on social media platforms. For example, Majid and Kouser (2019) report that many users forget to log out of their social media accounts. Likewise, Stobert and Biddle (2014) reveal that several users were more careful about logging out on their computers. When another person starts using the same mobile device or laptop, they gain access to the account information and can change the password, or even post items and communicate with the owner's friends as if the messages came from the owner. Another user experience shared by Zintle (2020) reveals that, in South Africa, a parliamentary meeting hosted via the online streaming service Zoom was hacked and pornographic images were broadcast. This can only happen when there is a security breach, which can also occur if the user's password falls into the wrong hands (Isobe and Ito, 2021).
### Password Experiences of Online Shoppers

The online shopping experience continues to evolve as consumers increasingly rely on their social connections (Bilgihan _et al._, 2016). Online customers' experience thus includes every point of contact (apps, social media, website) that a customer chooses to use to interact and transact with the firm (Bilgihan _et al._, 2016). Since online customers generally authenticate with passwords when transacting, several scholars have reported consumers' experiences with password authentication (Meter and Bauman, 2015; Fagan _et al._, 2017; Beyond Identity, 2021). For example, Beyond Identity (2021) reveals that 46% of US consumers failed to complete transactions due to authentication failure. Also, 18.75% of returning users abandon their cart after forgetting their password and having issues with password reset emails. Likewise, 36% of consumers said they try to guess a forgotten password twice before resetting it, while 28% said they guess once and 22% three times (Beyond Identity, 2021).

### Password Experience of Electronic Healthcare Clients

Some end-users have embraced virtual or electronic healthcare (eHealth) services as a preferable alternative to traditional doctor appointments (Han _et al._, 2006). They connect with their doctors via phone calls and video chat platforms such as Zoom, Skype, and WhatsApp for consultations without necessarily having physical contact with their doctors (Chatterjee, 2022). However, the underlying issue is the security requirements relating to eHealth user authentication and authorisation. End-user authentication is very important to ensure confidentiality, which is essential in the healthcare domain (Chatterjee, 2022: 285). Constantinides _et al._ (2020) reveal that, in an emergency, end-users do not have time to reset passwords they have forgotten. Similarly, a health worker does not have time to type several codes when accessing electronic medical records (EMR). Likewise, end-users are impatient about resetting passwords when they forget them (Constantinides _et al._, 2020).

### Password Experience of Electronic Bank Customers

According to Fagan _et al._ (2017), usernames and passwords are still very much ingrained in the fabric of online banking and commerce as the primary means of initial authentication despite being high-risk. Krol _et al._ (2015) reveal that users expressed much frustration about providing additional information when using online banking; in particular, they did not like using one-time passwords (OTP) with hardware token devices. Likewise, Singh _et al._ (2007) describe users' experiences of how an entire village would delegate bank credentials to a single person who would conduct online transactions on everybody's behalf. In addition, Dunphy _et al._ (2014: 2) report on how older adults delegate the withdrawal of their money from an automated teller machine to their helpers, thereby disclosing their personal identification numbers (PINs) to a third party.

### Research Model

Several Information System (IS) models have been used to study users' experiences in adopting new technologies. A typical example is the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh _et al._, 2003). It holds that four constructs, namely performance expectancy, effort expectancy, social influence, and facilitating conditions, directly influence user behaviour.
It has moderating factors of gender, age, experience, and voluntariness of use. However, some constructs of the UTAUT, such as social influence and performance expectancy, are not relevant here because this study does not intend to measure performance expectancy but rather to explore the experiences of online end-users in using a password-based authentication system. Nevertheless, this study adopts the moderating factors of age, educational level, and gender as external variables in the conceptual framework. Thus, this study aims to measure online end-users' experiences of password-based authentication system usage. The research attempts to test the following hypotheses:

**H1**: Consumers' experiences have a positive influence on the usage of password-based authentication systems.

**H0**: Consumers' experiences have a negative influence on the usage of password-based authentication systems.

#### 1.6.1 External Variables

The external variables contribute to consumers' experiences while transacting online using a password-based authentication system, as shown in Figure 2. These variables are age group, educational level, and gender. Age group is one of the criteria used to include or exclude certain audiences (Natarajan _et al._, 2018). In this study, the age group helps determine the experiences of online end-users when transacting via a password-based authentication system, while educational level helps to measure end-user experiences when transacting via a password-based authentication system. Likewise, gender helps to understand the influence of end-users' gender differences on their experiences while transacting through a password-based authentication system. For this study, the chosen external variables represent unique characteristics of the sample that could have an impact on online end-users using a password-based authentication system (Venkatesh _et al._, 2003).

Figure 2: Variables contributing to online consumers' experiences while using a password-based authentication system [Source: Primary]

#### 1.6.2 Online End-User Behaviour

The external variables influence the end-user behaviour construct. The experiences that consumers have as a result of the external variables could give a user a more positive or negative attitude towards using the password-based authentication system (Venkatesh _et al._, 2012). According to Brock and Khan (2017), behavioural intention to use is an essential step towards the actual usage of any new system. In this study, online end-users' behaviour will be used to evaluate online consumers' experiences while using a password-based authentication system.

#### 1.6.3 Actual Use

Actual use of a system or technology is determined by users' behaviour (Venkatesh _et al._, 2012). In this study, actual use indicates the end-user's usage of a password-based authentication system while transacting online, and end-user experiences will determine the predictions of that usage. According to Butler and Butler (2015), many computer security password breaches result from poor user security behaviour. The password creation and management practices that online consumers apply have a direct effect on the level of computer security and are often targeted in attacks.

**2.0 Research Methodology**

This study was conducted in some Universities within the southeastern region of the country, with ethical approval obtained from one of the concerned authorities.
The study used an online-administered survey: the questionnaire was designed in Google Forms and distributed to respondents through WhatsApp and email. Data from respondents were coded into numerical data in a spreadsheet. Processing and analysis of the captured data was carried out using SPSS (Statistical Package for the Social Sciences) version 20.0 and reported by means of descriptive statistics, frequency distributions, and Chi-Square tests. In all, 291 responses were used after data cleaning, in accordance with the required sample size for the respondent population (Krejcie & Morgan, 1970).

**2.1 Instruments and Methods**

To verify the hypotheses formulated, a quantitative research methodology was adopted, which according to Adegbuyi _et al._ (2015) is the most suitable for an exploratory investigation seeking a better understanding of a particular problem under scrutiny. The variables needed to measure the main constructs of the conceptual framework were identified through an extensive review of the literature. A cross-sectional research design, in conjunction with a well-structured questionnaire with closed-ended questions as used by Bowen _et al._ (2010), was adopted to gather information from the respondents. The questionnaire comprises two sections. Section A focuses on the demographics of the respondents, such as age, gender, educational qualification, marital status, and employment information, while Section B examines the online end-user experience with password-based authentication systems. The questions in Section B were designed so that data could be collected on the respondents' experiences with password-based authentication systems.

**2.2 Sample Size and Sampling Technique**

A population of 1200 possible respondents was drawn from Universities in Southeastern Nigeria for the purpose of this study, with an expected sample size of 291 as stipulated by Krejcie & Morgan (1970). Since collecting data from the entire population was not feasible, following Sekaran & Bougies (2010), a sample drawn relative to the entire population was considered able to produce a reliable result while removing unnecessary effort. A systematic sampling approach was adopted to accommodate heterogeneous population characteristics (Leedy & Ormrod, 2005) and to allow generalisation to the population (Bryman & Bello, 2003). The systematic sampling technique was considered suitable by previous studies such as Albuelu & Ogbouma (2013), Asgharnezhad, Akbarlou & Karlcaj (2013), and Aliyu (2013). At the end of the survey, a sample of 291 responses was used for the study after data cleaning, which matched the size suggested by Krejcie & Morgan (1970).

### Data Collection Method

The study used an online-administered survey approach. The questionnaire was designed in Google Forms and distributed to respondents through WhatsApp and email. The primary target respondents were students and employees of the Universities, since they were perceived to be the most informed users of online password-based authentication systems. Prior to the distribution of the questionnaire, ethical approval was obtained for the study from the approving authority. The responses to the questionnaire were collected in a spreadsheet designed for this purpose.
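The expected sample size of 291 quoted in Section 2.2 follows from the Krejcie & Morgan (1970) sample-size formula. The sketch below is illustrative only; it assumes the standard parameters behind their published table (a chi-square value of 3.841 for one degree of freedom at the 0.05 level, a population proportion of 0.5, and a 5% margin of error) and reproduces the recommended sample for a population of 1200.

```python
# Krejcie & Morgan (1970) sample-size formula (illustrative sketch).
# Assumed table parameters: chi-square = 3.841 (df = 1, alpha = 0.05),
# population proportion p = 0.5, margin of error d = 0.05.
def krejcie_morgan(population: int,
                   chi_sq: float = 3.841,
                   p: float = 0.5,
                   d: float = 0.05) -> int:
    """Recommended sample size for a finite population, rounded as in the table."""
    numerator = chi_sq * population * p * (1 - p)
    denominator = d ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return round(numerator / denominator)

if __name__ == "__main__":
    # For the 1200 staff and students targeted here this yields 291,
    # the sample size adopted in the study.
    print(krejcie_morgan(1200))
```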
A total of 299 responses were received out of the target population of 1200 and were subjected to data cleaning and normalisation. The extracted and normalised data were subsequently subjected to analysis. ### Analysis, Results and Discussion ### Data Analysis Data from respondents were coded into numerical data in a spreadsheet. Processing and analysis of captured data was carried out using SPSS (Statistical Package for Social Sciences) version 20.0 and reported by means of descriptive statistics, frequency distribution, and Chi-Square tests. ### Results The demographic data is presented in Table 1 below. The table shows that 157 male respondents (54%) and 134 female respondents (46%) participated in the survey. Ages of the participants were divided into five categories. Participants that are 20 years and below has a distribution of 86 out of 291 participants, which represents 29.6% of all participants. The second category captured participants that are between 21-30. This group has a distribution of 148(50.9%) of 291 respondents. The third category captured respondents between 31-40 years and had a distribution of 28 representing 9.6% of all participants. In category 4, respondents between 41-50 were captured with a distribution of 21, representing 7.2% of entire respondents. The last category captured respondents within the ages of 51-60. A distribution of 8 was observed, representing 2.7% of the entire participants. The third demographic data collected is the highest education qualification obtained by the respondents. The first category captured are respondents with secondary school certificate. 187 respondents out of the 291, representing 64.3% of participants belong to this category. The second category had OND (Ordinary National Diploma) certificate owners covered. This category had a distribution of 16 respondents, which is 5.5% of the entire 291 respondents. Other categories are HND (Higher National Diploma and B.Sc. (Bachelor of Science) with distributions of 42 (14.4%) and those with Ph.D. (Doctor of Philosophy) had a distribution of 46 (15.8%) respectively. Demographic data on the employment status of respondents showed that students/unemployed respondents had a distribution of 243 out of 291 participants, giving 83.5% of all participants. Contract/temporary staff had a distribution of 3 (1.0%), while permanent staff and undefined categories had a distribution of 39 (13.4%) and 6 (2.1%) respectively. \begin{table} \begin{tabular}{|p{113.8pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **Demographic Data** & **Frequency** & **Percent** & **Cumulative Percent** \\ \hline **Gender** & & & \\ \hline Male & 157 & 54.0 & 54.0 \\ \hline Female & 134 & 46.0 & 100.0 \\ \hline **Age** & & & \\ \hline \(<\)=20 & 86 & 29.6 & 29.6 \\ \hline 21-30 & 148 & 50.9 & 80.4 \\ \hline 31-40 & 28 & 9.6 & 90.0 \\ \hline \end{tabular} \end{table} Table 1: Demographic Information **Assumptions**: for a 2 X 2 table, not more than 10% of cells have expected counts of less than 5, while for tables greater than 2 X 2, not more than 20% of cells should have expected counts of less than 5. If the assumption is violated, the readings are taken from the Likelihood Ratio row instead of the Pearson Chi-Square row. **Research Question 1** What are the end-user experiences in password-based authentication? To answer this question, descriptive analysis of respondents' self-reports on variables in this domain was carried out. Tables 2 and 3 below show the results of the analysis. 
From Table 2 above, 93 respondents, giving a distribution of 32% of total respondents, have experienced unauthorised access to their accounts, while a higher percentage of the respondents 153 (52.6%) have not. Also, those who did not notice if such compromise occurred and those who cannot remember its occurrence are 25 (8.6%) and 20(6.9%) respectively. To further ascertain the respondents' password-based authentication experiences, the respondents were presented with the question: **How has your experience been using Password authentication?** Table 3 below gives the self-report frequency: \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & & **Frequency** & **Percent** & **Cumulative Percent** \\ \hline Valid & Yes & 93 & 32.0 & 32.0 \\ \hline & Not At All & 153 & 52.6 & 84.5 \\ \hline & Don’t Know & 25 & 8.6 & 93.1 \\ \hline & Can’t Remember & 20 & 6.9 & 100.0 \\ \hline & **Total** & **291** & **100.0** & \\ \hline \end{tabular} \end{table} Table 2: Frequency Distribution of Respondents (Experienced unauthorized access to account?) Only 3 (1%) of the 291 respondents reported having a poor experience with Password authentication while 126 (43.3%) reported excellent Password authentication experience, 139 (47.8%) and 23 (7.9%) reported good and average experiences respectively. This agrees with the report from Table 2, where majority of the respondents reported not having had their accounts compromised, and either not being aware of such compromise or cannot remember its occurrence. **Research Question 2** Does age have impact on end-user experience in Password-based authentication? To answer this question, a hypothesis was raised and tested. Null Hypothesis(**H0**): Age of Internet users have no impact on their Password-based authentication experiences. **Hypothesis Testing** Cross tabulation of the independent variable (age) and dependent variable (Password-based authentication experience) showed observed and expected counts recorded on each category's option item and assumption check following Chi-Square tests is used to select that appropriate values required to ascertain the association strength between the variables and if the independent impacts on the dependent. \begin{table} \begin{tabular}{l l|r|r|r|r|r|} \hline Age & \multicolumn{4}{c|}{Has someone logged into your account} & Total \\ \multicolumn{4}{c|}{} & \multicolumn{4}{c|}{without your permission?} & \\ \cline{3-6} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & Yes & Not At & Don’t & Can’t & \\ & & & All & Know & Remember & \\ \hline \(<\)=20 & Count & 31 & 40 & 9 & 6 & 86 \\ & Expected Count & 27.5 & 45.2 & 7.4 & 5.9 & 86.0 \\ & \% Within Age & 36.0\% & 46.5\% & 10.5\% & 7.0\% & 100.0\% \\ \hline 21-30 & Count & 47 & 82 & 11 & 8 & 148 \\ & Expected Count & 47.3 & 77.8 & 12.7 & 10.2 & 148.0 \\ & \% Within Age & 31.8\% & 55.4\% & 7.4\% & 5.4\% & 100.0\% \\ \hline 31-40 & Count & 8 & 16 & 1 & 3 & 28 \\ & Expected Count & 8.9 & 14.7 & 2.4 & 1.9 & 28.0 \\ & \% Within Age & 28.6\% & 57.1\% & 3.6\% & 10.7\% & 100.0\% \\ \hline \end{tabular} \end{table} Table 4: Age * End-user experience in Password-based authentication (Has someone ever logged into your account without your permission?) 
\begin{table} \begin{tabular}{|l l|l|l|l|} \hline & & **Frequency** & **Percent** & **Cumulative Percent** \\ \hline Valid & Excellent & 126 & 43.3 & 43.3 \\ \hline & Good & 139 & 47.8 & 91.1 \\ \hline & Average & 23 & 7.9 & 99.0 \\ \hline & Poor & 3 & 1.0 & 100.0 \\ \hline & **Total** & **291** & **100.0** & \\ \hline \end{tabular} \end{table} Table 3: Frequency Distribution of Respondents End-User Experience (Rating) The cross-tabulation result shows the disparity between the observed counts and the expected counts. The expected counts are counts that would be observed if no association exists between the dependent and independent variables. The greater the difference between these counts, the higher the impact independent variable has on the dependent variable. In age category \(<\)=20, count of 31(36.0%) was observed for respondents who have experienced unauthorized access to their accounts whereas the expected count was 27.5(approximately 28). 40(46.5%) respondents were observed as having not experienced unauthorized access where the expected count was 45.2(approximately 45). Option items "Don't know" and "Can't remember" had observed counts of 9(10.5%) and 6(7.0%) respectively, against expected counts of 7.4 (approximately7) and 5.9 (approximately 6). The second age category captured were respondents between ages 21-30. In this category, observed counts of 47(31.8%) were observed for respondents who have experienced unauthorised access to their accounts against expected count of 47.3(approximately 47). 82 (55.4%) counts were observed instead of an expected count of 77.8 (approximately 78) for respondents who have not experienced unauthorised access to their accounts, while 11(7.4%) and 8 (5.4%) counts were observed instead of expected 12.7(approximately 13) and 10.2(approximately 10) respectively, for respondents who either have no knowledge of unauthorised access to their accounts or cannot remember its occurrence. The third age category, 31-40 had observed counts of 8(28.6%) for unauthorised access experience, with expected count of 8.9(approximately 9), 16(57.1%) observed counts for no unauthorised access experience, with an expected count of 14.7(approximately 15), 1(3.6%) and 3(10.7%) observed counts and of 2.4(approximately 1) and 1.9 (approximately 2) expected counts respectively were recorded for respondents who do not know of any attempt to access their accounts without permission or cannot remember its occurrence. Age category four captured respondents between the ages of 41-50. Here, 4(19.0%) respondents have had their accounts accessed without their permission while an expected count of 6.7 (approximately 7) would be recorded when no association exists between age and end-user experience. 12(57.1%) observed counts and expected 11 counts for respondents who have not had their accounts accessed by unauthorised persons. Observed counts for respondents who either do not know of unauthorised access to their accounts or cannot remember its occurrence are 3(14.3%) and 2 (9.5%), with 1.8 (approximately 2) and 1.4(approximately 1) as expected counts in these option items respectively. The last age category captured respondents between ages 51 to 60. 
A distribution of 3(37.5%) observed to 2.6(approximately 3) expected counts for Yes to unauthorised account access, 3 (37.5%) observed to 4.2 (approximately 4) expected counts for no experience of unauthorised access, 1 (12.5%) observed to 0.7(approximately 1) expected count for respondents who do not know of the occurrence of unauthorised access to their accounts and 1(12.5%) observed count to 0.5 (approximately 1) expected count for respondents who cannot remember having their accounts accessed by unauthorised persons. **Assumption:** The table is greater than 2 x 2, and the assumption is that not more than 20% of expected count is greater less than 5. This assumption applies to Table 4, which is a 4 x 5 table (4 categories by 5 categories). For each of the variables, a chi-square test of independence was used to test for a significant relationship between "Age" and "End-user experience in Password-authentication (Has someone ever logged into your account without your permission?)". Chi-Square test result is displayed on Table 5 below. From Table 5, the result shows that 40% (8 cells) have expected counts of less than 5, hence violating the assumption that not more than 20% of cells should have expected counts of less than 5. When the assumption is violated, Pearson Chi-Square row values are ignored, and the Likelihood Ratio row values are used to determine the statistical significance and impacts level of the independent variable on the dependent variable. With regards to the difference between the observed counts and expected counts, it is obvious that an association exists between the independent variable "Age" and the dependent variable "End-user Password-authentication experience (Has someone logged into your account without your permission?)". The likelihood Ratio reading gives: **Statistical Value**: 7.227; **Degree of Freedom:** 12; **P-Value**: 0.842, represented on the table as Asymptotic 2-sided. The \(p\)-value reflects the degree of data compatibility with the null hypothesis. Also, 0.05 (5%, 1 of 20) has been conventionally accepted as the threshold to differentiate significant from non-significant results (Di Leo and Sardanelli, 2020). \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline & Value & Df & Asymp. Sig(2-sided) & Exact Sig. (2 sided) & Exact Sig.(1-sided) \\ \hline Pearson Chi-Square & 7.080 a & 12 & 0.852 & b & \\ \hline Likelihood Ratio & 7.227 & 12 & 0.842 & & \\ \hline Fisher’s Exact & 8.751 & & & 0.683 & b \\ Test & & & & & \\ \hline Linear-by-Linear & 1.369 & 1 & 0.242 & & \\ Association & & & & & \\ \hline N of Valid Cases & 291 & & & & \\ \hline \end{tabular} \end{table} Table 5: Chi-Square Tests In this result (Table 5), the reported p-value is 0.842, which is much larger than the significance level of 0.05. This means the result of tests are not statistically significant at 0.05 level. In other words, we cannot reject the null hypothesis (**H0**). We therefore accept the Null Hypothesis (**H0**), which states that the age of respondents has no impact on their end-user Password authentication experience (experienced unauthorised access to account). It is important to note that when conducting a hypothesis test, the null hypothesis is typically rejected if the p-value is less than the significance level. The significance level is the probability of rejecting the null hypothesis when it is true. The most used significance level is 0.05, which means that there is 5% chance of rejecting the null hypothesis when it is true. 
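The same test can be reproduced outside SPSS from the published counts. The sketch below is a minimal illustration using SciPy, assuming the observed counts transcribed from Table 4 (with the 41-50 and 51-60 rows taken from the prose above); it should closely reproduce the Pearson Chi-Square (7.080), the Likelihood Ratio (7.227), and the share of cells with expected counts below 5 (8 of 20, i.e. 40%) reported in Table 5.

```python
# Chi-square test of independence for Age x "account accessed without permission".
# Rows: <=20, 21-30, 31-40, 41-50, 51-60; columns: Yes, Not At All,
# Don't Know, Can't Remember (counts transcribed from Table 4).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [31, 40,  9,  6],   # <=20
    [47, 82, 11,  8],   # 21-30
    [ 8, 16,  1,  3],   # 31-40
    [ 4, 12,  3,  2],   # 41-50
    [ 3,  3,  1,  1],   # 51-60
])

# Pearson chi-square; the expected counts under independence are returned too.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"Pearson chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# Likelihood-ratio (G) statistic, the row used when the expected-count
# assumption is violated.
g, p_g, _, _ = chi2_contingency(observed, lambda_="log-likelihood")
print(f"Likelihood ratio = {g:.3f}, p = {p_g:.3f}")

# Assumption check: share of cells with expected counts below 5.
share_low = (expected < 5).mean()
print(f"Cells with expected count < 5: {share_low:.0%}")
```

With a 5 x 4 table the degrees of freedom are (5 - 1)(4 - 1) = 12, matching the value reported above.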
Again, another test was carried out using the hypothesis with another variable that captures user experience, scaled by option items that allow Internet users to rate their experience with Password authentication: the result is displayed on table 6 below: \begin{table} \begin{tabular}{l l|r|r|r|r|r|} \hline Age & & \multicolumn{3}{c|}{How has your experience been using} & Total \\ \multicolumn{5}{c|}{Password authentication?} \\ \cline{3-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Excellent} & Good & Average & Poor & \\ \hline \(<\)=20 & Count & 32 & 47 & 7 & 0 & 86 \\ & Expected Count & 37.2 & 41.1 & 6.8 & 0.9 & 86.0 \\ & \% Within Age & 37.2\% & 54.7\% & 8.1\% & 0.0\% & 100.0\% \\ \hline 21-30 & Count & 65 & 69 & 13 & 1 & 148 \\ & Expected Count & 64.1 & 70.7 & 11.7 & 1.5 & 148.0 \\ & \% Within Age & 43.9\% & 46.6\% & 8.8\% & 0.7\% & 100.0\% \\ \hline 31-40 & Count & 18 & 8 & 2 & 0 & 28 \\ & Expected Count & 12.1 & 13.4 & 2.2 & 0.3 & 28.0 \\ & \% Within Age & 64.3\% & 28.6\% & 7.1\% & 0.0\% & 100.0\% \\ \hline 41-50 & Count & 8 & 11 & 1 & 1 & 21 \\ & Expected Count & 9.1 & 10.0 & 1.7 & 0.2 & 21.0 \\ & \% Within Age & 38.1\% & 52.4\% & 4.8\% & 4.8\% & 100.0\% \\ \hline 51-60 & Count & 3 & 4 & 0 & 1 & 8 \\ & Expected Count & 3.5 & 3.8 & 0.6 & 0.1 & 8.0 \\ & \% Within Age & 37.5\% & 50.0\% & 0.0\% & 12.5\% & 100.0\% \\ \hline Total & Count & 126 & 139 & 23 & 3 & 291 \\ & Expected Count & 126.0 & 139.0 & 23.0 & 3.0 & 291.0 \\ & \% Within Age & 43.3\% & 47.8\% & 7.9\% & 1.0\% & 100.0\% \\ \hline \end{tabular} \end{table} Table 6: Chi-Square tests for association between Age and End-user experience (How has your experience been using Password authentication?) Table 6 above shows the observed and expected counts resulting from the cross tabulation of Age of respondents (independent variable) and Password authentication experiences (dependent variable). The dependent variable is however captured using a question that allows respondents select their Password authentication experiences from given option items. Cross tabulation of the two participating variables give insight into the existence of association or otherwise between age of respondents and their Password authentication experiences. The distribution shows that for respondents in age category \(<\)=20, count of 32(37.2%) was observed for respondents who have excellent Password authentication experience whereas the expected count was 37.2(approximately 37). 47(54.7%) observed counts were recorded for respondents who reported having good Password authentication experiences where the expected count was 41.1(approximately 41). The other options (average and poor Password authentication experiences had observed counts of 7 (8.1%) and 0(0%) against expected counts of 6.8 (approximately 7) and 0.9 (approximately 1) respectively. The second age category captured respondents between ages 21-30. In this category, observed count of 65(43.9%) was recorded for respondents who have excellent Password authentication experiences against expected count of 64.1(approximately 64). 69(46.6%) observed counts were recorded for respondents with good Password authentication experiences, with the expected count being 70.7(approximately 71). 13(8.8%) and 1(0.7%) count were observed for respondents who have average and poor Password authentication experiences respectively. The expected counts for the average and poor experiences are 11.7(approximately12) and 1.5 (approximately 2) respectively. 
The third age category, 31-40 had observed counts of 18(64.3%) for respondents whose Password authentication experiences are excellent, with expected count of 12.1(approximately 12). 8(28.6%) respondents reported having good authentication experiences, with an expected count of 13.4, 2(7.1%) and 0(0.0%) observed counts with expected counts of 2.2(approximately 2) and 0.3 (approximately 0) for respondents who have average and poor Password authentication experiences respectively. Expected counts were recorded for respondents who have average and poor experiences respectively. Age category four captured respondents between the ages of 41-50. Here, 8(38.1%) observed counts were obtained for respondents who have excellent authentication experiences with an expected count of 9.1(approximately 9). Observed counts for respondents who have average and poor experiences are 1(4.8%) and 1(4.8%) respectively, with expected counts of 1.7(approximately 2) and 2 respectively. The last age category, 51-60 has observed count of 3(37.5%) for respondents who consider their Password authentication experience excellent, with expected counts of 3.5(approximately 4). 4(50.0%) observed counts were recorded in the good experience domain, with 3.8 (approximately 4) expected counts. No observed count was recorded for average Password authentication experience. However, 0.6(approximately 1) expected count correspond to this option item. 1(12.5%) observed count to 0.1 expected for respondents whose Password authentication experiences are poor. Following the disparity between observed and expected counts, the assumption was evaluated. The assumption for using Pearson Chi-Square reading is that not more than 20% of expected counts are less than 5. Table 7 below shows the output of the test. Sequel to the difference between the observed counts and expected counts, it is obvious that an association exists between the independent variable "Age" and the dependent variable "End-user Password-authentication experience, scaled according to strength of users' perceived feeling of satisfaction (excellent, good, average, or poor). The table shows that 50% of cells have counts less than 5 hence the assumption for using the Pearson Chi-Square reading row has been violated. We therefore take the reading of the Likelihood Ratio row in ascertaining the statistical relevance of the existing association. #### Statistical Value: \(15.889\); Degree of Freedom: \(12\); P-Value: \(0.196\). The result on Table 7 represented by Asymptotic 2-sided significance shows the p-value is 0.196. However, the acceptable statistical significance level is 0.05, which means that the p-value 0.196 is greater than the significance level. This implies that the result of the tests are not statistically significant at 0.05 level and we cannot reject the null hypothesis. The p-value of 0.196 is above the 0.05 value acceptable for statistical significance, hence we accept the **H0** which states that age has no impact on end-user Password authentication experience. We therefore conclude that age of respondents has no impact on their Password authentication experience. #### Research Question 3 Does gender have impact on End-user experience in Password-based authentication? To answer this research question, a hypothesis was raised and tested: Null Hypothesis(**H0**): Gender of Internet users has no impact on their Password authentication experiences. 
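The cross-tabulations analysed below (Tables 8 to 11) were produced in SPSS, but the same observed counts, expected counts, and row percentages can be generated directly from the coded responses. A minimal sketch follows; the `responses` DataFrame and its `gender` and `unauthorised_access` columns are hypothetical placeholders, not the actual variable names used in the instrument.

```python
# Building a Gender x experience cross-tabulation: observed counts, expected
# counts under independence, and row percentages. Column names are
# hypothetical; substitute the coded survey variables.
import pandas as pd

# responses = pd.read_csv("survey_responses.csv")   # hypothetical input file
responses = pd.DataFrame({                           # tiny stand-in example
    "gender": ["Male", "Female", "Male", "Female", "Male"],
    "unauthorised_access": ["Yes", "Not At All", "Not At All", "Yes", "Can't Remember"],
})

observed = pd.crosstab(responses["gender"], responses["unauthorised_access"])

# Expected counts under the null hypothesis of no association:
# (row total x column total) / grand total.
row_totals = observed.sum(axis=1).to_numpy()[:, None]
col_totals = observed.sum(axis=0).to_numpy()[None, :]
expected = pd.DataFrame(row_totals * col_totals / observed.to_numpy().sum(),
                        index=observed.index, columns=observed.columns)

# Row percentages ("% Within" figures in the SPSS output).
row_percent = pd.crosstab(responses["gender"], responses["unauthorised_access"],
                          normalize="index") * 100

print(observed, expected.round(1), row_percent.round(1), sep="\n\n")
```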
In this section we discuss the result from analysis of the cross tabulation of independent variable "Gender" and Dependent variable (End-User Password Authentication Experience), captured by questionnaire item: has someone ever logged into your account without permission? The association is tested using Chi-Square cross tabulation of the two variables which reveal the counts observed on each \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & Value & Df & Asymp. Sig(2-sided) & Exact Sig. (2 sided) & Exact Sig. (1-sided) & Point Probability \\ \hline Pearson Chi-Square & 22.413 & 12 & 0.033 & \({}^{\text{b}}\) & & \\ \hline Likelihood Ratio & 15.889 & 12 & 0.196 & 0.188 & & \\ \hline Fisher’s ExactTest & 15.889 & & & 0.145 & & \\ \hline Linear-by-Linear Association & 0.15 & 1 & 0.903 & 0.927 & 0.473 & 0.036 \\ \hline N of Valid Cases & 291 & & & & & \\ \hline \end{tabular} * **a.10 cells (50.0%) have expected count less than 5. The minimum expected count is 0.08** b. Not computed c. Standardized statistics is -0.122 \end{table} Table 7: Chi-Square Tests option item and compared the observed count to an expected count that would be obtained in the absence of an association between the independent and dependent variables. Table 8 below give the crosstalk output: From the table above, 46 male respondents (29.3%) have experienced unauthorized access to their accounts, where expected count of 50.2 (approximately 50) would be recorded where no association exists between Gender and End-user experience (Has someone logged into your account without your permission). 88 male respondents (56.1%) have not had their accounts accessed by unauthorized persons. However, this count would be 83 assuming no association existed between gender and end-user experience (Has someone logged into your account without your permission). Male respondents who selected option item "Don't know" have observed counts of 9(5.7%), with expected count of 3.5 (approximately 4). The fourth option item "Can't remember" has an observed count of 14 (8.9%) with expected counts of 10.8 (approximately 11). 47 (35.1%) of the female respondents have had their accounts accessed by unauthorized persons, with expected count of 42.8 (approximately 43); 65 (48.5%) observed counts for respondents that have not experienced unauthorized access to their account with an expected count of 70.5 (approximately 71). The third option item and fourth options items in the female gender category (Don't know and can't remember) had observed counts of 16 (11.9%) and 6 (4.5%) respectively, with expected counts of 11.5(approximately 12) and 9.2(approximately 9) assuming no association exists between the independent variable "Gender" and the dependent variable "End-user Password authentication experience (Has someone logged into your account without your permission). From the disparities between observed and expected counts, an association exists between gender and end-user Password authentication experience, however, to test the strength of the association, Chi-Square test result would be examined. The assumption for using the Pearson Chi-Square is that not more than 20% of expected count is less than 5. 
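This decision rule can be checked mechanically from the observed counts in Table 8. The sketch below is illustrative, using the transcribed counts; since no expected count falls below 5 (the minimum is about 9.2), the Pearson row is read, and the statistic should closely match the 6.853 reported in Table 9.

```python
# Assumption check and Pearson chi-square for Gender x unauthorised access.
# Rows: Male, Female; columns: Yes, Not At All, Don't Know, Can't Remember
# (counts transcribed from Table 8).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [46, 88,  9, 14],   # Male
    [47, 65, 16,  6],   # Female
])

chi2, p, dof, expected = chi2_contingency(observed)

# Rule used throughout this analysis: read the Pearson row unless more than
# 20% of cells have expected counts below 5.
low_cells = int((expected < 5).sum())
share_low = low_cells / expected.size
if share_low <= 0.20:
    print(f"Assumption met ({low_cells} cells < 5): read the Pearson row.")
else:
    print(f"Assumption violated ({share_low:.0%} of cells < 5): read the Likelihood Ratio row.")

print(f"Pearson chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")
print(f"Minimum expected count = {expected.min():.2f}")
```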
Table 9 below shows the Chi-Square result: \begin{table} \begin{tabular}{l l|r|r|r|r|r|} \hline \multicolumn{2}{l|}{GENDER} & \multicolumn{3}{c|}{Has someone logged into your account without} & \multicolumn{1}{c}{Total} \\ \multicolumn{2}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{your permission?} & \multicolumn{1}{c|}{} \\ \cline{3-6} \multicolumn{2}{c|}{} & Yes & Not At All & Don’t Know & Can’t & \multicolumn{1}{c|}{} \\ & & & & Remember & & \\ \hline MALE & Count & 46 & 88 & 9 & 14 & 157 \\ & Expected Count & 50.2 & 82.5 & 13.5 & 10.8 & 157.0 \\ & \% Within Age & 29.3\% & 56.1\% & 5.7\% & 8.9\% & 100.0\% \\ \hline FEMALE & Count & 47 & 65 & 16 & 6 & 134 \\ & Expected Count & 42.8 & 70.5 & 11.5 & 9.2 & 134.0 \\ & \% Within Age & 35.1\% & 48.5\% & 11.9\% & 4.5\% & 100.0\% \\ \hline Total & Count & 93 & 153 & 25 & 20 & 291 \\ & Expected Count & 93.0 & 153.0 & 25.0 & 20.0 & 291.0 \\ & \% Within Age & 32.0\% & 52.0\% & 8.6\% & 6.9\% & 100.0\% \\ \hline \end{tabular} \end{table} Table 8: Gender * User Experience (Has someone logged into your account without your permission?) From the table above, 0 (0.0%) cells have expected counts less than 5, with the minimum expected count being 9.21. this implies that the assumption for reading the result from the Pearson Chi-Square row is met hence the following readings: **Statistical Value: 6.853; Degree of Freedom: 3; P-Value: 0.077** Accepted statistical significance level is 0.05, hence the p-value from this test, represented on the table by Asymptotic Significance (2-sided), is 0.077. This result is more than the accepted level. In other words, the null hypothesis cannot be rejected as there is no enough evidence to suggest that the alternative hypothesis (H1) is true. We therefore accept the Null Hypothesis (**H0**): Gender of Internet users do not have impact on their end-user experience in Password-based authentication and reject the alternative hypothesis (H1): Gender of Internet users have impact on their end-user experience in Password-based authentication. Further testing of the impact gender has on end-user Password authentication experience was carried out using respondents self-report on their authentication experiences, scaled according to their perceived rating of the experience. Table 10 below displays the crosstalk result of respondents' gender and their self-rated Password authentication experience. \begin{table} \begin{tabular}{|l|r|r|r|r|r|r|} \hline & Value & Df & Asymp. & Exact Sig. & Exact & Point Probability \\ & & & Sig(2-sided) & (2 sided) & Sig.(1-sided) & \\ \hline Pearson Chi-Square & 6.853 & 3 & 0.077 & 0.075 & & \\ \hline Likelihood Ratio & 6.939 & 3 & 0.74 & 0.077 & & \\ \hline Fisher’s ExactTest & 6.746 & & & 0.078 & & \\ \hline Linear-by-Linear Association & 0.766 & 1 & 0.381 & 0.391 & 0.211 & 0.039 \\ \hline N of Valid Cases & 291 & & & & & \\ \hline a. & **0 cells (0.0\%) have expected count less than 5. The minimum expected count is 9.21** & & & \\ b. & The Standardized statistic is: -0.875 & & & & \\ \hline \end{tabular} \end{table} Table 9: Chi-Square Tests The table above shows the observed counts and their corresponding expected counts, obtained from cross tabulation of the participating variables: Gender and End-user Password authentication experience. These two counts are used to determine if an association exists between the independent variable Gender and the dependent variable: End-user Password authentication experience. 
Existence of an association implies that one of the variables has impact on the other. The degree of the impact, however, determines if the association is statistically significant and able to cause a meaningful impact. 70(44.6%) respondents in the male category reported having excellent Password authentication experiences. This count would be 68 if there was no association between gender and end-user Password authentication experience. 71(45.2%) respondents in this category reported having good Password authentication experiences. This count would be 75 assuming no association existed between gender and end-user Password authentication experience. Respondents in the male category, who consider their Password authentication experiences to be average or poor recorded 14(8.9%) and 2(1.3%) counts respectively with 12 expected average counts and 2 expected poor counts assuming no association exists between the independent variable (gender) and the dependent variable (end-user Password authentication experience). The female category has 56(41.8%) of respondents that reported having excellent Password authentication experiences, with an expected count of 58 respondents assuming no association exists between gender and end-user Password authentication experience. 68(50.7%) respondents in this category reported having good Password authentication experiences. This count would be 64 where no association exists between gender and end-user Password authentication. Average and poor experiences in the female category were recorded by 9(6.7%) and 1(0.7%) respondent respectively, with expected counts for these option items being 11 for average experience and 1 for poor experience. The disparities between observed and expected counts give credence to an existing association whose strength determine whether the independent variable (gender) has impact on the dependent variable (End-user Password authentication experience). To test this strength, Chi-Square tests is carried out, assumption is tested, and the insight gained from the test used is to make appropriate assertions. \begin{table} \begin{tabular}{l l|r|r|r|r|r|} \hline **Gender** & \multicolumn{4}{c|}{How has your experience been using} & \multicolumn{1}{c|}{**Total**} \\ \cline{3-6} & & \multicolumn{4}{c|}{Password authentication?} & \multicolumn{1}{c|}{} & \\ \cline{3-6} & & Excellent & Good & Average & Poor & \\ \hline Male & Count & 70 & 71 & 14 & 2 & 157 \\ & Expected Count & 68.0 & 75.0 & 12.4 & 1.6 & 157.0 \\ & \% Within Age & 44.6\% & 45.2\% & 8.9\% & 1.3\% & 100.0\% \\ \hline Female & Count & 56 & 68 & 9 & 1 & 134 \\ & Expected Count & 58.0 & 64.0 & 10.6 & 1.4 & 134.0 \\ & \% Within Age & 41.8\% & 50.7\% & 6.7\% & 0.7\% & 100.0\% \\ \hline Total & Count & 126 & 139 & 23 & 3 & 291 \\ & Expected Count & 126.0 & 139.0 & 23.0 & 3.0 & 291.0 \\ & \% Within Age & 43.3\% & 47.8\% & 7.9\% & 1.0\% & 100.0\% \\ \hline \end{tabular} \end{table} Table 10: Gender * User Experience (How has your experience been using Password authentication?) **Assumption:** The assumption for reading from the Pearson Chi-Square states that not more than 20% of cell in a table greater than 2 X 2 should have expected counts below 20%. Table 11 below give the result of the Chi-Square test. From the table above, 2 (25.0%) cells have expected counts less than 5, with the minimum expected count being 1.38. 
This implies that the assumption for reading the result from the Pearson Chi-Square row is violated hence we ignore the Pearson Chi-Square readings and use the reading of the Likelihood Ratio row instead. The following are resulting values from the likelihood Ratio: **Statistical Value**: 1.239; **Degree of Freedom**: 3; **P-Value**: 0.744 As indicated in Table 11, the P-value is 0.744. This p-value is greater than the significance level of 0.05, hence, we cannot reject the null hypothesis (**H0).** We therefore accept the Null Hypothesis (**H0**): Gender of the Internet users do not have impact on their end-user Password authentication experiences and reject the alternative hypothesis: Gender of Internet users has impact on their end-user experience in Password-based authentication. **Research Question 4** Does Education Level have impact on end-user experience in Password-based authentication? To answer this research question, we raise and test a hypothesis. Null Hypothesis (**H0**): Level of education has no impact on end-user experience in Password-based authentication. To test this hypothesis, we first carried out a cross tabulation of the independent variable: Level of Education and the dependent variable: End-user Password authentication experience. Table 12 below give details of observed counts and corresponding expected counts in the various categories of education level. \begin{table} \begin{tabular}{|l|r|r|r|r|r|r|} \hline & Value & Df & Asymp. & Exact Sig. & Exact & Point Probability \\ & & & Sig(2-sided) & (2 sided) & Sig.(1- & \\ & & & & & sided) & \\ \hline Pearson Chi-Square & 1.230 a & 3 & 0.746 & 0.740 & & \\ \hline Likelihood & 1.239 & 3 & 0.744 & 0.740 & & \\ Ratio & & & & & \\ \hline Fisher’s & 1.304 & & & 0.748 & & \\ ExactTest & & & & & \\ \hline Linear-by-Linear & 0.003 b & 1 & 0.953 & 1.000 & 0.512 & 0.070 \\ Association & & & & & \\ \hline N of Valid & 291 & & & & & \\ Cases & & & & & \\ \hline \end{tabular} * a. 2 cells (25.0%) have expected count less than 5. The minimum expected count is 1.38 * b. Standardized statistic is -0.059 \end{table} Table 11: Chi-Square Tests In this section, we analyzed the end-user experience using respondents' self-report on whether or not they have had their accounts compromised. Distributions from the table above show that 60(59.8%) respondents with School certificate as their highest obtained qualification have had their accounts accessed without authorization. An expected count of 59.8 (approximately 60) would be expected if no association exists between education qualification and end-user Password authentication experience item. 96 (51.3%) respondents in the school certification category have not had their accounts accessed by unauthorized persons. The expected count for this option item assuming no association exists between education level and end-user Password authentication experience would be 98 counts. School certificate holders who either do not know if their accounts have been accessed without authorization or cannot remember its occurrence have 19(10.2%) and 12(6.4%) counts respectively, with 16 expected counts for respondents who do not know and 13 expected counts for those who cannot remember. The second category captured respondents with Ordinary National Diploma as their highest obtained qualification. 
The distribution in this category show that 7(43.8%) of the respondents have had their accessed without authorization and that this count would be 5 if no association exists between education level and end-user Password authentication experience as reflected by unauthorized account access. \begin{table} \begin{tabular}{l l|r|r|r|r|r|} \hline \multicolumn{2}{l|}{Highest Education Qualification (HEQ)} & \multicolumn{3}{c|}{Has someone logged into your account} & \multicolumn{1}{c|}{Total} \\ \multicolumn{2}{c|}{} & \multicolumn{3}{c|}{without your permission?} & \multicolumn{1}{c|}{} \\ \cline{3-6} \multicolumn{2}{c|}{} & Yes & Not At & Don’t & Can’t & \\ & & All & Know & Remember & \\ \hline School Certificate & Count & 60 & 96 & 19 & 12 & 187 \\ & Expected Count & 59.8 & 98.3 & 16.1 & 12.9 & 187.0 \\ & \% Within HEQ & 32.1\% & 51.3\% & 10.2\% & 6.4\% & 100.0\% \\ \hline OND & Count & 7 & 8 & 1 & 0 & 16 \\ & Expected Count & 5.1 & 8.4 & 1.4 & 1.1 & 16.0 \\ & \% Within HEQ & 43.8\% & 50.0\% & 6.2\% & 0.0\% & 100.0\% \\ \hline HND/B.Sc & Count & 13 & 27 & 1 & 1 & 42 \\ & Expected Count & 13.4 & 22.1 & 3.6 & 2.9 & 42.0 \\ & \% Within HEQ & 31.0\% & 64.3\% & 2.4\% & 12.4\% & 100.0\% \\ \hline Ph.D & Count & 13 & 22 & 4 & 7 & 46 \\ & Expected Count & 14.7 & 24.2 & 4.0 & 3.2 & 46.0 \\ & \% Within HEQ & 28.3\% & 47.8\% & 8.7\% & 15.2\% & 100.0\% \\ \hline Total & Count & 93 & 153 & 25 & 20 & 291 \\ & Expected Count & 93.0 & 153 & 25.0 & 20.0 & 291.0 \\ & \% Within HEQ & 32.0\% & 52.6\% & 8.6\% & 6.9\% & 100.0\% \\ \hline \multicolumn{2}{l|}{OND/ND – Ordinary National Diploma, HND = Higher National Diploma, BSC = Bachelor of Science, PhD = Doctor of Philosophy} & \multicolumn{1}{c}{} \\ \end{tabular} \end{table} Table 12: Education Qualification * User Experience (Has someone logged into your account without your permission?) 8(50.0%) respondents in this category have not had their accounts accessed without authorization. A count of 8.4(approximately 8) is expected where no association exists between education qualification and end-user experience item. 1(6.2%) respondent does not know if his/her account has been accessed without authorization. The expected count of 1.4 (approximately 1) would also be obtained where no association exists between education level and end-user Password authentication experience. The last option item in this category has no observed count, but 1 count is expected in event of no association between education level and end-user Password authentication item. In the third category, Higher National Diploma/Bachelor of Science degree holders were captured. The data distribution shows that 13(31.0%) respondents have had their accounts accessed by unauthorized persons, with the expected count of 13.4(approximately 13). 27(64.3%) of the respondents have not experienced unauthorized access to their accounts. This count would be 22.1 (approximately 22) if no association exists between education qualification and end-user Password authentication experience. 1(2.4%) respondent do not know of such unauthorized access to his account while another 1(2.4%) respondent cannot remember it's occurrence. Expected counts of 3.6(approximately 4) and 2.9(approximately 3) were recorded for option items "Don't Know" and "Can't Remember" respectively. In the last education qualification category, Ph.D. holders' authentication experiences were captured. Observed counts of 13(28.3%) were recorded for respondents whose accounts have been accessed by unauthorized persons. 
The expected count where no association exists between education level and end-user Password authentication item would be 14.7(approximately 15). 22 (47.8%) respondents amongst Ph.D. holders have not had their accounts accessed by unauthorized persons. The expected count for this option item is 24.2(approximately 24) assuming no association exists between education level and end-user Password authentication experience item. 4(8.7%) observed counts for Ph.D. holders who do not know of unauthorized access to their accounts, with an expected count of 4, and 7(15.2%) observed counts recorded for respondents who cannot remember if they have had their accounts accessed without authorization against 3.2 (approximately 3) expected counts for this option item. **Assumption:** The assumption for a table greater than 2 x 2 (i.e., 2 categories by 2 categories) is that not more that 20% of expected counts is less than 5. This applies to our 4 x 4 table. The closeness between the observed and expected counts in the of education qualification (independent variable) and end-user Password authentication experience (dependent variable) cross tabulation implies that there is little or no association between education level of respondents and their end-user Password authentication experience. Chi-Square tests result is displayed in Table 13 below: \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline & Value & Df & Asymp. & Exact Sig. (2 & Exact Sig.(1- \\ & & & Sig(2-sided) & sided) & sided) \\ \hline Pearson Chi-Square & 11.848 \({}^{\mathrm{a}}\) & 9 & 0.222 & \({}^{\mathrm{b}}\) & \\ \hline Likelihood & 12.775 & 9 & 0.173 & 0.222 & \\ Ratio & & & & & \\ \hline Fisher’s & 9.909 & & & 0.317 & \\ ExactTest & & & & & \\ \hline Linear-by-Linear Association & 1.051 & 1 & 0.305 & \({}^{\mathrm{b}}\) & \({}^{\mathrm{b}}\) \\ \hline \end{tabular} \end{table} Table 13: Chi-Square Tests \begin{table} \begin{tabular}{l l|r|r|r|r|r|} \hline \multicolumn{1}{|l|}{Highest Education Qualification (HEQ)} & \multicolumn{3}{c|}{How has your experience been using} & \multicolumn{1}{c|}{Total} \\ \multicolumn{1}{|c|}{} & \multicolumn{3}{c|}{Password Authentication} & \multicolumn{1}{c|}{} \\ \cline{3-6} \multicolumn{1}{|c|}{} & Excellent & Good & Average & Poor & \\ \hline School Certificate & Count & 77 & 92 & 17 & 1 & 187 \\ & Expected Count & 81.0 & 89.3 & 14.8 & 1.9 & 187.0 \\ & \% Within HEQ & 41.2\% & 49.2\% & 9.1\% \% & 0.5\% & 100.0\% \\ \hline OND & Count & 8 & 5 & 3 & 0 & 16 \\ & Expected Count & 6.9 & 7.6 & 1.3 & 0.2 & 16.0 \\ & \% Within HEQ & 50.0\% & 31.2\% & 18.8\% & 0.0\% & 100.0\% \\ \hline HND/B.Sc & Count & 19 & 22 & 1 & 0 & 42 \\ & Expected Count & 18.2 & 20.1 & 3.3 & 0.4 & 42.0 \\ & \% Within HEQ & 45.2\% & 52.4\% & 2.4\% & 0.0\% & 100.0\% \\ \hline Ph.D & Count & 22 & 20 & 2 & 2 & 46 \\ \hline \end{tabular} \end{table} Table 14: Education Qualification * User Experience (How has your experience been using Password authentication?) Analysis of the impact of education level of respondents on their Password authentication experiences showed observe counts of 77(41.2%), and an expected count of 81 for respondents who reported having excellent Password authentication experiences in the school certificate holder's category. 92(49.2%) of the respondents in this category have good experiences, with expected count of 89.3(approximately 89). 
Respondents with average Password authentication experiences had a distribution of 17(9.1%) observed counts and 14.8 (approximately 15) expected counts, while those with poor Password authentication experiences had 1(0.5%) observed and 1.9 (approximately 2) expected counts. The second category captured respondents with a National Diploma. The distribution in this category shows 8(50.0%) observed counts for respondents that have excellent Password authentication experiences, with an expected count of 6.9 (approximately 7); 5(31.2%) observed counts and 7.6 (approximately 8) expected counts for respondents that reported having good Password authentication experiences; and 3(18.8%) observed counts and 1.3 (approximately 1) expected count for respondents that reported having average Password authentication experiences. Lastly, no respondent reported having a poor Password authentication experience, while an expected count of 0.2 (approximately 0) was recorded for this option. Following the National Diploma category are respondents whose highest qualification is a Higher National Diploma or bachelor's degree. There were 19(45.2%) observed counts and 18.2 (approximately 18) expected counts for respondents that reported having excellent Password authentication experiences; 22(52.4%) observed counts and 20.1 (approximately 20) expected counts for respondents who reported having good password authentication experiences; 1(2.4%) observed count and 3.3 (approximately 3) expected counts for respondents who have average password authentication experiences; and no observed count and 0.4 (approximately 0) expected count for respondents who reported having poor Password authentication experiences. The fourth category captured respondents with a Ph.D. as highest education qualification. The distribution in this category showed 22(47.8%) observed counts and 19.9 (approximately 20) expected counts for respondents who reported having excellent password authentication experiences; 20(43.5%) observed counts and 22 expected counts for respondents with good password authentication experiences; 2(4.3%) observed counts and 3.6 (approximately 4) expected counts for respondents with average password authentication experiences; and lastly, 2(4.3%) observed counts and 0.5 (approximately 1) expected count for poor password authentication experiences. The disparity between observed and expected counts indicates that an association exists between the independent variable (education qualification) and the dependent variable (end-user password authentication experience). The strength of this association is further tested using Chi-Square tests. For a 4 x 4 (4 categories by 4 categories) table such as Table 14, the Chi-Square test also shows whether the assumption that not more than 20% of cells contain expected counts of less than 5 is met. The result of the Chi-Square test is displayed in Table 15 below. From Table 15, 43.8% (7 cells) have expected counts of less than 5. This violates the assumption that not more than 20% of cells should have expected counts less than 5. The value readings are therefore taken from the Likelihood Ratio row.

#### Statistical Value: \(11.683\); Degree of Freedom: \(9\); P-Value: \(0.232\)

In Table 15, the reported p-value is 0.232. This p-value is greater than 0.05 (the accepted significance level). This means that the test results are not statistically significant at the 0.05 level. In other words, we cannot reject the null hypothesis (**H0**) that there is no effect or relationship between the variables being tested.
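The figures reported in Tables 14 and 15 can be checked directly from the observed counts. The following is a minimal sketch, assuming Python with SciPy installed; up to rounding it should reproduce the Pearson chi-square statistic, the degrees of freedom, the p-value and the footnote about cells with expected counts below 5:

```python
from scipy.stats import chi2_contingency

# Observed counts from Table 14 (rows: School Certificate, OND, HND/B.Sc, Ph.D;
# columns: Excellent, Good, Average, Poor).
observed = [
    [77, 92, 17, 1],
    [ 8,  5,  3, 0],
    [19, 22,  1, 0],
    [22, 20,  2, 2],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"Pearson chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# Assumption check used above: no more than 20% of cells should have an
# expected count below 5.
small = (expected < 5).sum()
print(f"{small} of {expected.size} cells "
      f"({100 * small / expected.size:.1f}%) have expected counts below 5")
```

The observed counts of Table 12 can be checked against Table 13 in the same way.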
We therefore accept the Null Hypothesis(**H0**): Level of education of Internet users has no impact on their end-user experience in Password-based authentication and reject the alternative hypothesis(**H1**): Level of education of Internet users has impact on their end-user experience in Password-based authentication. It is important to note that when conducting a hypothesis test, the null hypothesis is typically rejected if the p-value is less than the significance level. The significance level is the probability of rejecting the null hypothesis when it is true. The most used significance level is 0.05, which means that there is 5% chance of rejecting the null hypothesis when it is true. ### Discussion The results of our findings showed that password experience of end-users in these academic communities remain fair and good. An indication that the password-based experiences of end-users in developed and highly technological advanced economies did not reflect the experiences of end-users in this geographical area, and academic communities. The study also aligned the experiences with factors such as age (under 60), highest educational qualification, and employment status (unemployed, employed as temporary or permanent staff), but the result failed to establish a relationship with these factors. Related studies by Ugwu _et al._ (2022) and Ugwu _et al._ (2023) conducted earlier within the same geographical areas and academic communities suggest that age, gender, educational qualification, and academic discipline of the respondents have no effect on their personal cyber hygiene culture in ICT usage. These studies when juxtaposed with the current study, provided an insight that perhaps, (1) that there is likelihood of a relationship between cyber hygiene culture and password use experiences, and (2) that these experiences \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline & Value & Df & Asymp. & Exact Sig. & Exact & Point Probability \\ & & & Sig(2-sided) & (2 sided) & Sig.(1-sided) & \\ \hline Pearson Chi-Square & 12.998 & 9 & 0.163 & & & & \\ \hline Likelihood & 11.683 & 9 & 0.232 & 0.236 & & \\ Ratio & & & & & \\ \hline Fisher’s & 11.037 & & & 0.216 & & \\ ExactTest & & & & & \\ \hline Linear-by-Linear & 0.74 & 1 & 0.541 & 0.555 & 0.283 & 0.020 \\ Association & & & & & & \\ \hline N of Valid & 291 & & & & & \\ Cases & & & & & \\ \hline \end{tabular} * a. **7 cells [43.8%]** have expected count less than 5. The minimum expected count is 0.16 * b. **Not computed** c. The standardized statistic is -0.612 \end{table} Table 15: Chi-Square Tests are not dependent on age, gender, highest educational qualification, and academic discipline while in the University. All the same, this study did not focus on understanding the password experiences of very economic buoyant communities as there are relationship between possibilities of cyber-attacks on very economic viable communities. This statement is supported by a report by Mottl, (2022), which suggests that cyber-attack incidents are highly motivated by economic gains. The above informs why ICT end-users are very conscious of having strong password protection for their financial accounts as having password leakage or compromise could have the most irreparable damage on end-users (Ray, H., _et al._, 2021). According to Aladenusi, T. and Odumuboni, F. (2022), Nigeria's cyber security outlook for 2022 showed a rise in cyber-attacks and data breach incidents both in public, private financial and non-financial sectors. 
An affirmation that no sector of the economy is spared this menace, with a prediction that Nigeria has come under the spotlight of cyber-attackers armed with more tools and more ground to play on. Password compromise is one of the targets of cyber-criminals, and the major motivation for this remains economic. This information, alongside the existing knowledge gaps stated earlier, deepened the passion of the researcher in surveying password experience in south-eastern Nigeria, with particular interest in the staff and students at some University communities therein, to understand their password use experiences. Password experience, the authors argue, appears to be related to factors ranging from cyber hygiene, economic status and political environment, and so on, when the outcome of this study is compared with other studies conducted in different economic zones and climes. For instance, Beyond Identity (2021) reveals that 46% of US consumers failed to complete transactions due to authentication failure. Lance (2021) shows that 67% of the respondents of their study conducted in the USA said that they forgot their password when trying to complete an online banking transaction, 56% said it happens when trying to get travel information, 55% reported it happens when they are attempting to buy something, and 43% said it happens when they try to access a document. The works of Chiasson and Biddle (2007) and Butler and Butler (2015) are of the view that password-based authentication is problematic. Butler and Butler (2015) opined that because of unlimited online options, end-users have become impatient with time-consuming and inconvenient login experiences. These experiences include being prompted to reset a Password or create an account with a long form, and managing Passwords (measures related to the safekeeping of Passwords). These experiences are expressed by social media users in a study conducted by Majid and Kouser (2019), which reported that many of their respondents that are social media users forgot to log out of their social media accounts, leading to possible account compromise, especially on public computers. The same Beyond Identity (2021) report also notes that 18.75% of returning users abandon the cart after forgetting their password and having issues with password reset emails. Likewise, 36% of the consumers said they try to guess a forgotten password twice before resetting it, while 28% said they guess a forgotten password once and 22% three times. Singh _et al._ (2007) describe users' experiences of how an entire village would delegate bank credentials to a single person who would conduct the online transaction on everybody's behalf. In addition, Dunphy _et al._ (2014: 2) report on how older adults delegate the withdrawal of their monies from an automated teller machine to their helpers, thereby disclosing their personal identification numbers (PINs) to a third party. The same experience is also shared by electronic health clients, where the issues of forgetting passwords, password reset and so on could be critical, dangerous or fatal (Stantinides _et al._, 2020). Password experience of end-users from this and related studies appears to be hugely relational and could be based on individual geographical location, economic and societal status, personal cyber security consciousness, personal cyber hygiene culture, password practices, and so on.
### Recommendations Based on the above findings, the following recommendations are suggested to further enhance the password hygiene for end-users of password-based authentication systems. ### Recommendations for end-users **Periodic change of password**: Periodic change of password is essential for effective security. This is because most password-based hacks have more to do with bad password, phishing attacks, and keystroke loggers (Isobe and Ito, 2021). Also, with the emerging trends of technology, the possibilities of new computing devices may emerge, and the pattern of usage may change (Ofusori, 2019). Hence, periodically changing your password helps avoids common threats. **Continuous user education and security awareness:** It is the responsibility of end-users to ensure that they are updated with the latest scientific knowledge on password security best practices by undertaking more training on digital literacy programs and security awareness. This continuous education can help foster a culture of responsible password usage such as creating strong passwords, avoiding common password mistakes, and understanding the importance of password hygiene. Furthermore, having this knowledge will help them avoid the risks associated with password-based authentication system (Renaud, Otondo and Warkentin, 2019) **Two-Factor Authentication (2FA):** As an additional layer of security, it is recommended that two-factor authentication method should be adopted. This can involve combining passwords with another authentication factor, such as SMS-based OTPs, app-based OTPs, or biometrics when feasible. While it is noted that single authentication such as password authentication is popularly in use as a security measure, two-factor authentications has been proven to be a better effective security measure, reducing the likelihood of unauthorized access fraud (Lambiase _et al._, 2020) **Password recovery mechanism:** Password recovery mechanisms are important for users who may forget their passwords. In developing economies, where access to email or mobile services may be limited, alternative method for password recovery, such as security questions or offline recovery options can be used to regain access to their accounts (Majid and Kouser, 2019). **Password Managers**: It is suggested that end-users are educated on the benefits of using password managers to simplify and enhance password management. The use of password manager applications will help users generate and securely store complex passwords (Fagan _et al._, (2017). Furthermore, it helps users avoid the pitfall of reusing passwords and simplify the process of managing multiple passwords **Continuous Monitoring and Updates**: Regularly monitor authentication systems for security vulnerabilities and apply timely updates and patches. It is important to note that keeping computing devices, operating systems and applications up to date with the latest security patches, helps ensure that user accounts are protected against emerging threats and security breaches. Outdated software can have vulnerabilities that attackers can exploit to gain unauthorized access to passwords (AlSabah, Oligeri and Riley, 2018). ### Recommendations for policy makers **Password Policies**: Policymakers in developing economies can implement simplified password policies that strike a balance between security and usability. This can be done by avoiding overly complex requirements that may lead to users creating weak or easily forgotten passwords. 
For example, encourage longer passphrases instead of traditional passwords (Guven, Boyaci and Aydin, 2022). **Collaborative Efforts:** Policymakers in developing economies can foster collaboration between governments, private organizations, and technology providers to address the challenges of password-based authentication. Pooling resources and expertise can help develop localized solutions, share best practices, and support initiatives aimed at improving user experiences and security. By implementing these recommendations, the usability and security of password-based authentication can be enhanced, leading to better experiences for end users in developing economies.

## 5 Research contributions and suggestions for future research

This study provides insights into end-users' password-based experience in developing economies to ascertain whether their experience is associated with educational qualification, age and gender. Furthermore, it provides an empirical basis on which future studies on password-based authentication and end-user experiences can be built. Also, the outcome of this research will help inform policymakers and research communities on password hygiene culture, management, and the feasibility of passwordless authentication systems in developing economies. It is important to note that this study did not delve into understanding the relationship between password experience and the economic and social status of individuals. Hence, future work should incorporate this area by having a mix of people from different societal and economic levels. In addition, further research can be done to investigate the relationship between password entropy and end-users' experience towards maintaining password hygiene.

## 6 Conclusion

This work examined the experience of end-users of password-based authentication using a questionnaire. The survey followed an established model used in information systems for studying users' experiences in adopting new technologies. The result showed that issues of account compromise in the geographical area are not common, as the respondents reported having good experiences with password-based authentication systems. Furthermore, the study showed that both age and educational qualification have no impact on end-user experience. In all, the respondents were aged under 60, and the result of this study may not apply to people over 60, who may have problems remembering passwords, or to people with multiple accounts who do not use a Password management utility such as a Password Manager. All the same, because this study did not consider a measure of end-users' Password entropy, use of Password managers and Password hygiene culture, the results may also vary when these are taken into consideration. This study concludes that password experience is relational and varies across geographical locations, communities, economic and societal status, and ICT tool usage and engagements. On the basis of these findings, the authors recommend that end-users of ICT tools should endeavour to take the security of their password management seriously, as that largely depends on the level of their cyber hygiene culture, consciousness, and discipline.
2308.10371
A Framework for Curriculum Transformation in Quantum Information Science and Technology Education
The field of Quantum Information Science & Technology (QIST) is booming. Due to this, many new educational courses and university programs are needed in order to prepare a workforce for the developing industry. Owing to its specialist nature, teaching approaches in this field can easily become disconnected from the substantial degree of science education research which aims to support the best approaches to teaching in Science, Technology, Engineering & Mathematics (STEM) fields. In order to connect these two communities with a pragmatic and repeatable methodology, we have synthesised this educational research into a decision-tree based theoretical model for the transformation of QIST curricula, intended to provide a didactical perspective for practitioners. The Quantum Curriculum Transformation Framework (QCTF) consists of four steps: 1. choose a topic, 2. choose one or more targeted skills, 3. choose a learning goal and 4. choose a teaching approach that achieves this goal. We show how this can be done using an example curriculum and more specifically quantum teleportation as a basic concept of quantum communication within this curriculum. By approaching curriculum creation and transformation in this way, educational goals and outcomes are more clearly defined which is in the interest of the individual and the industry alike. The framework is intended to structure the narrative of QIST teaching, and with future testing and refinement it will form a basis for further research in the didactics of QIST.
Simon Goorney, Jonas Bley, Stefan Heusler, Jacob Sherson
2023-08-20T21:44:10Z
http://arxiv.org/abs/2308.10371v3
The Quantum Curriculum Transformation Framework for the development of Quantum Information Science and Technology Education ###### Abstract The field of Quantum Information Science and Technology (QIST) is booming. Due to this, many new educational courses and programs are needed in order to prepare a workforce for the developing industry. Owing to its specialist nature, teaching approaches in this field can suffer from being disconnected to the substantial degree of science education research which aims to support the best approaches to teaching in STEM fields. In order to connect these two communities with a pragmatic and repeatable methodology, we have generated an innovative approach, the Quantum Curriculum Transformation Framework (QCTF), intended to provide a didactical perspective on the creation and transformation of quantum technologies curricula. For this, we propose a decision tree consisting of four steps: 1. choose a topic, 2. choose one or more targeted skills, 3. choose a learning goal and 4. choose a teaching approach that achieves this goal. We show how this can be done using an example curriculum and more specifically quantum teleportation as a basic concept of quantum communication within this curriculum. By approaching curriculum creation and transformation in this way, educational goals and outcomes are more clearly defined which is in the interest of the individual and the industry alike. The framework is intended to structure the narrative of QIST teaching, and will form a basis for further research in the didactics of QIST, as the need for high quality education in this field continues to grow. ## 1 Introduction The field of quantum information science and technology (QIST), although originating in physics, is gaining increasing interdisciplinary popularity [1]. This is presently inducing a significant rise in the number of degree programs and courses in this field [2, 3], which requires a didactical perspective which is currently significantly lacking in this field, due to its rapid development and specialist nature. As a result, it is of great importance that the results of didactical research are incorporated into quantum curriculum creation and transformation at the present time. There are also several immense challenges that the field presents: Its interdisciplinarity and complexity makes content preparation difficult as many concepts require perspectives from different fields. Additionally, the content needs to be adapted to learners from various different academic backgrounds. Given the importance of widespread, progressive education and training in QIST, the European Quantum Technology Education community (QTEdu) have developed practical guidelines for educators in non-formal contexts, and in the high school environment [4, 5]. However, the most urgent need for quality education is in the university context, where graduates of Bachelor and Master programs in STEM fields will make up the emerging quantum workforce [3]. More advanced topics in QIST are particularly challenging to teach, due to the specialist knowledge required. As a result, the majority of teaching faculty are based in Physics, Information Sciences, or Engineering, without access to significant pedagogical support. Many of these practitioners are not grounded in knowledge of those core educational principles which may be background for high school teachers. For this reason, and in order to promote dialogue between didactics and educators, there needs to exist advice that is easy to follow. 
Additionally, this should be industry-needs oriented [6], as preparing the quantum workforce for work in the industry is a major intent of many quantum courses. For this purpose, in this work we present a didactical teaching framework, the Quantum Curriculum Transformation Framework (QCTF), based on a decision tree which can be used to guide practitioners through teaching design, from topic selection to the specific teaching approach based on existing didactical theory. It should be emphasised that subsections of the framework may also be used in order to support specific aspects of pedagogical preparation, even if the complete framework is not applicable in every case. In addition, it may be extended and modified when presented with new research results. The framework is summarised in Fig. 1. At the level of school education, there exists a storied tradition of research on the use of pedagogical content mapping [7, 8] as a means of aligning learning material with the teaching methods which are optimal for promoting different kinds of learning outcomes. However, this approach is rarely taken at the university level, and never in the field of QIST. There are many differences between higher education and secondary school in the complexity and the interdisciplinarity of content, in the time scopes and time available to study specific subjects and, most importantly, in the learners. In the QCTF, we make use of a "what" - "why" - "how" approach on an abstract level, whilst proposing a framework that considers these many differences. The paper is structured as follows: in Sect. 2, we summarise the basis for choosing educational content (the "what"), which makes use of the European Competence Framework for Quantum Technologies [9]. Then, three targeted skill areas are introduced in Sect. 3, as both a motivation and a means of communicating the content chosen to teach. In Sect. 4, a modified Bloom's taxonomy is described for use in defining learning goals. In Sect. 5, Ainsworth's Design-Functions-Tasks (DeFT) framework is summarised, and it is applied to the QIST context in Sect. 6. Lastly, we show a possible explicit teaching approach to quantum teleportation based on our framework in Sect. 7. Throughout this work, we will stick to examples to ground the QCTF and convey the meaning of the didactical theory in the QIST context. Quantum teleportation is chosen as a topic specifically because it makes use of various concepts central to QIST: entanglement, measurements and unitary operations in three-qubit systems, and it has many applications [10].

## 2 Competence Framework

The European Competence Framework for Quantum Technologies is a tool designed by the Quantum Flagship, the EU funding program for QIST, intended as a common language for communicating skills in the field. The framework has utilised a Delphi method to identify key content according to industry needs [6], and is presently available in version 2.0 [11], providing a structured overview of the field. It may be used to describe skills required for jobs, learning outcomes from courses, and content of entire degree programs. The QCTF builds on the Competence Framework as a backbone, providing a means to determine content selection when designing educational initiatives. Each topical domain is assigned a number 1-8, covering Quantum Background (domains 1 and 2), Core Device Technologies (domains 3 and 4), and QT systems and applications (domains 5-8). Within each domain, topics are assigned numbers such that a complete map can be generated of knowledge in QIST.
Figure 1: The Quantum Curriculum Transformation Framework (QCTF), a decision-tree based methodology for designing and transforming educational material and methods in QIST. It is based on a What - Why - How - approach, starting with the European Competence Framework for Quantum Technologies [9] and concretising towards specific teaching approaches that are based on didactic research. An overview of the CF domains is shown in Fig. 2. For use in designing curricula, we present an application-oriented example of a small curriculum with topics chosen from the CF, focusing on the basics of quantum technologies and quantum communication, in Fig. 3. That plain knowledge transfer e.g. in the form of purely a traditional lecture is not effective is well known [12, 13]. Therefore, planning courses should not stop after it has been decided which concepts to teach but rather should go further, considering first the skills that the learners should achieve in the context of the chosen concepts. Figure 3: Application oriented example curriculum coloured using the Competence Framework [9]. These three modules could be part of a one- or two-week workshop covering the basics of QIST, quantum communication and giving an outlook into quantum sensing and computing. Figure 2: The 8 domains of the European Competence Framework for Quantum Technologies, covering Quantum Background (domains 1 and 2), Core Device Technologies (domains 3 and 4), and QT systems and applications (domains 5-8). Adapted from [9]. We classify these skills in the following. ## 3 Targeted Skills In the context of higher education on quantum technology, we see possible targeted skills in three main areas: Theory & Analytics, Computation & Simulation and Experiment & Real World. We emphasise that these targeted skills are those engaged in different approaches to teaching topics CF1x-8x. We do not present these as a classification scheme for concepts, which is in fact the role of the competence framework itself [6]. Rather it is possible for many concepts to engage several or all of the "targeted skills". Quantum Teleportation, for example, is a rich topic which can be approached with a view to developing each of these three targeted skills, while other topics may be limited to developing a subset of these skills. QIST, in many cases, allows or even necessitates an interdisciplinary view. For example, quantum computing algorithms are significantly restricted by the hardware that they run on. Gate times and fidelities limit circuit depth and general complexity of possible algorithms, which can be summarised by performance benchmarks [14]. This leads to the practical problems of finding the right quantum computer for a specific algorithm [15] and, in addition, determining which algorithms work well on a given hardware architecture [16, 17]. This means that expertise in both areas is required to make justified decisions. As is also pointed out in [6], generally one would address industry needs by having skills in all three of these skill areas. ### Theory & Analytics When targeting theoretical skills, one targets the skill of derivation and proof of theoretical ideas within QIST like the no-cloning-theorem or the no-communication theorem. Instead of just being able to recite these theorems, learners ideally should understand the process of arriving at them and gain the ability to expand a given set of ideas, the theoretical framework of QIST, built up on complex euclidean spaces or so-called Hilbert spaces [18, 19], to generate new knowledge. 
In Sect. 4, we concretise this using Bloom's taxonomy. Analytical skills refer to calculations involving numbers or variables. In QIST, e.g. the Dirac ket notation is popular for representing vectors and linear transformations in these complex vector spaces. As an example, the skill of calculating the effects of various quantum logic gates (linear transformations) within this notation or the equivalent vector notation can be targeted. We refer to e.g. the textbooks [18, 19, 20] for more details on QIST from this perspective.

### Computation & Simulation

When targeting computational skills, a focus lies on programming within QIST. This can be done in various different languages and libraries, e.g. in Python using a library like Qiskit [21] or PennyLane [22], in C++ e.g. with the Intel Quantum Software Development Kit (QSDK) [23], or in the standalone languages Open Quantum Assembly Language (OpenQASM) [24], Q# [25] or Silq [26], but there are also many more that are reviewed in e.g. [27]. These languages can be deployed on real quantum hardware or simulators. Real quantum hardware tends to be noisy, especially for large systems of qubits, and simulating such systems is exponentially difficult in the number of qubits [28, 29]. Gate errors can also be simulated and analysed within these simulations [30], as these are and will be much more relevant than in classical computing. This is where the field of quantum error correction starts [31]. We refer to textbooks such as [32, 33] for more detail on QIST from this perspective.

### Experiment & Real World

Last but not least, QIST can be approached from an experimental perspective. Skills in this area can cover a vast array of planning and preparation of experimental setups, a deep understanding of the functioning of the devices that are used, and practical skills like calibration and adjustment of these devices. In the context of quantum information science and technology, qubit platforms and types are of special interest for the transport of quantum information (quantum communication), precise measurement of real world quantities like temperature or magnetic fields (quantum sensing), or universal and analogue quantum computing. Knowledge and skills required to get into one of these fields can be targeted with textbooks like [34, 35] and in practical lab courses. All three of these categories are vast in that they each define multiple fields of research. The sheer amount of content to choose from and skills that could be acquired is a challenge for educators in the field of QIST education. Due to this complexity, theoretical frameworks like the competence framework are needed to aid in the selection process. With the Quantum Curriculum Transformation Framework, we aim to go beyond the level of pure content towards a practical guideline for curriculum transformation and creation by educators, which is shown further in the following sections.

## 4 Learning Objectives/Bloom's taxonomy & intuition

The more specific question of which cognitive learning goals within these areas of expertise should be accomplished can be asked next. A classification of these learning goals can help for various reasons. First of all, it encourages aiming for learning goals beyond just remembering and understanding concepts. Secondly, the structure can help align the educator's purposes with those of the learners, aligning the teaching process with valid performance tests [36].
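Looking back at the skill areas of Sect. 3, a small concrete illustration of the kind of gate calculation mentioned there is the preparation of the Bell state \(|\Phi^{+}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\) in plain vector notation. The following minimal NumPy sketch (the variable names are purely illustrative) applies a Hadamard and a CNOT to \(|00\rangle\):

```python
import numpy as np

# Computational basis state |0> as a column vector.
ket0 = np.array([1, 0], dtype=complex)

# Single-qubit Hadamard and the two-qubit CNOT (control = first qubit).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# |00>  ->  (H tensor I)|00>  ->  CNOT (H tensor I)|00>  =  (|00> + |11>)/sqrt(2)
state = np.kron(ket0, ket0)
state = CNOT @ np.kron(H, np.eye(2)) @ state
print(state.round(3))   # amplitudes 0.707, 0, 0, 0.707
```

The same kind of calculation, extended to three qubits, underlies the teleportation tasks listed in the next subsection.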
In [37], the authors discuss this alignment of teaching and testing for the purpose of designing empirical studies, arguing that if learning goals beyond memorisation should be achieved, ways to test this also have to be found. Bloom's taxonomy is a classification scheme of such learning goals widely used in high school education. It comes in variations, but often six levels are defined based on increasing cognitive demand, which range from lower order thinking skills to higher order thinking skills [38, 39]. In the traditional taxonomy, higher order skills can only be reached if the previous skill is attained, defining a theoretical progression. This formulation of Bloom's taxonomy is displayed in Fig. 4. Bloom's taxonomy is used e.g. in Computer Science Education [40] and is also explored in physics education [41, 42]. Its wide use may be attributed to its simplicity and practicality, particularly in the high school setting, where the hierarchical nature of the 6 levels can be used to set clear learning objectives for students with a variety of backgrounds and interests. Despite this, the taxonomy in its traditional form may come with some limitations, particularly when applied to the context of higher education. In [43], Shaw and Kriek point out that while the cognitive demand associated with the six levels may indeed be hierarchical, learning situations in science education rarely involve only individual levels. Rather, they are inherently connected, for example when applied to problem-solving. In an evaluation problem, for example, comparing two different approaches to address a problem requires understanding them both. Similarly, in the context of QIST, we further re-interpret the taxonomy, for the following reason. The foundational concepts in QIST are well known to be abstract and disconnected from everyday life, far more so than many other areas of Science [43]. As a result, the lower-order skills, understanding and even basic knowledge of concepts in QIST, actually demand application of the higher-order skills. It is impossible to understand ideas such as superposition and entanglement, on which all quantum technologies are based, without creative thought and imagination, as any possible representations of these concepts are inherently non-everyday objects. This abstract nature also demands an inherent comparison to the classical world, and associated evaluative and analytical thinking. Our re-interpretation of Bloom's taxonomy, as applicable in fields such as QIST in which the relationship between the cognitive skills involved is not purely hierarchical, is shown in Fig. 4.

### Bloom's taxonomy applied to Quantum Teleportation

In the following, we show example tasks in the context of quantum teleportation for each of the three targeted skill areas (Sect. 3) and for every level of Bloom's taxonomy, and cite references that can be consulted to approach these tasks.

Figure 4: Revised Classical Bloom's Taxonomy appropriate for use in describing learning outcomes in QIST, adapted from [38]. Since creative abstraction is required in order to understand basic quantum concepts, the relationship between Bloom's thinking skills is non-hierarchical. In the following, we provide example tasks in each of the three targeted skill areas for every level of the original Bloom's taxonomy in the context of quantum teleportation. These don't constitute an exhaustive list of such questions.
Nevertheless, one can assume that the concept of quantum teleportation and its connection to QIST is thoroughly understood if all these tasks are accomplished by a learner.

In this way, Bloom's taxonomy enables the creation of tasks for topics within QIST and the structuring of the educator's intentions for students' learning outcomes.

#### Theory & Analytics

**Remember**: Reproduce the quantum gates used in the quantum teleportation algorithm. [20]

**Understand**: Describe the process of quantum teleportation in principle [20]. Describe why the no-cloning-theorem is not violated [44]. Discuss why no faster-than-light communication is happening. [44]

**Apply**: Describe how quantum teleportation can be applied in quantum communication and quantum computing. [45]

**Analyse**: Examine the effect of decoherence on the resulting state in the quantum teleportation algorithm. [46, 47] Examine the effect of quantum teleportation if it is applied to a qubit that is itself entangled in a multi-qubit system. [48]

**Evaluate**: Argue whether Quantum Teleportation is an inherent advantage of quantum physics with respect to classical physics or if it is more of a workaround necessary due to no-cloning. [45]

**Create**: Design a quantum teleportation algorithm for states of multiple qubits. [49, 50] Design a quantum teleportation algorithm such that three parties can exchange quantum information with each other. [51]

#### Computation & Simulation

**Remember**: Program the quantum teleportation algorithm in Qiskit (see the sketch below). [23]

**Understand**: Structure and comment the code of quantum teleportation to be more generalised and readable, add visualisations for investigating whether the code is working as intended. [52]

**Apply**: Implement the quantum teleportation algorithm into another algorithm [insert algorithm here] where it is needed to function. [53]

**Analyse**: Compare the resulting fidelities of teleported states in various quantum computers for different qubit choices. [47, 52]

**Evaluate**: Select the best hardware to perform quantum teleportation on (being restricted e.g. to the free IBM hardware). [23, 54]

**Create**: Design a "better" algorithm using quantum error correcting code and/or substituting quantum gates for gates with smaller errors. [55] Improve error correction using quantum teleportation. [56]

#### Experiment & Real World

**Remember**: List the necessary optical components used in all-optical quantum teleportation. [57]

**Understand**: Describe how the algorithm can work in practice in the context of quantum optics. [57]

**Apply**: Sketch an experimental design of quantum teleportation incorporated into a quantum cryptography protocol to extend the range of this protocol. [58]

**Analyse**: Differentiate Quantum Teleportation from other quantum optical algorithms shown in e.g. [59]. What are the differences and similarities in experimental setups?

**Evaluate**: Weigh - in the context of quantum communication - different areas of application (urban areas, ground-satellite,...) against each other to conclude which might be the most used/valuable in a quantum future, considering advantages and challenges. [60, 61, 62, 63, 64, 65]

**Create**: Create an experimental design that can teleport two-qubit states. [66]

## 5 Representations

An important part of the question of why and how to teach is why and how to represent the chosen concepts. By representation, here we mean anything external (as opposed to internal or mental), visual or auditory that is used to convey a concept [67].
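As a concrete starting point for the "Program the quantum teleportation algorithm in Qiskit" task listed above, a minimal illustrative sketch (assuming a recent Qiskit version; it is not the supplementary code discussed later) is given here. It uses the deferred-measurement form, in which the classically controlled corrections are replaced by quantum-controlled gates, so that the protocol can be verified with a plain statevector simulation:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace, state_fidelity

# Hypothetical input state |psi> on qubit 0, parametrised by two example angles.
theta, phi = 1.2, 0.4
ref = QuantumCircuit(1)
ref.ry(theta, 0)
ref.rz(phi, 0)
psi_in = Statevector(ref)

qc = QuantumCircuit(3)
qc.ry(theta, 0)          # prepare the state to be teleported on qubit 0
qc.rz(phi, 0)
qc.h(1)                  # create the Bell pair shared by Alice (qubit 1) and Bob (qubit 2)
qc.cx(1, 2)
qc.cx(0, 1)              # Alice's Bell-basis measurement, written as CNOT + H
qc.h(0)
qc.cx(1, 2)              # Bob's corrections in deferred-measurement form:
qc.cz(0, 2)              # X controlled on qubit 1, Z controlled on qubit 0

# Bob's reduced state should equal the input state up to numerical precision.
bob = partial_trace(Statevector(qc), [0, 1])
print("fidelity with |psi>:", np.round(state_fidelity(psi_in, bob), 6))
```

On real hardware one would instead measure qubits 0 and 1 and apply the X and Z corrections conditioned on the two classical measurement outcomes.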
Most of the time, multiple external representations (MERs) will be used in science education as, for example, text will be combined with equations or figures. The Designs-Functions-Tasks (DeFT) Framework [68] is a theoretical framework for the use of MERs that will be summarised here for our purpose. The DeFT framework for MERs consists of three layers to consider when combining representations: Designs, functions and tasks. While designs and tasks consider the how of representing information, the function layer constitutes the why. Therefore, in this work, we will firstly cover the function layer in order to line up the DeFT framework with the taxonomy presented here. ### Functions The _Function_ layer within this framework describes three main roles that MERs can play: Firstly, to _complement_ which means to provide various perspectives and information on the same concept so that learners can benefit from the specific advantages of each representation and to support different processes, i.e. strategies and individual differences of learners. For example, purely text based representations can be complemented by suitable mathematical/symbolic or graphical representations. Another possible function MERs is to _constrain interpretation_ i.e. to specify ambiguous or complex information e.g. coming from text, for example with symbols or mathematical formulas. And lastly to _construct deeper understanding_, i.e. to support learners in achieving new insights on a concept which would be difficult with only one representation [68]. In these various ways, different MERs should be chosen carefully and given as tools to support learners in achieving the levels in Bloom's taxonomy that the educators want them to achieve. For example, in order to answer the question of how to implement the quantum teleportation algorithm into existing quantum algorithms, a schematic like the one in [69] can be used as aid. ### Designs The _Design_ layer constitutes the _how_ of the use of MERs. It describes the number of representations used, the way the information is distributed, the form of the MERs (i.e. pictures, text, animations, sound, equations or graphs), the sequence (i.e. the order in which the representations are shown), and the translation (i.e. indications of relations between the representations, for example visual cues using colouring or spatial proximity). MERs should be designed to reduce unnecessary cognitive tasks as described in the following. ### Tasks Last but not least, the _Task_ layer describes the cognitive tasks that learners need to overcome in order to benefit from the MERs used. Cognitive tasks depend on characteristics of the representations and on characteristics of the learners. For example, learners should gain representational competence, i.e. they need to understand the various Design aspects of the depicted MERs, in order to benefit from them [70]. Cognitive tasks are analysed within the cognitive load theory (CLT) [71, 72]. The core idea is that "Learning, reflected by performance change, requires working-memory capacity" [71]. It is therefore beneficial for learning to make use of multiple different working memory channels, e.g. visual and audio information, at the same time as described by the modality principle [73]. Another known effect of CLT is the redundancy effect suggesting that "redundant material interferes with rather than facilitates learning" [74]. The redundancy effect suggests that experts, i.e. 
learners with high previous knowledge, could be hindered by the use of MERs when the different representations don't provide them with additional information or processes. This is commonly referred to as the expertise-reversal effect [75]. The split attention effect [76] suggests that MERs should be integrated to one single source of information rather than split either spatially or temporally. Another point to consider when using MERs is that learners need to translate between these representations. This translation will be harder the more the representations differ in format and operators like various levels of abstraction or supported strategies [68]. All of these effects have an influence on _extraneous cognitive load_ which is the cognitive load imposed on the learner by the way the information is provided, i.e. the _Design_, and not the complexity of the information itself which is referred to as _intrinsic cognitive load_[77]. When extraneous cognitive load is reduced, learners can focus more on learning the content itself. ## 6 Applying the DeFT-framework to QIST by the example of quantum teleportation Quantum information science and technology is a field with a rich variety of forms of representations. To show this and how to apply the DeFT framework to this space, we again refer to quantum teleportation and show examples of descriptive (i.e. with text), symbolic (i.e. mathematical) and graphical (i.e. with pictures and diagrams) visual representations of the process within the three different approaches. This is chosen as a practical classification of visual external representations, rather than a comprehensive one. As pointed out in e.g. [78], creating a comprehensive classification of representations is a non-trivial endeavour that has been approached in e.g. [79, 80] that does not serve the main contributions of this paper. Rather than giving a comprehensive characterisation of such representations, this application of the DeFT framework serves to show how possible representations can be categorised into the three targeted skill areas and point towards the thought processes that come with choice or creation of multiple suitable external representations using the DeFT framework. As is pointed out in [81], learners need to understand all the representations that they work with, all the given support on the syntactic or semantic level and how these representations connect to each other in order to learn effectively. ### Visual Representations of the Theory & Analytics of Quantum Teleportation DescriptiveWe provide a text/descriptive representation of the _Theory & Analytics_ of quantum teleportation in A. This is an abstract text describing the quantum teleportation protocol as a transfer of quantum information made possible due to the transitivity of entanglement. This description leaves some room for interpretation. For example, it describes a connection between entanglement and measurement that is not necessarily immediately visible. Therefore, it could benefit learners to constrain interpretation by providing symbolic and/or graphical representations to complement the description. SymbolicThere are different ways to represent quantum teleportation symbolically. It is often represented in Dirac ket notation [20]. Within this notation, one can use the computational basis \(|0\rangle\) and \(|1\rangle\). A more abstract formulation uses bell states and bell measurements, i.e. measurements in the bell basis [82]. Both are also shown in A. 
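For reference, one standard way of writing this Bell-state formulation is to expand the input state \(|\psi\rangle_{1}=\alpha|0\rangle+\beta|1\rangle\) together with the shared pair \(|\Phi^{+}\rangle_{23}\) in the Bell basis of qubits 1 and 2,
\[
|\psi\rangle_{1}|\Phi^{+}\rangle_{23}=\tfrac{1}{2}\Big[|\Phi^{+}\rangle_{12}\big(\alpha|0\rangle+\beta|1\rangle\big)_{3}+|\Phi^{-}\rangle_{12}\big(\alpha|0\rangle-\beta|1\rangle\big)_{3}+|\Psi^{+}\rangle_{12}\big(\alpha|1\rangle+\beta|0\rangle\big)_{3}+|\Psi^{-}\rangle_{12}\big(\alpha|1\rangle-\beta|0\rangle\big)_{3}\Big],
\]
so that each of the four Bell-measurement outcomes leaves qubit 3 in the input state up to one of the corrections \(I\), \(Z\), \(X\) or \(ZX\).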
There is a more general formulation with density matrices that can be suitable for a formal mathematical approach to the topic and can also be used for generalisations like the teleportation of mixed states and quantum gates [83]. Within the MER framework, the mathematical representation serves the function of _complementing_, i.e. offering multiple perspectives on the same concept and providing learners with more strategies to explain these concepts to themselves or others by calculation. In addition, the mathematical language can _constrain interpretation_ by being less ambiguous than the text. Lastly, the formulation using bell states and bell measurements could _construct deeper understanding_ by abstraction. Thus it can be hypothesised that the text supported by mathematical representations helps learners understanding and problem solving in this context. Symbolic descriptions of quantum teleportation using the Dirac ket notation can be found in A. GraphicalThe theory of quantum teleportation can be visualised in abstract schematics (Fig. 5) [84, 85]. Abstract representations like this allow for re-design and generalisation of processes like quantum teleportation and could provide benefits even for experts in that regard by supporting higher levels in Bloom's taxonomy. A graphical representation providing an example for the creative abstraction needed to illustrate Bell-state measurements is provided by Fig. 4/5 in [86]. Alternatively, the explicit analytics can e.g. be graphically visualised using circle notation [83] (see Fig. A2) which complements the mathematical notation by providing the information of complex numbers in a graphical form. This is done with circles representing the basis states, inner circles of which the radius represents the absolute value of the corresponding coefficient and a line which represents the phase. Another possibility is introducing dimensionality, assigning every qubit an axis in space [87]. The approaches can be combined [88]. ### Visual Representations of the Computation & Simulation of Quantum Teleportation DescriptiveWe provide computer code of Quantum Teleportation as a supplementary file. The code is annotated with description of what the functions do (complementing the python code/providing support on the syntactic level [78]), which step of quantum teleportation is being executed (providing support on the semantic level [81]) and some other important notes like the impossibility to "read out" the quantum information of a qubit in the real world (providing deeper understanding). SymbolicAs the gate-based language is often used in contexts describing the mathematical procedure, the same can be used for the purpose of supporting the computer code. Various functions and variables used within the code can be seen as the symbolic representation of the computation/simulation of quantum teleportation. This is a concise representation, albeit not easily understandable for someone not Figure 5: Schematic representations of Quantum Teleportation. These illustrations support a high level explanation of quantum teleportation and describe the flow of information, whether that information is quantum or classical as well as the role of the qubits in the protocol. To the left, the description leans more towards the mathematical perspective with states and classical information written as variables. The illustration to the right might be more suitable for learners that are more reluctant towards the mathematical symbols. 
Both combine multiple representations as both are arrow schematics supported by text or symbolic representations. The schematic to the left is adapted from [85] while the schematic to the right is adapted from [57]. as familiar with the python functions and the Qiskit library. Therefore, comments within the computer code are usually used to support the more concise formulation, improving readability. In addition, educators should make sure that learners have the necessary representational competence or know how to gain it using e.g. the Qiskit documentation. GraphicalA common visual representation of the code of quantum teleportation is the corresponding quantum circuit showing a line for every qubit and "boxes" for different gates. This picture can be annotated with the creation of the bell state, bell state measurement and final unitary operations (a question of design within the DeFT framework). It is a more concise representation of the computer code and can therefore serve to constrain interpretation. In addition, the Bloch sphere can be used to describe the state \(|\psi_{1}\rangle\) of the qubit to be teleported and compare it to the state \(|\psi_{3}\rangle\) of the qubit that the state is teleported onto in order to visualise whether the algorithm has worked. It can be used to more easily check whether the code works or not, giving access to the otherwise hidden information to provide learners with another strategy of bug fixing (in practice, this information is, ofcourse, actually hidden and can not be accessed by any means), complementing the Dirac ket notation output. On the Bloch sphere, it can be seen for example whether the X and Z gates are assigned to the wrong classical registers which could be more difficult when comparing the Dirac ket state output. An example of such an algorithm that incorporates the quantum circuit representation and the Bloch sphere is shown in Fig. 11. Educators need to make sure that learners have the necessary representational competence to use these tools. Visual Representations of Experimental Implementation & Real World Application of Quantum Teleportation DescriptiveWe provide a text representation of experiment and real world application of quantum teleportation in Appendix C. It is described how bell states between photons are created and how quantum teleportation can be verified in a proof-of-concept experiment by measuring correlations. This is a surface-level description with the aim of introducing quantum teleportation broadly while leaving open questions, for example how exactly spontaneous parametric down conversion [89] or Hong-Ou-Mandel interference [90] function. Explanations of these could be referenced, providing semantic support for the learners, but possibly increasing cognitive load significantly. SymbolicThe underlying theory can be represented with polarisation letters \(|H\rangle\) and \(|V\rangle\) or arrows \(|\leftrightarrow\rangle\) and \(|\updownarrow\rangle\) as is done in e.g. [57]. This means that the representation is designed such that the Dirac Ket notation is related to the representation of horizontal and vertical polarisation, providing a connection between the two different representations and possibly benefiting learners in understanding of how the qubit is implemented optically. 
GraphicalTo describe real world quantum teleportation graphically, one can use a diagram including Alice and Bob to illustrate the process more abstractly, or show the more explicit optical experimental setup, both of which are commonly done, e.g. [20, 43]. The more abstract representation can complement the information provided in a corresponding text on the overall process while the graphic of the optical experimental setup (see Fig. C1) is more suitable to support a text describing the optical components and experimental setup and procedure. ## 7 Teaching Approach Much of modern pedagogy is underpinned by Voygotsky's social constructivism. While in this article we will not dig into its development, which arose from the field of psychology, we will instead consider its implications for designing Quantum Physics curricula. Put most simply, the chief implication of constructivism for the educator is that learners construct their own understanding, knowledge, and skills, through interaction with their social environment. We argue that teaching must take into account this social nature of learning, and thus propose two key considerations for educators which we call "teaching approaches": the chosen level of inquiry and the degree of cooperative learning. These are heavily intertwined as will become clear in this section. ### Level of Inquiry Inquiry based education is a pedagogical approach in which learning is driven interactively [91]: by asking, and attempting to answer, questions. Inquiry is deeply embedded in science education because it is the nature of scientific study itself. There are many models of Inquiry-based learning (IBL) [92, 93], all sharing a cyclical and phasic nature. For example, the 5E's cycle [92] labels phases with Engagement, Exploration, Explanation, Elaboration, and Evaluation. These can be mapped onto the phases of the scientific inquiry cycle, as shown in Fig 6. For this reason, IBL may be not only beneficial, but in fact necessary for students to gain the skills needed for scientific thought. Given that some degree of IBL is essential, one may ask what strategies can be taken to implement it in a teaching scenario. Pragmatism dictates a distinction Figure 6: The scientific method (left) and the inquiry-based learning cycle (right). Both are cyclical processes, with a common structure (exploration, hypothesis-generation, testing/evaluation, and refinement.) As a result IBL is a powerful means of teaching QIST concepts whilst also engaging scientific thinking skills. between different "levels of inquiry", the most frequently used model of which is Bell and Banchi's (Confirmation, Structured, Guided, Open) [94]. Research suggests that conceptual understanding and subject knowledge are best fostered in more closed, structured IB tasks [91], whilst scientific thinking and inquiry skills are best fostered with more open, unstructured tasks [93]. This may be explained by the greater cognitive load associated with the more open levels of inquiry [93, 95], leaving less cognitive ability to develop conceptual understanding or commit ideas to long-term memory. For this reason, educators must choose carefully the level of inquiry in order to match their learning goals. In the field of QIST, there have been many recent examples of interactive tools which create possibilities for teaching approaches which span the four levels of inquiry [96]. 
For example, the visual and gamified simulation tool for Quantum Computing, Quantum Odyssey [97], is designed with a coherent structure of stand-alone scenarios addressing individual concepts, such as particular gate operations and communication protocols. These can be used to create more structured and confirmatory inquiry scenarios in which participants are both introduced to concepts and walked through them towards attaining particular targeted knowledge and understanding around the concepts. On the other hand, the Quantum Composer tool is suited to truly open inquiry as it provides a blank slate interface in which staff and students may construct any one-dimensional quantum system and investigate its properties. In fact the tool has been used to support research in real problems in QIST [98, 99], which demonstrates how this form of inquiry can support scientific thinking as a learning outcome. Both of these tools provide a means of visual representation of concepts in QIST, but it should be noted that IBL models need not only use visualisation, but other forms of representation as described in Sect. 5. ### Cooperative Learning Cooperative learning is an educational practice where learners work in small groups in order to achieve learning goals under the instruction of an educator. There is a large amount of empirical evidence that cooperation is more effective than competition and individualistic efforts in enabling higher achievement and productivity and that therefore, cooperative learning improves learning outcomes [100, 101, 102]. This is only true, however, if cooperative learning is implemented correctly. In [101], five basic elements of effective cooperative learning are identified: (1) positive interdependence, i.e. everyone is needed and contributes to the group's goal, (2) face-to-face promotive interaction, i.e. enabling constructive help and criticism within the group, (3) individual accountability and personal responsibility to reduce diffusion of responsibility, (4) frequent use of interpersonal and small group social skills that should be promoted and rewarded and (5) frequent, regular group processing of current function, i.e. conjoint reflection on the group work. Findings also suggest that this depends on the level of inquiry, because more open ended questions are an effective tool to promote group discussion [103]. Therefore, the decision of how to incorporate cooperative learning into teaching is intertwined with the decision on the level of inquiry discussed previously. There are findings that suggest that cooperative learning has advantages especially in an interdisciplinary context for tasks requiring input from multiple perspectives [104]. Therefore, cooperative or team-working skills should be fostered as they are especially needed not only in general, but also more specifically in the interdisciplinary context of QIST [2, 105, 106]. Findings suggest that diverse teams perform better than homogeneous teams under the right conditions [107, 108]. One such condition is psychological safety, reducing dysfunctional social barriers within the group. However, the exact impact of cognitive diversity on team performance, i.e. the differences of the way people think which is supposedly influenced by academic background, is still an open research question [109, 110]. 
Part of promoting functional teamwork is improving students' understanding that their own previous knowledge and skills, as well as those of others, are relevant to the group and the common goals that the group wants to achieve [111, 112], and therefore promoting a "diversity mindset" [113]. Diversity in academic background is one dimension of diversity, so one can argue that cooperative learning might be especially beneficial in the field of QIST, where the academic backgrounds of students vary. In practice, more open tasks that require multiple perspectives are often tasks reaching into higher levels of Bloom's taxonomy. These tasks are probably more suitable for cooperative learning methods because they allow cognitive diversity to be utilised more effectively. Making sure teams are diverse in academic background could help foster cooperative skills and improve group performance and learning. Examining this effect is a possible direction of future empirical research in QIST education. ### Teaching Approach to Quantum Teleportation In order to show how to put the framework to use, we present an example lesson on quantum teleportation using our framework. It is structured using the what - why - how approach and the corresponding steps in the framework. **What:** This example teaching unit could be part of the example curriculum presented in Fig. 3. The goal of this curriculum is to introduce learners to the basics of quantum technologies, targeting primarily applications in the real world. Prior to the lesson, learners have been introduced to single-photon states and optical equipment like beam splitters and wave plates. They know how to use single photons to establish secret keys for encrypting messages via the BB84 key distribution protocol. Then, they were introduced to the creation of entangled photon pairs via spontaneous parametric down conversion and to a possible use of entanglement for efficient transport of classical information using superdense coding. During the latter, they learned the concepts of bell states and bell state measurements. Learners understand both the experimental implementation and the description of these concepts using qubits and Dirac ket notation. During this example teaching unit, learners are introduced to quantum teleportation. To start the lesson, quantum teleportation can be introduced as a quantum protocol where - in contrast to superdense coding, which is designed to send classical information - quantum information is transported over arbitrary distances. **Why:** After this teaching unit, learners should understand the theory behind quantum teleportation and its experimental realisation in quantum optics. They are able to apply quantum teleportation in quantum networks in practice and can discuss applications e.g. in quantum key distribution, comparing it to classical networks and classical key distribution in these networks. Learners can also evaluate the challenges that quantum networks pose and discuss possible ways of coping with these challenges. In terms of Bloom's taxonomy, this means that the learners should reach the level of _Understanding_ in the _Theory & Analytics_ area of expertise and the _Evaluate_ level in the _Experiment & Real world_ area of expertise. This should enable them to understand practical problems within quantum communication and make justified decisions regarding various aspects of the implementation of the quantum teleportation algorithm. **How:** There are infinitely many possible ways to teach quantum teleportation with these prerequisites and goals in mind. 
We will discuss only one possibility that incorporates the ideas laid out in this paper. For this, we propose four steps. In the first step, learners are introduced to the idea behind quantum teleportation in general. In the second step, learners learn the explicit analytics behind the protocol. Then, they plan an experiment to test the protocol. After steps two and three, learners should have achieved the _Understanding_ level within Bloom's taxonomy in the _Theory & Analytics_ and the _Experiment & Real world_ areas of expertise. In the last step, learners are tasked with planning a presentation discussing applications of the protocol and comparing its use cases for quantum key distribution via quantum networks to classical key distribution in classical networks, aiming to reach the _Evaluate_ level in the _Experiment & Real world_ area of expertise. 1. Learners are introduced to the quantum teleportation protocol with a description and a schematic like the one in Fig. 5 via a presentation. The general idea of a transfer of quantum information by a transfer of entanglement is introduced. The presenter should keep the principles and effects of cognitive load theory summarised in Sect. 5 in mind. The learners are asked, in a group discussion, why the protocol does not allow faster-than-light communication and why it does not contradict the no-cloning theorem. To end the presentation, possible applications in quantum computing and quantum communication are touched on. 2. Learners are instructed to find out the exact mathematical operations needed for the quantum teleportation protocol (see Appendix A). For this, they know that a bell pair \(|\mathrm{EPR}\rangle\) has to be created and that the qubit to be teleported is part of a bell state measurement performed by Alice together with one of the two qubits in the bell pair. The aim is to find the right unitary operations that Bob has to apply to the third qubit. For this, circle notation or dimensional circle notation can be used, or just the Dirac ket notation. The choice of representation can be left to the learners (which representation is the most helpful for this purpose is an open didactical research question). Examples of possible graphical representations are shown in Fig. A2. This task is probably not suitable for cooperative approaches, because it is a very closed task in terms of inquiry. 3. The third task is to plan an optical experiment to test whether quantum teleportation actually works in reality. For this, a schematic like the one in Fig. C1, a list of necessary optical components and a descriptive text should be created, as well as a methodology and a prediction for the experiment. The learners are told to use the ideas of how to create an entangled photon pair and how to perform a Bell measurement. Therefore, they need to have or develop skills at the _Apply_ level of Bloom's taxonomy regarding the previous topics of bell state creation and measurement. They need to figure out how to verify that an entangled photon pair has been created and how to prepare different initial states. The learners can try to predict Bob's measurement outcome probabilities depending on Alice's measurement outcomes. A problem in this experiment is that Bob has little time to adjust his experimental setup. To solve this, one could prolong the path of Bob's photon and apply conditional optical gates as in e.g. [64], or, as in [57], only consider the bell state \(\psi^{-}\) (coincidences at Alice's two detectors) and discard all other cases. 
In that case, Bob's measurement outcome is certain and he can check whether this is indeed the case, referring to measurements at detector \(p\). This task can be done in groups, as its openness and the possibility to incorporate many technical details into the experiment can promote group discussion. 4. Preparing and giving a presentation of the finished work, as is usually done in science and industry, is a suitable final task. In this presentation, which is again prepared and given in a group, learners discuss how to integrate quantum teleportation into quantum networks, what these networks could look like and what they could be used for. They are tasked with discussing quantum key distribution over large distances, where multiple network nodes are necessary. In addition, learners discuss the challenges of building such networks, especially decoherence, quantify these errors from the literature and explain possible measures to cope with these challenges. They then evaluate the advantages and disadvantages in comparison to classical networks that are secured with classical key distribution. Groups should make a plan and split up, completing subtasks individually or in pairs and staying in contact. For example, one person could learn about classical networks and inform the others about how they function, while another person learns about the details of spontaneous parametric down conversion, such that this information can be incorporated into the final presentation. This procedure makes sure that the group members are positively interdependent. Communication and social interaction should play a role in the working process, and regular group meetings and discussions about the current status should be encouraged [101]. Learners are also tasked with finding or creating suitable representations for conveying their thoughts and ideas. 5. Outlook and possible next steps: If sufficient lab equipment is available, preparing this protocol and testing it in a real lab could be a possible next step. The amount of time this would take, however, can only be estimated by the educators. It could be a difficult and time-consuming experiment to make work, depending on the available equipment and the preparation done. Alternatively, the learners could be tasked with programming quantum teleportation in a programming language of their or the educator's choice. An example of such a program is shown in Fig. 11, using a python notebook provided in the supplementary files. Different levels of fill-in code could be prepared for the students so that their different backgrounds in python and Qiskit coding are taken into account. Then, learners would have reached at least the level of _Understanding_ in Bloom's taxonomy from each of the three different perspectives that QIST offers. ## 8 Outlook and Discussion Given the substantial expansion in the number of courses and degree programs attempting to educate a workforce in the specialist field of QT, it is crucial that careful attention is paid to how to do this most effectively. The QCTF approach described in this article is intended as a guiding tool for educators, in a pragmatic and digestible manner, reducing an overwhelming library of educational research down to a single framework which may be used when designing educational material and classroom approaches. 
The framework is structured in the manner of "what" - "why" - "how", and thus breaks down curriculum construction into individual elements, which may be addressed with the support of didactical theory and tools. The first - which content to include - may be addressed using the Quantum Technology Competence Framework, designed as a common language for skills in QIST, and a major EU policy initiative which will become widespread in years to come. One must also select which skills are intended to be targeted in their education: be they theoretical, computational, or experimental. These skills are part of both the "what" and the "why", as they are in themselves a learning outcome as skills to be developed through the education, but equally they are skills used akin to the knowledge used in addressing the areas of the Competence Framework representing field-specific content. Educators must then select the learning-outcome specific thinking skills they wish to address. Foundational to this approach is Bloom's Taxonomy, which we argue in the field of QIST requires re-interpretation due to the strongly counter-intuitive nature of quantum concepts, but is still a useful guiding principle, albeit with less hierarchy in skill development. Given the content and targeted skills, educators designing material must next decide on the kind of representation they use to present concepts or explain to students such that they can represent concepts themselves. The DeFT framework is the tool intended to support this, and we emphasise that the framework is a support structure which may be used to several degrees of depth. In the most pragmatic and simple application, one may choose tasks based on the representational competences they wish learners to acquire, for example using Dirac's Bra-Ket notation when intending a mathematical approach, or using a simulation tool for a visualisation approach. The final aspect of how the framework is applied is in choosing the teaching approach applied in the classroom. Here we consider the level of inquiry, ranging from more open and exploratory approaches, addressing more scientific thinking skills, to more guided ones appropriate for more directly targeting acquisition of field-specific knowledge. One must also consider how students interact with others in their environment, and cooperation in learning can be a key factor in determining the exact activities used for teaching, beyond traditional "chalk and talk" approaches which lack the benefit of a social underpinning. We emphasise that the Quantum Curriculum Transformation Framework is a guiding structure, intended to distil the results of educational research into a pragmatic approach for educators. Whilst most effectively applied in full, educators may also pick aspects from the framework to address individually - for example refining primarily their level of inquiry, or choosing topics using the Competence Framework. The framework is also open to refinement and iteration based on community feedback, as more and more educational material is developed in this field, and in particular with an eye to a didactically-sound approach. This feedback may be aggregated into iterations of the framework, which we leave for future work, in the framework of the flagship European Quantum Technology education projects, DigiQ [114] and QTIndu [115]. ## Acknowledgements J.B. acknowledges support by the project QuanTUK at the RPTU in Kaiserslautern, supported by the Federal Ministry of Education and Research (FKZ13N15995). 
## Appendix A Visual Representations of the Theory & Analytics of Quantum Teleportation Here, we categorise typical representations in QIST into three categories, but point out that this is neither a comprehensive categorisation of representations [78] nor a comprehensive list of possible representations of quantum teleportation. These representations are suitable for introductory QIST courses targeting different skills in the three skill areas (Sect. 3). ### Descriptive Quantum teleportation can be seen as a transfer of quantum information made possible by a transfer of entanglement [20]. Qubit #1 is in an arbitrary state that is to be teleported to qubit #3. For this, qubit #3 is entangled with qubit #2 in a bell state. Upon performing a bell measurement on qubit #1 and qubit #2, one will get one of four possible results with equal probability. By doing so, the quantum information of the first qubit is transferred to the third. However, by definition of entanglement, the exact state of the third qubit after the measurement depends on the measurement result. This way, quantum information is transported over arbitrary distances, but can only be retrieved after the classical information has been transmitted. This information is not copied, because the quantum information originally contained in the first qubit collapsed due to the measurement. ### Symbolic The quantum teleportation algorithm can be seen as a gate-based computing algorithm and is explained in a similar way in many textbooks. The system starts in the state \(\ket{\psi_{i}}\) that consists of a bell state \(\ket{\psi_{23}}\) and the state \(\ket{\psi_{1}}\) of qubit #1 to be teleported: \[\ket{\psi_{i}} =\ket{\psi_{1}}\otimes\ket{\psi_{23}} \tag{11}\] \[=(\alpha\ket{0}+\beta\ket{1})\otimes(\frac{1}{\sqrt{2}}\ket{00}+ \frac{1}{\sqrt{2}}\ket{11})\] (12) \[=\frac{\alpha}{\sqrt{2}}\ket{000}+\frac{\beta}{\sqrt{2}}\ket{100}+\frac{\alpha}{\sqrt{2}}\ket{011}+\frac{\beta}{\sqrt{2}}\ket{111}. \tag{13}\] Alice entangles qubit #1 and #2 using a CNOT-gate and applies a Hadamard gate on qubit #1 to prepare a bell state measurement of qubit #1 and #2 to obtain the final state \(\ket{\psi_{f}}\) before measuring: \[\ket{\psi_{f}} =H_{1}CNOT_{12}\ket{\psi_{i}} \tag{14}\] \[=H_{1}(\frac{\alpha}{\sqrt{2}}\ket{000}+\frac{\alpha}{\sqrt{2}} \ket{011}+\frac{\beta}{\sqrt{2}}\ket{110}+\frac{\beta}{\sqrt{2}}\ket{101})\] (15) \[=\frac{1}{2}\left(\alpha(\ket{000}+\ket{100})+\alpha(\ket{011}+ \ket{111})+\beta(\ket{010}-\ket{110})+\beta(\ket{001}-\ket{101})\right). \tag{16}\] To show what happens upon measuring qubit #1 and #2, the state can be rewritten as \[\ket{\psi_{f}}= \ket{00}\otimes(\alpha\ket{0}+\beta\ket{1}) \tag{14}\] \[+ \ket{01}\otimes(\beta\ket{0}+\alpha\ket{1})\] (15) \[+ \ket{10}\otimes(\alpha\ket{0}-\beta\ket{1})\] (16) \[+ \ket{11}\otimes(\beta\ket{0}-\alpha\ket{1}). \tag{17}\] Upon measuring, Bob's qubit \(\#3\) will be in one of these four possible states and Bob needs to apply gates depending on the measurement result in order to obtain the initial state of qubit \(\#1\): \[00:\alpha\ket{0}+\beta\ket{1} \tag{18}\] \[01:\beta\ket{0}+\alpha\ket{1}\xrightarrow{X}\alpha\ket{0}+\beta \ket{1}\] (19) \[10:\alpha\ket{0}-\beta\ket{1}\xrightarrow{Z}\alpha\ket{0}+\beta \ket{1}\] (20) \[11:\beta\ket{0}-\alpha\ket{1}\xrightarrow{XZ}\alpha\ket{0}+\beta \ket{1}. 
\tag{21}\] One can alternatively represent the initial state of quantum teleportation using the bell basis: \[\ket{\psi}_{1}\ket{\phi^{+}}_{23}= \frac{1}{\sqrt{2}}(\alpha\ket{000}+\alpha\ket{011}+\beta\ket{100} +\beta\ket{111}) \tag{22}\] \[= \frac{1}{2}\big[\ket{\phi^{+}}_{12}(\alpha\ket{0}+\beta\ket{1})_{3}\] (23) \[+ \ket{\phi^{-}}_{12}(\alpha\ket{0}-\beta\ket{1})_{3}\] (24) \[+ \ket{\psi^{+}}_{12}(\beta\ket{0}+\alpha\ket{1})_{3}\] (25) \[+ \ket{\psi^{-}}_{12}(\beta\ket{0}-\alpha\ket{1})_{3}\big] \tag{26}\] Then, a bell measurement is performed by Alice to achieve the desired result. This representation supports the more abstract explanation of quantum teleportation using the bell basis [84, 86] (see also Fig. A1). Another alternative is the use of the vector and matrix representation [20], but, considering that these three-qubit vectors would have eight entries, this seems like an impractical solution. ### Graphical Here we provide graphical representations of the application of Alice's gates in the quantum teleportation algorithm. In the more abstract Fig. A1, Alice's shift to the Bell basis ("looking" at the system differently) transforms the system such that Bob can receive the appropriate information, while the representations in Fig. A2 are more explicit and only represent the step in the computational basis. ## Appendix B Visual Representations of the Computation & Simulation of Quantum Teleportation In this section, we provide examples of descriptive, symbolic and graphical representations of the quantum teleportation algorithm or parts of the algorithm that can be used to target the _Computation & Simulation_ area of expertise. ### Descriptive Considering the _Computation & Simulation_ area of expertise, one can describe the algorithm in terms of the quantum gates that are used, which is also done e.g. in [20]. The second and third qubit are initialised in the state \(\ket{0}\), while the first qubit is in an arbitrary state to be teleported. Then, the second and third qubit are entangled via a Hadamard gate on the second qubit and a CNOT gate with the second as control and the third as target. The same combination of gates (in reverse order) is then applied to the first and second qubit to prepare the bell state measurement. Then, the first and second qubit are measured to a classical register. In order for the third qubit to end up in the original state of the first qubit, quantum gates have to be applied to it depending on the measurement result, using classical if-operations. If the measurement result of the first qubit is 1, a \(Z\)-gate has to be applied. If the measurement result of the second qubit is 1, an \(X\)-gate has to be applied. The order matters only up to a global phase. ### Symbolic As symbolic representation, the same as in Appendix A can be chosen. ### Graphical Computer code is also a form of using multiple external representations. Python functions can be regarded as symbolic, while comments and pictures are supportive representations useful for understanding the code and bug fixing. Some excerpts from a python notebook for quantum teleportation are depicted in Fig. 12; a minimal stand-alone sketch of such a circuit is also included after Appendix C for reference. ## Appendix C Visual Representations of Experimental Implementation & Real World Application of Quantum Teleportation In this section, we provide examples of descriptive, symbolic and graphical representations of the quantum teleportation algorithm or parts of the algorithm that can be used to target the _Experiment & Real World_ area of expertise. ### Descriptive One possible experimental approach to quantum teleportation is described in [57] in the context of quantum optics, i.e. photons. 
Initiated by a light pulse in the ultraviolet (UV) range, a pair of entangled photons is created via a process called spontaneous parametric down conversion (SPDC) [89] in a non-linear crystal, in such a way that the polarizations are anti-correlated. When the pair now passes through a polarising beam splitter (PBS), they are spatially separated. One can be sent to Bob, the other to Alice. The initial UV pulse is sent back through the crystal to create a second pair of entangled photons, one of which will be the photon to be teleported for the sake of the experiment, the other of which will serve as a trigger photon, indicating that the others are on the way. Alice's photon and the photon to be teleported are superposed on another beam splitter. If they interfere at that beam splitter such that each of the two detectors triggers, this shows that they were in an anti-correlated state (a phenomenon called Hong-Ou-Mandel interference [90]). As photon 1 and 2 were anti-correlated, as well as photon 2 and 3, we know (apart from a phase) that photon 3 is now in the original state of photon 1. This phase is detected by Bob using another PBS. ### Symbolic The same symbolic representation as in Appendix A can be chosen, but \(|0\rangle\) and \(|1\rangle\) can be represented as \(|\leftrightarrow\rangle\), \(|\updownarrow\rangle\) to represent the experimental implementation of qubits, as done in e.g. [57]. ### Graphical Fig. C1 shows the explicit experimental setup of quantum teleportation and refers to some processes and components that can be used for optical quantum teleportation, namely a nonlinear crystal for SPDC, mirrors, polarisers, (polarising) beam splitters and detectors. Figure C1: Illustration of experimental quantum teleportation as introduced in [57]. An ultraviolet pulse is shot at a nonlinear crystal (blue) and back through the crystal to create four photons, labelled 1, 2 and 3, and an idler photon that is used to indicate that the photons were created. Photon 2 and 3 are entangled. Photon 1's state is manipulated using a polarizer and Alice interferes photon 1 and 2 at a beam splitter. Upon detection of photon 1 and 2 at different detectors, photon 3 has, apart from a phase, the same polarization as photon 1 initially had, which Bob can confirm using detectors and a polarizing beam splitter.
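To complement the notebook excerpts referenced in Appendix B (Fig. 12), the following is a minimal, self-contained sketch of the teleportation circuit in Qiskit, using the qubit labelling of Appendix A (qubit #1 carries the state to be teleported, qubits #2 and #3 form the bell pair). It is an illustrative reconstruction rather than the notebook from the supplementary files; to keep the check of the final state simple, Bob's classically conditioned corrections are written in their equivalent deferred-measurement (quantum-controlled) form.

```python
# Minimal Qiskit sketch of the teleportation circuit (illustrative reconstruction,
# not the supplementary notebook). Qubit labelling as in Appendix A:
# q0 ~ qubit #1 (state to teleport), q1 and q2 ~ qubits #2 and #3 (bell pair).
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

qc = QuantumCircuit(3)
qc.ry(1.23, 0)                     # prepare an arbitrary state alpha|0> + beta|1> on qubit #1
state_in = partial_trace(Statevector(qc), [1, 2])   # state of qubit #1 before teleportation

qc.h(1)                            # create the bell pair |EPR> on qubits #2 and #3
qc.cx(1, 2)

qc.cx(0, 1)                        # Alice's bell-measurement basis change on qubits #1 and #2
qc.h(0)

# In the notebook version, qubits #1 and #2 are measured here and Bob applies
# X / Z conditioned on the classical results (cf. Appendix A, Eqs. (18)-(21)).
# For a simple statevector check we use the equivalent quantum-controlled form:
qc.cx(1, 2)                        # X on qubit #3, controlled on qubit #2
qc.cz(0, 2)                        # Z on qubit #3, controlled on qubit #1

state_out = partial_trace(Statevector(qc), [0, 1])  # state of qubit #3 afterwards
print(np.round(state_in.data, 3))  # the two reduced density matrices should coincide;
print(np.round(state_out.data, 3)) # qiskit.visualization.plot_bloch_multivector gives the Bloch-sphere view
```

Swapping the two conditional corrections (or their control bits) makes the printed matrices disagree, which is exactly the Bloch-sphere bug-fixing check described for Fig. 11.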
2304.08833
Invertible QFTs and differential Anderson duals
This is the proceeding of a talk given at Stringmath 2022. We introduce a Cheeger-Simons type model for the differential extension of Anderson dual to generalized homology theory with physical interpretations. This construction generalizes the construction of the differential Anderson dual to bordism homology theories, given in a previous work of Yonekura and the author.
Mayuko Yamashita
2023-04-18T08:56:11Z
http://arxiv.org/abs/2304.08833v2
# Invertible QFTs and differential Anderson duals ###### Abstract This is the proceedings of a talk given at Stringmath 2022. We introduce a Cheeger-Simons type model for the differential extension of the Anderson dual to a generalized homology theory with physical interpretations. This construction generalizes the construction of the differential Anderson dual to bordism homology theories, given in a previous work of Yonekura and the author [11]. ## 1. Introduction In this article, we introduce a Cheeger-Simons type model for the differential extension of the Anderson dual to a generalized homology theory with physical interpretations. This construction generalizes the construction of the differential Anderson dual to bordism homology theories, given in a previous work of Yonekura and the author [11]. The Anderson duals have been important in the recent study of quantum field theories (QFTs) in terms of algebraic topology. The Anderson duals to bordism homology theories are conjectured to classify invertible QFTs by Freed and Hopkins [14]. Based on this conjecture, Yonekura and the author gave a model \(I\Omega^{G}_{\mathrm{dR}}\) for the Anderson dual to \(G\)-bordism homology theory with a physical interpretation [11]. Interestingly, it turns out that the natural object produced from QFTs is a _differential refinement_ of it, and passing to the topological group corresponds to taking deformation classes. The construction has a similarity to the Cheeger-Simons model of differential ordinary cohomology in terms of _differential characters_ [12]. In this article, we explain this construction in a generalized form. We provide Cheeger-Simons type models of the Anderson duals to generalized homology theories. More precisely, we construct a differential extension \(\widehat{IE}^{*}_{\mathrm{dR}}\) of the Anderson dual \(IE^{*}\) to a spectrum \(E\), provided that a differential _homology_ \(\widehat{E}_{*}\) is given. Differential cohomology theories have been much studied, but it seems that the homology version has not attracted so much interest. Elmrabty [15] defines differential \(K\)-homology, and as far as the author knows it is the only example existing explicitly in the literature. But certainly there are situations where we want to refine generalized homology in a differential way, for example in the case of bordism homology theories. Such situations also naturally arise in physics; for example D-branes fit into the framework of differential \(K\)-homology. Differential bordism theories naturally arise in (non-topological) QFTs. Thus the author feels that it is valuable to have a general framework for differential refinements in the homology setting, as we do in Section 4. Given a differential homology theory \(\widehat{E}_{*}\), the corresponding differential Anderson dual cohomology \(\widehat{IE}^{*}_{\mathrm{dR}}\) consists of pairs \((\omega,h)\), where \(h\colon\widehat{E}_{n-1}\to\mathbb{R}/\mathbb{Z}\) is a group homomorphism which satisfies a compatibility with a closed differential form \(\omega\in\Omega^{n}(X;E^{\bullet}(\mathrm{pt})\otimes\mathbb{R})\). Applied to \(E=MTG\), we recover the model \(\widehat{I\Omega^{G}_{\mathrm{dR}}}\) in [10], where the homomorphism \(h\) is regarded as the complex phase of the _partition function_ of an invertible QFT. Applied to \(E=H\mathbb{Z}\), we recover the Cheeger-Simons differential character groups (note that we have the Anderson self-duality \(IH\mathbb{Z}\simeq H\mathbb{Z}\)). This article is organized as follows. 
The first two sections are devoted to preliminaries. We give a brief introduction to differential cohomology theories in Section 2. We explain invertible QFTs and Anderson duals in Section 3. In Section 4, we introduce differential homology theories, and in Section 5 we construct the differential Anderson duals. In Section 6 we explain the physical interpretation of our model. ### Notations and Conventions In this article we use differential forms and currents. * \(\Omega^{*}(X)=\Omega^{*}(X;\mathbb{R})\) denotes the space of differential forms on \(X\) with coefficients in \(\mathbb{R}\). \(\Omega_{*}(X):=\operatorname{Hom}_{\mathrm{conti}}(\Omega^{*}(X),\mathbb{R})\) denotes the space of compactly supported currents on \(X\). * If \(V_{\bullet}\) is a graded vector space over \(\mathbb{R}\), we topologize it as the direct limit of all finite-dimensional subspaces and define \(\Omega^{*}(X;V_{\bullet})\). We also set \[\Omega_{n}(X;V_{\bullet}):=\oplus_{p+q=n}\operatorname{Hom}_{\mathrm{conti}}( \Omega^{p}(X),V_{q}).\] * The de Rham chain complex is denoted as \[\dots\xrightarrow{\partial}\Omega_{n+1}(X;V_{\bullet})\xrightarrow{\partial} \Omega_{n}(X;V_{\bullet})\xrightarrow{\partial}\dots.\] * For a spectrum \(E\), we set \(V_{\bullet}^{E}=E_{\bullet}(\mathrm{pt})\otimes\mathbb{R}\) and \(V_{E}^{\bullet}:=E^{\bullet}(\mathrm{pt})\otimes\mathbb{R}\), so that \(V_{n}^{E}=V_{E}^{-n}\). The homology Chern-Dold character for \(E_{*}\) is denoted by \[\operatorname{ch}\colon E_{*}(-)\to H_{*}(-;V_{\bullet}^{E}).\] ## 2. Background : Differential cohomology In this section we give a brief review of generalized differential cohomology theories. A _differential extension_ of a generalized cohomology theory \(E^{*}\) is a refinement \(\widehat{E}^{*}\) of the restriction of \(E^{*}\) to the category of smooth manifolds, which contains differential-geometric data. ### Ordinary differential cohomology #### 2.2. \(\widehat{H}^{2}(X;\mathbb{Z})\) In order to illustrate the idea of differential cohomology, first we explain a model of the second ordinary differential cohomology group. Recall that \(H^{2}(X;\mathbb{Z})\) is identified with the group of isomorphism classes of complex line bundles over a space \(X\). The differential group is defined by adding differential information to it. **Definition 2.1**.: For a manifold \(X\), we define \(\widehat{H}^{2}_{\mathrm{geom}}(X;\mathbb{Z})\) to be the abelian group of isomorphism classes of hermitian line bundles with unitary connections over \(X\). The differential group is a refinement of the topological group, in the sense that there is a surjection (forgetful map) \[I\colon\widehat{H}^{2}_{\mathrm{geom}}(X;\mathbb{Z})\to H^{2}(X;\mathbb{Z}), \quad[L,\nabla]\mapsto[L]. \tag{2.2}\] But \(\widehat{H}^{2}_{\mathrm{geom}}\) has more information than \(H^{2}\). We can extract the differential information by taking the curvature, \[R\colon\widehat{H}^{2}_{\mathrm{geom}}(X;\mathbb{Z})\to\Omega^{2}_{\mathrm{ clo}}(X),\quad[L,\nabla]\mapsto F_{\nabla}/(2\pi\sqrt{-1}). \tag{2.3}\] Connections on topologically trivial line bundles are identified with one-forms. This gives a map \[a\colon\Omega^{1}(X)\to\widehat{H}^{2}_{\mathrm{geom}}(X;\mathbb{Z}),\quad \alpha\mapsto[\underline{\mathbb{C}},d+2\pi\sqrt{-1}\alpha]. \tag{2.4}\] These maps are part of the _differential cohomology hexagon_. Here \(\Omega^{n}_{\mathrm{clo}}(X)_{\mathbb{Z}}\) denotes the closed forms of integral periods. The above diagram is commutative, and the diagonal sequences are exact. 
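In the notation above, the two exact diagonal sequences of this hexagon take the standard form (written out here for reference, with \(\Omega^{1}_{\mathrm{clo}}(X)_{\mathbb{Z}}\) the closed one-forms of integral periods): \[0\to H^{1}(X;\mathbb{R}/\mathbb{Z})\to\widehat{H}^{2}_{\mathrm{geom}}(X;\mathbb{Z})\xrightarrow{R}\Omega^{2}_{\mathrm{clo}}(X)_{\mathbb{Z}}\to 0,\] \[0\to\Omega^{1}(X)/\Omega^{1}_{\mathrm{clo}}(X)_{\mathbb{Z}}\xrightarrow{a}\widehat{H}^{2}_{\mathrm{geom}}(X;\mathbb{Z})\xrightarrow{I}H^{2}(X;\mathbb{Z})\to 0.\] Here the flat classes in \(H^{1}(X;\mathbb{R}/\mathbb{Z})\) correspond to flat line bundles, recorded by their holonomy. 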
We would like to define \(\widehat{H}^{n}(X;\mathbb{Z})\) for higher \(n\), having analogous structure maps and hexagons. #### 2.2.1. Cheeger-Simons' differential characters Here we explain a model for a differential extension of \(H\mathbb{Z}\) given by Cheeger and Simons, in terms of _differential characters_[10]. For a manifold \(X\) and a nonnegative integer \(n\), the group of _differential characters_\(\widehat{H}^{n}_{\mathrm{CS}}(X;\mathbb{Z})\) is the abelian group consisting of pairs \((\omega,k)\), where * A closed differential form \(\omega\in\Omega^{n}_{\mathrm{clo}}(X)\), * A group homomorphism \(k\colon Z_{\infty,n-1}(X;\mathbb{Z})\to\mathbb{R}/\mathbb{Z}\), * \(\omega\) and \(k\) satisfy the following compatibility condition. For any \(c\in C_{\infty,n}(X;\mathbb{Z})\) we have (2.5) \[k(\partial c)\equiv\langle\omega,c\rangle_{X}\pmod{\mathbb{Z}}.\] Here \(C_{\infty,n}(X;\mathbb{Z})\) and \(Z_{\infty,n-1}(X;\mathbb{Z})\) denote the smooth singular chains and cycles with integer coefficients. We have homomorphisms \[R_{\mathrm{CS}}\colon\widehat{H}^{n}_{\mathrm{CS}}(X;\mathbb{Z}) \to\Omega^{n}_{\mathrm{clo}}(X),\quad(\omega,k)\mapsto\omega\] \[a_{\mathrm{CS}}\colon\Omega^{n-1}(X)/\mathrm{Im}(d)\to\widehat{H} ^{n}_{\mathrm{CS}}(X;\mathbb{Z}),\quad\alpha\mapsto(d\alpha,\alpha).\] and the quotient map gives \[I_{\mathrm{CS}}\colon\widehat{H}^{n}_{\mathrm{CS}}(X;\mathbb{Z})\to\widehat{H}^{n} _{\mathrm{CS}}(X;\mathbb{Z})/\mathrm{Im}(a_{\mathrm{CS}})\simeq H^{n}(X;\mathbb{ Z}).\] We get the _differential cohomology hexagon_, (2.6) The above diagram is commutative. The diagonal sequences are exact. We can construct an isomorphism between the two models \(\widehat{H}^{2}_{\mathrm{geom}}\) and \(\widehat{H}^{2}_{\mathrm{CS}}\) by taking the holonomy of connections, \[\widehat{H}^{2}_{\mathrm{geom}}(X;\mathbb{Z})\to\widehat{H}^{2}_{\mathrm{CS}}( X;\mathbb{Z}),\quad[L,\nabla]\mapsto(F_{\nabla}/(2\pi\sqrt{-1}),\mathrm{Hol}(L, \nabla)). \tag{2.7}\] #### 2.2.2. Hopkins-Singer's differential cocycles Let \(X\) be a manifold. An \(n\)-th _differential cocycle_ on \(X\) is an element \[(c,h,\omega)\in Z^{n}_{\infty}(X;\mathbb{Z})\times C^{n-1}_{\infty}(X;\mathbb{ R})\times\Omega^{n}_{\mathrm{clo}}(X)\] such that \[\omega-c_{\mathbb{R}}=\delta h. \tag{2.8}\] Here \(C^{*}_{\infty}\) and \(Z^{*}_{\infty}\) denotes the groups of smooth singular cochains and cocycles. We introduce the equivalence relation \(\sim\) on differential cocycles by setting \[(c,h,\omega)\sim(c+\delta b,h-b_{\mathbb{R}}-\delta k,\omega)\] for some \((b,k)\in C^{n-1}_{\infty}(X;\mathbb{Z})\times C^{n-2}_{\infty}(X;\mathbb{R})\). **Definition 2.9** (\(\widehat{H}^{n}_{\mathrm{HS}}(X;\mathbb{Z})\)[Hs05]).: Set \[\widehat{H}^{n}_{\mathrm{HS}}(X;\mathbb{Z}):=\{(c,h,\omega):\text{ differential $n$-cocycle on $X$}\}/\sim\] The group \(\widehat{H}^{n}_{\mathrm{HS}}(X;\mathbb{Z})\) also fits into the hexagon (2.6). In fact we have a natural isomorphism \(\widehat{H}^{n}_{\mathrm{HS}}\simeq\widehat{H}^{n}_{\mathrm{CS}}\). ### Differential \(K\)-theory \(K\)-theory is the most studied generalized cohomology theory, other than \(H\mathbb{Z}\), in the context of differential cohomology. There are various models for topological \(K\)-theories. Correspondingly, there are various models for differential \(K\)-theory. Here we briefly review the model constructed by Freed and Lott [10] in terms of vector bundles with connections. We will explain another model in Subsection 5.2.2 below. 
Recall that the topological \(K\)-theory \(K^{0}(X)\) can be defined in terms of stable isomorphism classes \([E]\) of complex vector bundles over \(X\). The differential refinement is, roughly speaking, given by equipping vector bundles with connections. An element of \(\widehat{K}^{0}_{\mathrm{FL}}(X)\) is represented by a triple \((E,\nabla^{E},\eta)\), where \((E,\nabla^{E})\) is a hermitian vector bundle over \(X\) with a unitary connection and \(\eta\in\Omega^{2\mathbb{Z}-1}(X)/\mathrm{im}(d)\). We take equivalence classes with respect to a suitable equivalence relation including the relation \[\big{(}E,\nabla^{E}_{1},\eta\big{)}\sim\big{(}E,\nabla^{E}_{0},\eta+\mathrm{CS }(\nabla^{E}_{0},\nabla^{E}_{1})\big{)}\,, \tag{2.10}\] where \(\mathrm{CS}(\nabla^{E}_{0},\nabla^{E}_{1})\) is the Chern-Simons form measuring the difference of the two connections. We have the structure homomorphisms \[I\colon\widehat{K}^{0}_{\mathrm{FL}}(X)\to K^{0}(X),\quad[E,\nabla^{E},\eta]\mapsto[E] \tag{2.11}\] \[R\colon\widehat{K}^{0}_{\mathrm{FL}}(X)\to\Omega^{2\mathbb{Z}}_{\mathrm{clo}}(X),\quad[E,\nabla^{E},\eta]\mapsto\mathrm{Ch}(\nabla^{E})+d\eta \tag{2.12}\] \[a\colon\Omega^{2\mathbb{Z}-1}(X)/\mathrm{im}(d)\to\widehat{K}^{0}_{\mathrm{FL}}(X),\quad\eta\mapsto[0,0,\eta]. \tag{2.13}\] We also have a hexagon similar to (2.6). ### The axiomatic formulation of differential cohomology We explain the axiomatic formulation of differential cohomology theory by Bunke and Schick [1]. We remark that there is another important formulation in terms of sheaves of spectra [1]. Let \(E^{*}\) be a generalized cohomology theory. Let \(N^{\bullet}\) be a graded vector space over \(\mathbb{R}\) equipped with a transformation of cohomology theories \[\mathrm{ch}\colon E^{*}\to H^{*}(-;N^{\bullet}). \tag{2.14}\] The universal choice is \(N^{\bullet}=E^{\bullet}_{\mathbb{R}}(\mathrm{pt})=:V^{\bullet}_{E}\) with \(\mathrm{ch}\) the Chern-Dold homomorphism ([12, Chapter II, 7.13]) for \(E\). For a manifold \(X\), set \(\Omega^{*}(X;N^{\bullet}):=C^{\infty}(X;\wedge T^{*}X\otimes_{\mathbb{R}}N^{\bullet})\) with the \(\mathbb{Z}\)-grading by the total degree. Let \(d\colon\Omega^{*}(X;N^{\bullet})\to\Omega^{*+1}(X;N^{\bullet})\) be the de Rham differential. We have the natural transformation \[\mathrm{Rham}\colon\Omega^{*}_{\mathrm{clo}}(X;N^{\bullet})\to H^{*}(X;N^{\bullet}).\] **Definition 2.15** (Differential extensions of a cohomology theory, [1, Definition 2.1]).: A _differential extension_ of the pair \((E^{*},\mathrm{ch})\) is a quadruple \((\widehat{E},R,I,a)\), where * \(\widehat{E}\) is a contravariant functor \(\widehat{E}\colon\mathrm{Mfd}^{\mathrm{op}}\to\mathrm{Ab}^{\mathbb{Z}}\). * \(R\), \(I\) and \(a\) are natural transformations \[R\colon\widehat{E}^{*}\to\Omega^{*}_{\mathrm{clo}}(-;N^{\bullet})\] \[I\colon\widehat{E}^{*}\to E^{*}\] \[a\colon\Omega^{*-1}(-;N^{\bullet})/\mathrm{im}(d)\to\widehat{E}^{*}.\] We require the following axioms. * \(R\circ a=d\). * \(\mathrm{ch}\circ I=\mathrm{Rham}\circ R\). * For all manifolds \(X\), the sequence (2.16) \[E^{*-1}(X)\xrightarrow{\mathrm{ch}}\Omega^{*-1}(X;N^{\bullet})/\mathrm{im}(d) \xrightarrow{a}\widehat{E}^{*}(X)\xrightarrow{I}E^{*}(X)\to 0\] is exact. In the case \(N^{\bullet}=V^{\bullet}_{E}\) and \(\mathrm{ch}\) is the Chern-Dold homomorphism, we simply call it a _differential extension of \(E^{*}\)_. Such a quadruple \((\widehat{E},R,I,a)\) itself is also called a _generalized differential cohomology theory_. 
We usually abbreviate the notation and just write a generalized differential cohomology theory as \(\widehat{E}^{*}\). The above axiom is equivalent to half of the hexagon (2.6). Hopkins and Singer gave a model of a differential extension for an arbitrary \((E^{*},\operatorname{ch})\) in terms of _differential function spectra_ ([10]). But we cannot guarantee that any differential cohomology theory is isomorphic to the Hopkins-Singer model. For more about this uniqueness issue, we refer to [1]. ## 3. Background : Invertible QFTs and Anderson duals ### QFTs and invertibility Basically, a quantum field theory (QFT) is a symmetric monoidal functor \[\mathcal{T}\colon\operatorname{Bord}_{n}^{\mathcal{S}}\to\mathcal{C}. \tag{3.1}\] Here \(n\) is the dimension of the theory, and \(\mathcal{S}\) specifies a structure on manifolds, such as orientations, Riemannian metrics, spin structures and principal \(G\)-bundles with connections. The domain \(\operatorname{Bord}_{n}^{\mathcal{S}}\) is the \(n\)-dimensional _Bordism category_ of \(\mathcal{S}\)-manifolds. The mathematical formulation for it varies depending on the context. If we are talking about _non-extended topological QFTs (TQFTs)_, \(\operatorname{Bord}_{n}^{\mathcal{S}}\) is an ordinary category whose objects are \((n-1)\)-dimensional closed \(\mathcal{S}\)-manifolds and whose morphisms are given by bordisms, which are compact \(n\)-dimensional \(\mathcal{S}\)-manifolds ([1], [21]). In the case of _extended TQFTs_, \(\operatorname{Bord}_{n}^{\mathcal{S}}\) is a higher category, up to an \((\infty,n)\)-category in the fully extended case ([13]). If we consider possibly _non-topological_ QFTs, we need to impose some smoothness on the functor (3.1). For this we can use the smooth version of (higher) categories formulated by Grady and Pavlov [1]. According to the above variations, the formulation of the target \(\mathcal{C}\) also depends on the context. Mathematically we can allow any symmetric monoidal (\((\infty,n)\)-, smooth, etc.) category. However, the physically natural target category is the category of \(\mathbb{Z}/2\)-graded complex vector spaces \(\operatorname{sVect}_{\mathbb{C}}\) in the non-extended case. In the extended case, it is natural to consider higher categories extending \(\operatorname{sVect}_{\mathbb{C}}\), such as \(\operatorname{sAlg}_{\mathbb{C}}\). We do not know the canonical target in general. Moreover, in order for a functor to be physically meaningful, we need other conditions such as reflection positivity and unitarity. A QFT \(\mathcal{T}\) is called _invertible_ if it factors as \[\mathcal{T}\colon\operatorname{Bord}_{n}^{\mathcal{S}}\to\mathcal{C}^{\times }\subset\mathcal{C}, \tag{3.2}\] where \(\mathcal{C}^{\times}\) is the maximal Picard (\(\infty\)-, smooth, etc.) groupoid of \(\mathcal{C}\). Here a _Picard groupoid_ is a symmetric monoidal category all of whose objects and morphisms are invertible under the monoidal product and the composition, respectively. This means that \(\mathcal{T}\) assigns something invertible to all objects and morphisms of \(\operatorname{Bord}_{n}^{\mathcal{S}}\). For example, if \(\mathcal{C}=\operatorname{sVect}_{\mathbb{C}}\), we have \(\mathcal{C}^{\times}=\operatorname{sLine}_{\mathbb{C}}\), the groupoid of one-dimensional \(\mathbb{Z}/2\)-graded complex vector spaces and grading-preserving invertible linear maps. Invertible QFTs are a special but important class of QFTs. For example they arise as _Symmetry Protected Topological phases (SPT phases)_ in condensed matter physics. 
In that context, physical systems are described by Hamiltonians having uniquely gapped ground states, and the effective theories are considered as invertible theories. Invertible theories are also important in the study of _anomalies_. An \((n-1)\)-dimensional anomalous theory is formulated as a boundary theory of an \(n\)-dimensional invertible theory. Thus the classification of anomalies in \((n-1)\) dimensions reduces to the classification of invertible theories in \(n\) dimensions. ### Classification of invertible QFTs and Freed-Hopkins conjecture Fully extended invertible QFTs are believed to be classified in terms of generalized cohomology. The argument uses the _stable homotopy hypothesis_, which gives an equivalence between Picard \(\infty\)-groupoids and connective spectra. By this the functor (3.1) is transformed into a map of spectra. This means that \(\mathcal{T}\) is regarded as an element of the generalized cohomology represented by the mapping spectrum. For simplicity we focus on the case where the structure \(\mathcal{S}\) is given by a tangential \(G\)-structure (for the precise formulation see [13, Section 3]), where \(G=\{G_{d},s_{d},\rho_{d}\}_{d\in\mathbb{Z}_{\geq 0}}\) is a sequence of compact Lie groups with homomorphisms \(s_{d}\colon G_{d}\to G_{d+1}\) and \(\rho_{d}\colon G_{d}\to\operatorname{O}(d;\mathbb{R})\) satisfying the compatibility with \(\operatorname{O}(d;\mathbb{R})\hookrightarrow\operatorname{O}(d+1,\mathbb{R})\). By the work of Galatius, Madsen, Tillmann and Weiss [10], the spectrum corresponding to the groupoid completion of \(\operatorname{Bord}_{n}^{G}\) is the (shift of the) _Madsen-Tillmann spectrum_ \(MTG\). This is a normal version of the Thom spectrum, and represents the tangential \(G\)-bordism homology theory \(\Omega_{*}^{G}\). On the other hand, the spectrum corresponding to the target \(\mathcal{C}^{\times}\) is unclear if we do not specify it. As explained above, we do not know the universal target of physically meaningful fully extended QFTs. Freed and Hopkins proposed an _ansatz_ that the universal target for invertible _topological_ QFTs corresponds to the spectrum \(I\mathbb{C}^{\times}\), the _Brown-Comenetz dual_ to the sphere (see Subsection 3.3). Based on this ansatz, they conjecture the following. **Conjecture 3.3** ([14, Conjecture 8.37]).: _There is a \(1:1\) correspondence_ \[\left\{\begin{aligned}&\text{deformation classes of reflection positive invertible $n$-dimensional}\\ &\text{fully extended field theories with symmetry type $G$}\end{aligned}\right\}\simeq(I\Omega^{G})^{n+1}(\operatorname{pt}). \tag{3.4}\] In fact they prove that _topological_ ones are classified by the torsion part of \((I\Omega^{G})^{n+1}(\operatorname{pt})\). ### Anderson duals In this subsection we collect basics on the Anderson duals for spectra. For more details, see for example [11, Appendix B] and [12, Appendix B]. First note that the assignment \(X\mapsto\operatorname{Hom}(\pi_{*}(X),\mathbb{R}/\mathbb{Z})\) for each spectrum \(X\) satisfies the Eilenberg-Steenrod axioms, so it is represented by a spectrum denoted by \(I(\mathbb{R}/\mathbb{Z})\) (if we replace \(\mathbb{R}/\mathbb{Z}\) by \(\mathbb{C}^{\times}\), we get the Brown-Comenetz dual \(I\mathbb{C}^{\times}\) to the sphere). On the other hand we have \(H^{n}(X;\mathbb{R})\simeq\operatorname{Hom}(\pi_{n}(X),\mathbb{R})\). So we have a mod \(\mathbb{Z}\) reduction map \(H\mathbb{R}\to I(\mathbb{R}/\mathbb{Z})\). 
The _Anderson dual to sphere \(I\mathbb{Z}\)_ is the spectrum defined as the homotopy fiber \[I\mathbb{Z}\to H\mathbb{R}\to I(\mathbb{R}/\mathbb{Z}). \tag{3.5}\] For a general spectrum \(E\), we define its _Anderson dual_ as the function spectrum, \[IE:=F(E,I\mathbb{Z}). \tag{3.6}\] This implies that we have the exact sequence (Universal Coefficient Theorem) \[0\to\operatorname{Ext}(E_{n}(X),\mathbb{Z})\to IE^{n+1}(X)\to\operatorname{ Hom}(E_{n+1}(X),\mathbb{Z})\to 0. \tag{3.7}\] \(MTG\) represents the \(G\)-bordism homology theory \(\Omega_{*}^{G}\). The Anderson dual \(I\Omega^{G}=F(MTG,I\mathbb{Z})\) fits into the exact sequence \[0\to\operatorname{Ext}(\Omega_{n}^{G}(X),\mathbb{Z})\to(I\Omega^{G})^{n+1}(X) \to\operatorname{Hom}(\Omega_{n+1}^{G}(X),\mathbb{Z})\to 0. \tag{3.8}\] We note that \(\operatorname{Ext}(\Omega_{n}^{G}(X),\mathbb{Z})\) is the torsion part of \((I\Omega^{G})^{n+1}(X)\) for reasonable spaces \(X\). According to the Freed-Hopkins' result about the classification in _topological_ case, this group classifies \(n\)-dimensional invertible _topological_ QFTs on \(G\)-manifolds. Conjecture 3.3 says that we can use whole of \((I\Omega^{G})^{n+1}(X)\) to classify possibly non-topological invertible QFTs. In order to give a better understanding of this conjecture, Yonekura and the author gave a new model for \(I\Omega^{G}\) with a physical interpretation ([11]). It turns out that the natural object produced from QFTs is a differential refinement of it, and passing to topological group corresponds to taking deformation classes. Also it has turned out that the analogous construction applies to give a "QFT-like" model of differential refinement of \(IE\) for general \(E\), provided that a differential _homology_\(\widehat{E}_{*}\) is given. In the rest of this article we explain this construction. ## 4. Differential homology theories In this section we give a formulation of differential _homology_ theories and explain examples. ### The axiom Here we give the axioms for differential extensions of generalized homology theories. It is just a straightforward homology-version of the axiom in [1]. We set \(V_{\bullet}^{E}:=\pi_{\bullet}(E)\otimes_{\mathbb{Z}}\mathbb{R}\). We have the homology Chern-Dold character homomorphism \[\operatorname{ch}\colon E_{*}\to H_{*}(-;V_{\bullet}^{E}). \tag{4.1}\] Here we focus on the differential extension in the coefficient \(V_{\bullet}^{E}\) for our purpose, but the variant in coefficient can be formulated as in the cohomology case (Section 2.4). **Definition 4.2** (Differential extensions of homology theories).: A _differential extension_ of a generalized homology theory \(E_{*}\) is a quadruple \((\widehat{E}_{*},R,I,a)\), where * \(\widehat{E}_{*}\) is a covariant functor \(\widehat{E}_{*}\colon\operatorname{Mfd}\to\operatorname{Ab}^{\mathbb{Z}}\). * \(R\), \(I\) and \(a\) are natural transformations \[R \colon\widehat{E}_{*}\to\Omega_{*}^{\mathrm{clo}}(-;V_{\bullet}^{E})\] \[I \colon\widehat{E}_{*}\to E_{*}\] \[a \colon\Omega_{*+1}(-;V_{\bullet}^{E})/\mathrm{im}(\partial)\to \widehat{E}_{*}.\] * \(R\circ a=\partial\). * \(\mathrm{ch}\circ I=\mathrm{Rham}\circ R\). * For all manifolds \(X\), the sequence (4.3) \[E_{*+1}(X)\xrightarrow{\mathrm{ch}}\Omega_{*+1}(M;V_{\bullet}^{E})/\mathrm{ im}(\partial)\xrightarrow{a}\widehat{E}_{*}(X)\xrightarrow{I}E_{*}(X)\to 0\] is exact. ### Examples #### 4.2.1. 
Differential \(K\)-homology Elmrabty [1] defines differential \(K\)-homology, by refining the Baum-Douglas model of \(K\)-homology in terms of geometric cycles ([1]). This has a physical interpretation in terms of \(D\)-_branes_. Recall that in the Baum-Douglas model, an element in \(K_{n}(X)\) is represented by a \(K\)_-cycle_, a triple \((M,E,f)\) consisting of a closed \(\mathrm{Spin}^{c}\)-manifold \(M\) with \(\dim M\equiv n\pmod{2}\), a complex vector bundle \(E\) over \(M\) and a continuous map \(f\colon M\to X\). The \(K\)-homology group \(K_{n}(X)\) is defined to be the group of equivalence classes of \(K\)-cycles under the equivalence relation generated by direct sum, bordism and "vector bundle modification". Correspondingly, the differential \(K\)-homology group \(\widehat{K}_{n}(X)\) is represented by a _differential \(K\)-cycle_ \(((M,\nabla^{M}),(E,\nabla^{E}),f,\psi)\), where in addition to the data of a \(K\)-cycle, \(M\) is equipped with a \(\mathrm{Spin}^{c}\)-connection \(\nabla^{M}\), \(E\) is equipped with a hermitian metric and unitary connection \(\nabla^{E}\), \(f\) is required to be smooth, and we have additional data \(\psi\in\Omega_{2\mathbb{Z}+n+1}(X)/\mathrm{im}(\partial)\). The group \(\widehat{K}_{n}(X)\) is given by taking equivalence classes under the corresponding equivalence relation suitably modified with differential data. For example, the bordism relation in this case is as follows: if we have a compact \((2\mathbb{Z}+n+1)\)-dimensional \(((W,\nabla^{W}),(\mathcal{E},\nabla^{\mathcal{E}}),f,0)\) with boundary, we set \[\left((\partial W,\nabla^{W}|_{\partial W}),(\mathcal{E}|_{\partial W},\nabla ^{\mathcal{E}}|_{\partial W}),f|_{\partial W},\int_{W}\mathrm{Todd}(\nabla^{W})\mathrm{Ch}(\nabla^{\mathcal{E}})f^{*}\right)\sim 0. \tag{4.4}\] The axiom above is satisfied with the structure maps \[R\left([(M,\nabla^{M}),(E,\nabla^{E}),f,\psi]\right) :=\int_{M}\mathrm{Todd}(\nabla^{M})\mathrm{Ch}(\nabla^{E})f^{*}- \partial\psi, \tag{4.5}\] \[I\left([(M,\nabla^{M}),(E,\nabla^{E}),f,\psi]\right) :=[M,E,f], \tag{4.6}\] \[a\left([\psi]\right) :=[\varnothing,0,(\varnothing\to X),\psi], \tag{4.7}\] where we identify \(\Omega_{n}(X;V_{\bullet}^{K})\simeq\Omega_{2\mathbb{Z}+n}(X)\). #### 4.2.2. Differential ordinary homology We can construct a refinement \(\widehat{H}^{\rm HS}_{*}(-;\mathbb{Z})\) of the ordinary integral homology given by a homology version of differential cocycles [10]. Namely, a _differential \(n\)-cycle_ over \(X\) is defined to be an element \[(c,h,\omega)\in Z^{\infty}_{n}(X;\mathbb{Z})\times C^{\infty}_{n+1}(X;\mathbb{ R})\times\Omega^{\rm clo}_{n}(X) \tag{4.8}\] such that \(\partial h=\omega-c\) as smooth currents (note that we have \(C^{\infty}_{n}\subset\Omega_{n}\), so this makes sense). We define a homomorphism \[\partial\colon C^{\infty}_{n+1}(X;\mathbb{Z})\times C^{\infty}_{n+2}(X; \mathbb{R})\to Z^{\infty}_{n}(X;\mathbb{Z})\times C^{\infty}_{n+1}(X;\mathbb{R })\times\Omega^{\rm clo}_{n}(X) \tag{4.9}\] by \(\partial(b,k):=(\partial b,-\partial k-b,0)\). We define \[\widehat{H}^{\rm HS}_{n}(X;\mathbb{Z}):=\{\text{differential $n$-cycles over $X$}\}/\text{im}\,\partial. \tag{4.10}\] It is straightforward to construct structure maps and check the axioms above. #### 4.2.3. Differential \(G\)-bordism homology theory \(\widehat{\Omega^{G}_{*}}\) Here we construct a differential refinement of tangential \(G\)-bordism homology theories. We use the notations and conventions about differential \(G\)-structures introduced in [11, Section 3]. 
Let \(G=\{G_{d},s_{d},\rho_{d}\}_{d\in\mathbb{Z}_{\geq 0}}\) be tangential structure groups. Then we get the _Madsen-Tillmann spectrum_ \(MTG\), which represents the _tangential \(G\)-bordism homology theory_ \(\Omega^{G}_{*}\). For details see [10, Section 6.6]. The topological group \(\Omega^{G}_{n}(X)\) is the stable \(G\)-bordism group, the group consisting of bordism classes \([M,g^{\rm top},f]\), where \((M,g^{\rm top})\) is an \(n\)-dimensional closed manifold with a stable tangential \(G\)-structure and \(f\colon M\to X\) is a continuous map. The differential refinement \(\widehat{\Omega^{G}_{n}}(X)\) is constructed in terms of _differential stable tangential \(G\)-cycles_, as follows. An \(n\)-dimensional differential stable tangential \(G\)-cycle over \(X\) is a triple \((M,g,f)\), where in this case \(g\) is a _differential_ stable \(G\)-structure (i.e., equipped with a \(G\)-connection) and \(f\colon M\to X\) is required to be smooth (for details see [11, Section 3]). We use the _Bordism Picard groupoid_ \(h{\rm Bord}^{G\nabla}_{n}(X)\) defined in [11, Definition 3.8]. The objects are differential stable tangential \(G\)-cycles \((M,g,f)\) of dimension \(n\) over \(X\), and the morphisms are bordism classes \([W,g_{W},f_{W}]\) of bordisms of differential stable tangential \(G\)-cycles. We need the Chern-Weil construction in this setting ([11, Subsection 4.1.1]). We set (in the general notation introduced in (5.1) below we have \(N^{\bullet}_{G}=N^{\bullet}_{MTG}\); we abbreviate the notation) \[N^{\bullet}_{G}:=H^{\bullet}(MTG;\mathbb{R})=\varprojlim_{d}H^{\bullet}(BG_{d} ;\mathbb{R}_{G_{d}})=\varprojlim_{d}(\text{Sym}^{\bullet/2}\mathfrak{g}^{*}_ {d}\otimes_{\mathbb{R}}\mathbb{R}_{G_{d}})^{G_{d}}. \tag{4.11}\] In the case where \(G\) is oriented, i.e., the image of \(\rho_{d}\) lies in \(\text{SO}(d,\mathbb{R})\) for each \(d\), the \(G_{d}\)-module \(\mathbb{R}_{G_{d}}\) is trivial and \(N^{\bullet}_{G}\) is the projective limit of invariant polynomials on \(\mathfrak{g}_{d}\). In general cases, \(N^{\bullet}_{G}\) can be regarded as the projective limit of polynomials on \(\mathfrak{g}_{d}\) which change sign under the action of \(G_{d}\). A differential stable tangential \(G\)-structure \(g\) on a manifold \(M\) defines a homomorphism ([13, Definition 4.4]), \[\operatorname{cw}_{g}\colon\Omega^{*}(M;N^{\bullet}_{G})\to\Omega^{*}(M;\operatorname {Ori}(M)), \tag{4.12}\] where \(\operatorname{Ori}(M)\) is the orientation line bundle of \(M\). An object \((M,g,f)\) in \(h\mathrm{Bord}^{G\nabla}_{n}(X)\) gives a closed \(n\)-current \[\operatorname{cw}(M,g,f)\in\Omega^{\mathrm{clo}}_{n}(X;V^{MTG}_{\bullet}) \subset\operatorname{Hom}_{\mathrm{conti}}(\Omega^{n}(X;N^{\bullet}_{G}), \mathbb{R}), \tag{4.13}\] by, for \(\omega\in\Omega^{n}(X;N^{\bullet}_{G})\), \[\operatorname{cw}(M,g,f)(\omega):=\int_{M}\operatorname{cw}_{g}(f^{*}\omega). 
\tag{4.14}\] Similarly, a bordism \((W,g_{W},f_{W})\colon(M_{-},g_{-},f_{-})\to(M_{+},g_{+},f_{+})\) of differential stable tangential \(G\)-cycles of dimension \(n\) gives an \((n+1)\)-current \[\operatorname{cw}(W,g_{W},f_{W})\in\Omega_{n+1}(X;V^{MTG}_{\bullet})\subset \operatorname{Hom}_{\mathrm{conti}}(\Omega^{n+1}(X;N^{\bullet}_{G}),\mathbb{R}), \tag{4.15}\] by, for \(\omega\in\Omega^{n+1}(X;N^{\bullet}_{G})\), \[\operatorname{cw}[W,g_{W},f_{W}](\omega):=\int_{W}\operatorname{cw}_{g_{W}}({f_ {W}}^{*}\omega), \tag{4.16}\] If we have two such bordisms \((W,g_{W},f_{W})\) and \((W^{\prime},g^{\prime}_{W},f^{\prime}_{W})\) which are bordant, the corresponding currents (4.15) differs by an image of \(\partial\). Thus for a morphism \([W,g_{W},f_{W}]\) in \(h\mathrm{Bord}^{G\nabla}_{n}(X)\) we get an element \[\operatorname{cw}[W,g_{W},f_{W}]\in\Omega_{n+1}(X;V^{MTG}_{\bullet})/ \mathrm{im}(\partial). \tag{4.17}\] **Definition 4.18** (\(\widehat{\Omega^{G}_{*}}\)).: Let \(X\) be a manifold and \(n\) be an integer. 1. We set (4.19) \[\widehat{\Omega^{G}_{n}}(X):=\{(M,g,f,\eta)\}/\sim,\] where \((M,g,f)\) is an object in \(h\mathrm{Bord}^{G\nabla}_{n}(X)\) and \(\eta\in\Omega_{n+1}(X;V^{MTG}_{\bullet})/\mathrm{im}(\partial)\). The relation \(\sim\) is the equivalence relation generated by the relation (4.20) \[(M_{-},g_{-},f_{-},\eta)\sim(M_{+},g_{+},f_{+},\eta-\operatorname{cw}[W,g_{W},f_{W}])\] for each morphism \([W,g_{W},f_{W}]\colon(M_{-},g_{-},f_{-})\to(M_{+},g_{+},f_{+})\) in \(h\mathrm{Bord}^{G\nabla}_{n}(X)\). We regard \(\widehat{\Omega^{G}}\) as a functor \(\mathrm{Mfd}\to\mathrm{Ab}^{\mathbb{Z}}\) in the obvious way. 2. We define natural transformations \(R\), \(I\) and \(a\) by \[R\colon\widehat{\Omega^{G}_{n}}(X)\to\Omega^{\mathrm{clo}}_{n}(X;V^{ MTG}_{\bullet}),\quad[M,g,f,\eta]\mapsto\operatorname{cw}(M,g,f)+\partial\eta,\] \[I\colon\widehat{\Omega^{G}_{n}}(X)\to\Omega^{G}_{n}(X),\quad[M,g,f,\eta]\mapsto[M,g,f],\] \[a\colon\Omega_{n+1}(X;V^{MTG}_{\bullet})/\mathrm{im}(\partial) \to\widehat{\Omega^{G}_{n}}(X),\quad\eta\mapsto[\varnothing,\eta].\] We can easily check that the quadruple \((\widehat{\Omega^{G}_{*}},R,I,a)\) satisfies the axiom in Definition 4.2. ## 5. The Anderson duals to differential homology ### The construction Let \(E\) be a spectrum and assume that we are given a differential extension \((\widehat{E}_{*},R_{E_{*}},I_{E_{*}},a_{E_{*}})\) of \(E\)-homology. In this section we explain that it associates a model \(IE^{*_{\rm dR}}_{\rm dR}\) of the Anderson dual cohomology theory \(IE\) and its differential extension of the pair \(\big{(}IE^{*},{\rm ch}^{\prime}\big{)}\) (Definition 2.15), where \({\rm ch}^{\prime}\) is defined below. Set \[N_{E}^{\bullet}:={\rm Hom}(\pi_{-\bullet}E,\mathbb{R}). \tag{5.1}\] By the third arrow in (3.7), we have a canonical transformation of cohomology theories \[{\rm ch}^{\prime}\colon(IE)^{*}(X)\to H^{*}(X;N_{E}^{\bullet})\simeq{\rm Hom}(E _{*}(X),\mathbb{R}), \tag{5.2}\] For example if \(E_{n}({\rm pt})\) is finitely generated for all \(n\), we have an isomorphism \(V_{IE}^{\bullet}\simeq N_{E}^{\bullet}\) and (5.2) coincides with the Chern-Dold homomorphism. We have the pairing \[\langle-,-\rangle\colon\Omega_{*}(X;V_{\bullet}^{E})\otimes\Omega^{*}(X;N_{E} ^{\bullet})\to\mathbb{R}.\] Using this we regard \(\Omega_{*}(X;V_{\bullet}^{E})\subset{\rm Hom}_{\rm conti}(\Omega^{*}(X;N_{E}^ {\bullet}),\mathbb{R})\). 
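Under this pairing, the boundary operator on currents is adjoint to the de Rham differential: \[\langle\partial\eta,\alpha\rangle=\langle\eta,d\alpha\rangle\qquad\text{for }\eta\in\Omega_{n}(X;V_{\bullet}^{E}),\ \alpha\in\Omega^{n-1}(X;N_{E}^{\bullet}).\] We record this elementary Stokes-type identity here because it is what makes the maps \(a\) appearing below, which send \(\alpha\) to a pair whose form component is \(d\alpha\), compatible with the conditions of type (5.5) and (5.11).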
**Definition 5.3** (\((\widehat{IE}_{\rm dR})^{*}\) and \(IE^{*}_{\rm dR}\) associated to \(\widehat{E}_{*}\)).: Let \(E\) be a spectrum and assume that we are given a differential extension \((\widehat{E}_{*},R_{E_{*}},I_{E_{*}},a_{E_{*}})\) of \(E\)-homology. 1. For a manifold \(X\) and integer \(n\), we set (5.4) \[(\widehat{IE}_{\rm dR})^{n}(X):=\{(\omega,h)\},\] where * \(\omega\in\Omega^{n}_{\rm clo}(X;N_{E}^{\bullet})\). * \(h\colon\widehat{E}_{n-1}(X)\to\mathbb{R}/\mathbb{Z}\) is a group homomorphism. * \(\omega\) and \(h\) fit into the commutative diagram (5.5); equivalently, \(h\circ a_{E_{*}}=({\rm mod}\,\mathbb{Z})\circ\langle-,\omega\rangle\) as maps \(\Omega_{n}(X;V_{\bullet}^{E})/{\rm im}(\partial)\to\mathbb{R}/\mathbb{Z}\). 2. We define \[a_{IE^{*}}\colon\Omega^{n-1}(X;N_{E}^{\bullet})/{\rm im}(d)\to(\widehat{IE}_{ \rm dR})^{n}(X),\quad\alpha\mapsto\left(d\alpha,({\rm mod}\,\mathbb{Z})\circ \langle R_{E_{*}}(-),\alpha\rangle\right).\] We set (5.6) \[IE^{n}_{\rm dR}(X):=(\widehat{IE}_{\rm dR})^{n}(X)/{\rm im}(a_{IE^{*}}).\] \((\widehat{IE}_{\rm dR})^{*}\) and \(IE^{*}_{\rm dR}\) are regarded as functors \({\rm Mfd}^{\rm op}\to{\rm Ab}^{\mathbb{Z}}\) in the obvious way. **Definition 5.7** (Structure maps for \((\widehat{IE}_{\rm dR})^{*}\) and \(IE^{*}_{\rm dR}\)).: We define the following maps natural in \(X\). The well-definedness is easily checked. * We denote the quotient map by \[I_{IE^{*}}\colon(\widehat{IE}_{\rm dR})^{*}(X)\to IE^{*}_{\rm dR}(X).\] * We define \[R_{IE^{*}}\colon(\widehat{IE}_{\mathrm{dR}})^{*}(X)\to\Omega^{*}_{\mathrm{clo}}(X; N^{\bullet}_{E}),\quad(\omega,h)\mapsto\omega.\] * We define \[\mathrm{ch}^{\prime}\colon IE^{n}_{\mathrm{dR}}(X)\to H^{n}(X;N^{\bullet}_{E}) \left(\simeq\mathrm{Hom}(E_{n}(X),\mathbb{R})\right),\quad I_{IE^{*}}((\omega,h ))\mapsto\mathrm{Rham}(\omega).\] * We define \[p\colon\mathrm{Hom}(E_{*-1}(X),\mathbb{R}/\mathbb{Z})\to IE^{*}_{\mathrm{dR}}( X),\quad h\mapsto I_{IE^{*}}((0,h\circ I_{E_{*}})).\] ### Examples Here we list some examples. We apply the construction in the last subsection to each of the examples of differential homology theories in Subsection 4.2. We will see that they recover the existing Cheeger-Simons type models of differential cohomology theories. #### 5.2.1. The differential ordinary cohomology in terms of differential characters by [10]. The ordinary integral cohomology is Anderson self-dual, \(H\mathbb{Z}\simeq IH\mathbb{Z}\). Thus the above construction produces a differential extension of \(H\mathbb{Z}\). Indeed, it is easy to verify that the above construction applied to \(\widehat{H}^{\mathrm{HS}}(-;\mathbb{Z})\) in Subsection 4.2.2 recovers the Cheeger-Simons model of differential ordinary cohomology in terms of differential characters. #### 5.2.2. The differential \(K\)-theory in terms of "differential characters in \(K\)-theory" by [1]. \(K\)-theory is also Anderson self-dual, \(K\simeq IK\). Thus the above construction applied to the geometric-cycle model of differential \(K\)-homology by [16] mentioned in Subsection 4.2.1 produces a differential \(K\)-theory in terms of functions which assign an \(\mathbb{R}/\mathbb{Z}\)-value to each differential \(K\)-cycle. Indeed, this recovers the "differential characters in \(K\)-theory" by Benameur and Maghfoul [1]. A typical element of \(\widehat{IK}^{0}_{\mathrm{dR}}(X)\) in this model comes from a hermitian vector bundle \((F,\nabla^{F})\) with a unitary connection over \(X\). 
Indeed, we get a pair \(\left(\mathrm{Ch}(\nabla^{F}),h_{(F,\nabla^{F})}\right)\in\widehat{IK}^{0}_{ \mathrm{dR}}(X)\), where \(h_{(F,\nabla^{F})}\colon\widehat{K}_{-1}(X)\to\mathbb{R}/\mathbb{Z}\) is defined by \[h_{(F,\nabla^{F})}[(M,\nabla^{M}),(E,\nabla^{E}),f,\psi]:=\overline{\eta} \left(D_{E\otimes f^{*}F}\right)+\langle f_{*}\psi,\mathrm{Ch}(\nabla^{F}) \rangle\pmod{\mathbb{Z}}. \tag{5.8}\] Here \(D_{E\otimes f^{*}F}\) denotes the \(\mathrm{Spin}^{c}\) Dirac operator on \(M\) twisted by \(E\otimes f^{*}F\) with the connection \(\nabla^{E}\otimes f^{*}\nabla^{F}\), and \(\overline{\eta}(D)=\frac{1}{2}(\eta(D)+\dim\ker D)\) is the reduced eta invariant. The well-definedness of the map (5.8) uses the Atiyah-Patodi-Singer (APS) index theorem. Indeed, to be compatible with the bordism relation (4.4) of differential \(K\)-cycles, we need \[\int_{W}\mathrm{Todd}(\nabla^{W})\mathrm{Ch}(\nabla^{\mathcal{E}}\otimes f^{ *}\nabla^{F})\equiv\overline{\eta}(D_{(\mathcal{E}\otimes f^{*}F)|_{\partial W }})\pmod{\mathbb{Z}}, \tag{5.9}\] which is a consequence of the APS index theorem. This element corresponds to the element \([F,\nabla^{F},0]\in\widehat{K}^{0}_{\mathrm{FL}}(X)\) in the Freed-Lott model (Subsection 2.3). An easy generalization of the above construction gives the isomorphism \(\widehat{K}^{0}_{\mathrm{FL}}(X)\simeq\widehat{IK}^{0}_{\mathrm{dR}}(X)\). #### 5.2.3. The differential Anderson dual to \(G\)-bordism theories by [15]. Applying the construction for \(\widehat{\Omega^{G}_{*}}\) in Subsection 4.2.3, we recover the following model given in [15]. Recall that we are using the abbreviation \(N^{\bullet}_{G}:=N^{\bullet}_{MTG}\) (4.11). **Definition 5.10** (\((\widehat{I\Omega^{G}_{\mathrm{dR}}})^{*}\) and \((I\Omega^{G}_{\mathrm{dR}})^{*}\), [15]).: Let \(n\) be a nonnegative integer. 1. Define \((\widehat{I\Omega^{G}_{\mathrm{dR}}})^{n}(X)\) to be an abelian group consisting of pairs \((\omega,h)\), such that 1. \(\omega\) is a closed \(n\)-form \(\omega\in\Omega^{n}_{\mathrm{clo}}(X;N^{\bullet}_{G})\). 2. \(h\) is a group homomorphism \(h\colon\mathcal{C}^{G\nabla}_{n-1}(X)\to\mathbb{R}/\mathbb{Z}\). 3. \(\omega\) and \(h\) satisfy the following compatibility condition. Assume that we are given two objects \((M_{-},g_{-},f_{-})\) and \((M_{+},g_{+},f_{+})\) in \(h\mathrm{Bord}^{G\nabla}_{n-1}(X)\) and a morphism \([W,g_{W},f_{W}]\) from the former to the latter. Then we have (5.11) \[h([M_{+},g_{+},f_{+}])-h([M_{-},g_{-},f_{-}])=\mathrm{cw}[W,g_{W},f_ {W}](\omega)\pmod{\mathbb{Z}},\] where the right hand side is defined in (4.16). The abelian group structure on \((\widehat{I\Omega^{G}_{\mathrm{dR}}})^{n}(X)\) is defined in the obvious way. 2. We define a homomorphism of abelian groups, (5.12) \[a\colon\Omega^{n-1}(X;N^{\bullet}_{G})/\mathrm{Im}(d) \to(\widehat{I\Omega^{G}_{\mathrm{dR}}})^{n}(X)\] \[\alpha \mapsto(d\alpha,\mathrm{cw}(\alpha)).\] Here the homomorphism \(\mathrm{cw}(\alpha)\colon\mathcal{C}^{G\nabla}_{n-1}(X)\to\mathbb{R}/ \mathbb{Z}\) is defined by (5.13) \[\mathrm{cw}(\alpha)([M,g,f]):=\int_{M}\mathrm{cw}_{g}(f^{*}\alpha)\pmod{ \mathbb{Z}}.\] We set \[(I\Omega^{G}_{\mathrm{dR}})^{n}(X):=(\widehat{I\Omega^{G}_{\mathrm{dR}}})^{n} (X)/\mathrm{Im}(a).\] ### The proof of \(IE\simeq IE_{\mathrm{dR}}\) The goal of the rest of the section is to prove that the functor \(IE^{*}_{\mathrm{dR}}\) is actually a model of the Anderson dual cohomology \(IE\) to \(E\). First we show that \(IE_{\mathrm{dR}}\) fits into the exact sequence for the Anderson dual. 
**Proposition 5.14**.: _For any manifold \(X\) and integer \(n\), the following sequence is exact._ \[\mathrm{Hom}(E_{n-1}(X),\mathbb{R}) \to\mathrm{Hom}(E_{n-1}(X),\mathbb{R}/\mathbb{Z})\xrightarrow{p}( IE_{\mathrm{dR}})^{n}(X)\] \[\xrightarrow{\mathrm{ch}^{\prime}}\mathrm{Hom}(E_{n}(X),\mathbb{ R})\to\mathrm{Hom}(E_{n}(X),\mathbb{R}/\mathbb{Z}). \tag{5.15}\] Proof.: The proof is analogous to that for [15, Proposition 4.25]. **Theorem 5.16**.: _There is a natural isomorphism of the functors \(\mathrm{Mfd}^{\mathrm{op}}\to\mathrm{Ab}^{\mathbb{Z}}\),_ \[F\colon IE_{\mathrm{dR}}\simeq IE,\] _which fits into the following commutative diagram._ (5.17) _Here the bottom row is the exact sequence in (3.7). Moreover, the quadruple \(((\widehat{IE}_{\mathrm{dR}})^{*},R,I,a)\) in Definitions 5.3 and 5.7 is a differential extension of the pair \(((IE)^{*},\mathrm{ch}^{\prime})\)._ Proof.: The proof is essentially the same as the corresponding claim for the case \(E=MTG\) given in [13, Theorem 4.51]. We use the model of \(IE^{*}\) in [10, Corollary B.17] in terms of functors of Picard groupoids. Given an element \((\omega,h)\in(\widehat{IE}_{\mathrm{dR}})^{n}(X)\), we get the associated functor of Picard groupoids, \[\widetilde{F}_{(\omega,h)}\colon\,\Bigl{(}\Omega_{n}(X;V_{\bullet}^{E})/ \mathrm{im}(\partial)\xrightarrow{a_{E_{*}}}\widehat{E}_{n-1}(X)\Bigr{)} \to(\mathbb{R}\xrightarrow{\mathrm{mod}\mathbb{Z}}\mathbb{R}/\mathbb{Z}) \tag{5.18}\] by applying \(h\) on objects and \(\omega\) on morphisms. This is well-defined thanks to the commutativity of the diagram (5.5). Moreover, given two elements \((\omega,h)\) and \((\omega^{\prime},h^{\prime})\), and an element \(\alpha\in\Omega^{n-1}(X;N_{E}^{\bullet})/\mathrm{Im}(d)\) so that \((\omega^{\prime},h^{\prime})-(\omega,h)=a_{IE^{*}}(\alpha)\), we get the associated natural transformation, \[\widetilde{F}_{\alpha}\colon\widetilde{F}_{(\omega,h)}\Rightarrow\widetilde{ F}_{(\omega^{\prime},h^{\prime})}, \tag{5.19}\] by \(\widetilde{F}_{\alpha}:=\langle R_{E_{*}}(-),\alpha\rangle\). Summarizing, we have defined the homomorphism which is functorial in \(X\), \[\widetilde{F}_{X}\colon IE_{\mathrm{dR}}^{n}(X)\to\pi_{0}\mathrm{Fun}_{ \mathrm{Pic}}\left(\Bigl{(}\Omega_{n}(X;V_{\bullet}^{E})/\mathrm{im}(\partial) \xrightarrow{a_{E_{*}}}\widehat{E}_{n-1}(X)\Bigr{)}\,,(\mathbb{R}\to \mathbb{R}/\mathbb{Z})\right), \tag{5.20}\] where \(\pi_{0}\mathrm{Fun}_{\mathrm{Pic}}\) denotes the group of natural isomorphism classes of functors of Picard groupoids. Now by [10, Corollary B.17] we have an isomorphism \[IE^{n}(X)\simeq\pi_{0}\mathrm{Fun}_{\mathrm{Pic}}(\pi_{\leq 1}(L(X^{+}\wedge E )_{1-n}),(\mathbb{R}\to\mathbb{R}/\mathbb{Z})). \tag{5.21}\] Here \(L\) denotes the spectrification functor and \(\pi_{\leq 1}\) denotes the fundamental Picard groupoid. We have \(\pi_{i}(L(X^{+}\wedge E)_{1-n})=E_{n-1+i}(X)\), and the exact sequence in the bottom row of (5.17) becomes the canonical one in this model (see the explanation in [13, Fact 2.6]). 
As mentioned in [10, Example B.6], the Picard groupoids \(\mathcal{C}\) coming from maps of Abelian groups are equivalent to \((\pi_{1}(\mathcal{C})\xrightarrow{0}\pi_{0}(\mathcal{C}))\), so we have an equivalence \[\Bigl{(}\Omega_{n}(X;V_{\bullet}^{E})/\mathrm{im}(\partial)\xrightarrow{a_{ E_{*}}}\widehat{E}_{n-1}(X)\Bigr{)}\simeq\Bigl{(}\ker a_{E_{*}}\xrightarrow{0} \text{coker }a_{E_{*}}\Bigr{)}\,, \tag{5.22}\] and we have canonical isomorphisms \[\ker a_{E_{*}} \simeq\mathrm{im}\left(E_{n}(X)\xrightarrow{\mathrm{ch}}H_{n}( X;V_{\bullet}^{E})\right), \tag{5.23}\] \[\text{coker }a_{E_{*}} \simeq E_{n-1}(X), \tag{5.24}\] by the exactness of (4.3). Now we construct a (natural isomorphism class of) functor of Picard groupoids \[\pi_{\leq 1}(L(X^{+}\wedge E)_{1-n})\to\left(\ker a_{E_{*}}\xrightarrow{0}\text{ coker }a_{E_{*}}\right) \tag{5.25}\] as follows. Let \(S\mathbb{R}/\mathbb{Z}\) denote the Moore spectrum for \(\mathbb{R}/\mathbb{Z}\), so that we have \(\pi_{i}(X^{+}\wedge E\wedge S\mathbb{R}/\mathbb{Z})=(E_{\mathbb{R}/\mathbb{Z} })_{i}(X)\). Let \((X^{+}\wedge E\wedge S\mathbb{R}/\mathbb{Z})\langle n\rangle\to X^{+}\wedge E \wedge S\mathbb{R}/\mathbb{Z}\) denote the \(n\)-connected cover. Composing it with the map \(\Sigma^{-1}(X^{+}\wedge E\wedge S\mathbb{R}/\mathbb{Z})\to X^{+}\wedge E\) coming from the extension of coefficient group \(0\to\mathbb{Z}\to\mathbb{R}\to\mathbb{R}/\mathbb{Z}\to 0\), we get a morphism of spectra, \[\Sigma^{-1}\left((X^{+}\wedge E\wedge S\mathbb{R}/\mathbb{Z})\langle n\rangle \right)\to X^{+}\wedge E. \tag{5.26}\] Let \(J\) be the homotopy cofiber of the map (5.26). We have \(\pi_{n}(J)\simeq H_{n}(X;V_{\bullet}^{E})\) and \(\pi_{n-1}(J)\simeq E_{n-1}(X)\). Since \(\pi_{n}(J)\) is torsion-free, the \(k\)-invariant for the Picard groupoid \(\pi_{\leq 1}(LJ_{1-n})\) vanishes. Thus we have \[\pi_{\leq 1}(LJ_{1-n})\simeq\left(H_{n}(X;V_{\bullet}^{E})\xrightarrow{0}E_{n -1}(X)\right). \tag{5.27}\] Compose this equivalence with the functor of fundamental Picard groupoids associated to the map \(X^{+}\wedge E\to J\). Then the image is contained in the subgroupoid \(\left(\ker a_{E_{*}}\xrightarrow{0}\text{coker }a_{E_{*}}\right)\). So we get the desired functor (5.25). By construction, the functor (5.25) induces the identity on \(\pi_{0}\) and \(\mathrm{ch}\) on \(\pi_{1}\). By composing (5.20) with the pre-composition of the functor (5.25) and the equivalence (5.22), we get the desired functor \(F\) as \[F_{X}\colon IE_{\text{dR}}^{n}(X)\to\pi_{0}\text{Fun}_{\text{Pic}}\left(\pi_{ \leq 1}(L(X^{+}\wedge E)_{1-n}),(\mathbb{R}\to\mathbb{R}/\mathbb{Z})\right) \xrightarrow{(5.21)}IE^{n}(X). \tag{5.28}\] The commutativity of (5.17) is obvious. Applying that diagram to any manifold \(X\), we know that the top row is exact by Proposition 5.14, and the bottom row is also exact. Thus by the five lemma we see that \(F\) is a natural isomorphism. The last statement is easily checked. This completes the proof of Theorem 5.16. ## 6. The interpretation of \(\widehat{IE}_{\text{dR}}\) in terms of QFTs In this section we explain the interpretation of our model \(\widehat{IE}_{\text{dR}}\) in terms of QFTs. As explained in Subsection 3.2, the Anderson dual to the \(G\)-bordism theory is supposed to classify deformation classes of possibly non-topological invertible QFTs on \(G\)-manifolds. 
Indeed, the \(\mathbb{R}/\mathbb{Z}\)-valued function \(h\) of an element \((h,\omega)\in(\widehat{I\Omega_{\text{dR}}^{G}})^{n+1}(X)\) can be regarded as the complex phase of the partition functions of an invertible QFTs for manifolds equipped with differential stable \(G\)-structures and smooth maps to \(X\). The forgetful map \(I\colon(\widehat{I\Omega_{\text{dR}}^{G}})^{n+1}(X)\to(I\Omega_{\text{dR}}^{G })^{n+1}(X)\) is regarded as taking the deformation classes of such invertible QFTs. This interpretation is based on the following empirical facts known for physically meaningful5 invertible QFTs. Generally in a non-topological \(n\)-dimensional QFT \(\mathcal{T}\), the partition function \(Z_{\mathcal{T}}(M^{n},\mathfrak{s})\in\mathbb{C}\) for closed \(n\)-dimensional \(\mathcal{S}\)-manifold varies smoothly according to the smooth variation of the input \((M^{n},\mathfrak{s})\). It is empirically known that, if \(\mathcal{T}\) is _invertible_, the variation of the complex phase of \(Z_{\mathcal{T}}\) can be measured by an integration of some characteristic form, i.e., there exists a \((d+1)\)-dimensional characteristic polynomial \(\omega_{\mathcal{T}}\) such that, for each bordism \((W^{d+1},\mathfrak{s})\colon(M_{-},\mathfrak{s}_{-})\to(M_{+},\mathfrak{s}_{+})\) we have Footnote 5: Note that the physically meaningfulness includes conditions such as reflection positivity, locality and Wick-rotated unitarity. \[\arg\left(\frac{Z_{\mathcal{T}}(M^{d}_{+},\mathfrak{s}_{+})}{Z_{\mathcal{T}}(M ^{d}_{-},\mathfrak{s}_{-})}\right)=\int_{W}\operatorname{cw}(\omega_{\mathcal{ T}})(W^{d+1},\mathfrak{s})\pmod{\mathbb{Z}}, \tag{6.1}\] where the left hand side is well-defined since the partition function is nonzero in the invertible case. Note that a priori there is nothing to assign for \((d+1)\)-dimensional bordism, since \(\mathcal{T}\) is \(d\)-dimensional theory. Moreover, it is also empirically known that the effect of smooth deformation of invertible theories can also be measured by local integrations. A smooth deformation \(\mathcal{H}\) from \(\mathcal{T}_{0}\) to \(\mathcal{T}_{1}\) provides us a \(d\)-dimensional characteristic form \(\alpha_{\mathcal{H}}\) so that \[\arg\left(\frac{Z_{\mathcal{T}_{1}}(M^{d},\mathfrak{s})}{Z_{\mathcal{T}_{0}}(M ^{d},\mathfrak{s})}\right)=\int_{M}\operatorname{cw}(\alpha_{\mathcal{H}})(M ^{d},\mathfrak{s})\pmod{\mathbb{Z}}. \tag{6.2}\] These empirical facts explain our interpretation of our model. Indeed, recall that \(\Omega^{*}(X;N^{\bullet}_{G})\) is the differential forms on \(X\) with coefficient in invariant polynomials on \(\mathfrak{g}\), which can be regarded as characteristic polynomials for the structure \(\mathcal{S}\) given by differential stable tangential \(G\)-structures and maps to \(X\). The equation (6.1) corresponds to the compatibility condition (5.5) for \((\arg Z_{\mathcal{T}},\omega_{\mathcal{T}})\), and the equation (6.2) says that the deformation corresponds to the addition by \(a(\alpha_{\mathcal{H}})\) in (5.12). Finally we comment on the possibility of interpreting the general construction of differential Anderson dual to differential homology \(\widehat{E}_{*}\) also in terms of a kind of invertible QFTs. The idea is the following. Differential homology \(\widehat{E}_{*}\) typically comes from a Picard groupoid \(\mathcal{C}_{\nabla}\) with differential data. 
Examples include \(h\mathrm{Bord}_{n}^{G\nabla}(X)\), the groupoid of differential \(K\)-cycles (which has an interpretation in terms of _D-branes_), and the groupoid of differential ordinary cycles. On morphisms of these categories, we can integrate differential forms. The differential Anderson dual \((\widehat{IE}_{\mathrm{dR}})^{*}(X)\) classifies functors \[(\omega,h)\colon\mathcal{C}_{\nabla}(X)\to(\mathbb{R}\to\mathbb{R}/\mathbb{Z}) \tag{6.3}\] which reflect the differential information. The map \(h\colon\mathrm{Obj}(\mathcal{C}_{\nabla})\to\mathbb{R}/\mathbb{Z}\) may be regarded as the phase of the partition function of some invertible QFT whose domain extends \(\mathcal{C}_{\nabla}\).
2310.05250
Simplifying GNN Performance with Low Rank Kernel Models
We revisit recent spectral GNN approaches to semi-supervised node classification (SSNC). We posit that many of the current GNN architectures may be over-engineered. Instead, simpler, traditional methods from nonparametric estimation, applied in the spectral domain, could replace many deep-learning inspired GNN designs. These conventional techniques appear to be well suited for a variety of graph types reaching state-of-the-art performance on many of the common SSNC benchmarks. Additionally, we show that recent performance improvements in GNN approaches may be partially attributed to shifts in evaluation conventions. Lastly, an ablative study is conducted on the various hyperparameters associated with GNN spectral filtering techniques. Code available at: https://github.com/lucianoAvinas/lowrank-gnn-kernels
Luciano Vinas, Arash A. Amini
2023-10-08T17:56:30Z
http://arxiv.org/abs/2310.05250v1
# Simplifying GNN Performance with Low Rank Kernel Models ###### Abstract We revisit recent spectral GNN approaches to semi-supervised node classification (SSNC). We posit that many of the current GNN architectures may be over-engineered. Instead, simpler, traditional methods from nonparametric estimation, applied in the spectral domain, could replace many deep-learning inspired GNN designs. These conventional techniques appear to be well suited for a variety of graph types reaching state-of-the-art performance on many of the common SSNC benchmarks. Additionally, we show that recent performance improvements in GNN approaches may be partially attributed to shifts in evaluation conventions. Lastly, an ablative study is conducted on the various hyperparameters associated with GNN spectral filtering techniques. Code available at: [https://github.com/lucianoAvinas/lowrank-gnn-kernels](https://github.com/lucianoAvinas/lowrank-gnn-kernels) ## 1 Introduction The problem of semi-supervised node classification (SSNC) (Seeger, 2002; Belkin et al., 2006) has been a focal point in graph-based semi-supervised learning. Modern approaches to node classification on graphs make use of complex Graph Neural Networks (GNNs) (Scarselli et al., 2009) for prediction. These networks are trained to predict node labels, drawing on both the individual features of nodes and the broader network structure. From a statistical standpoint, SSNC represents a compelling regression or classification problem that incorporates network information. The fundamental premise of SSNC is that the network structure (\(\mathbf{A}\)) allows us to borrow information from the neighbors of nodes for which we lack a response. This borrowing can enhance the prediction of the unobserved responses beyond what could be achieved with a traditional regression of \(y_{i}\) on \(\mathbf{x}_{i}\). Recently, there has been a wide breadth of literature (Velickovic et al., 2018; Chien et al., 2021; Luan et al., 2022) which attempt to leverage network structure using GNNs. This recent flurry of activity has led to the proposal of many competing, often complex, architectures to solve the SSNC problem. In this paper, we review top-of-the-leaderboard, benchmarking practices and confirm whether or not this "zoo" of models is necessary to achieve SOTA-like results. Recent studies by Maurya et al. (2022) and Wang and Zhang (2022) have suggested that simple spectral approaches may be sufficient to achieve SOTA performance for semi-surpervised graph classification. Using standard techniques from functional estimation, we simultaneously simplify and generalize previous spectral approaches to SSNC while maintaining or exceeding previous performance benchmarks. In particular, we are able to achieve improvements of +5% and +20% compared to other spectral methods on directed networks such as Chameleon and Squirrel (Rozemberczki et al., 2021). Our contributions are as follows: * Highlight spectral reshaping and modeling techniques which generalize previous spectral filtering approaches. * Outline common evaluation practices which have an outsized effect on model performance. * Simplify modeling hyperparameters (e.g. dropout probabilities, model depth, parameter-specific optimizers) while retaining SOTA or near-SOTA performance. By standardizing evaluation practices and simplifying modeling considerations, we aim to disambiguate performance in the GNN model-space and believe our results will lead to more interpretable models and heuristics for future SSNC problems. 
## 2 GNN and SSNC Formalism Consider a network represented by an adjacency matrix \(A\) on \(n\) nodes. Each node \(i\) in the network is associated with a feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and a response \(y_{i}\in\mathbb{R}\). The collection of \(n\) feature vectors will be succinctly expressed as \(\mathbf{X}=[\mathbf{x}_{1},\dots,\mathbf{x}_{n}]^{T}\in\mathbb{R}^{n\times d}\). In the case of SSNC, all node features \(\mathbf{X}\), the observed network \(\mathbf{A}\), and a subset of the responses \((y_{i})_{i\in S}\) are known. The goal of SSNC will be to correctly predict unobserved responses \((y_{i})_{i\in S^{c}}\) given the previously stated knowns. A mainstay of all GNN architectures is the feature propagation structure \(\mathbf{P}\in\mathbb{R}^{n\times n}\). Common choices of \(\mathbf{P}\) include the adjacency matrix \(\mathbf{A}\) and its transformed variants, e.g. normalized Laplacian. These propagation structures need not be static. Indeed there are popular GNN architectures (Velickovic et al., 2018) which introduce layer-dependent interactions between a base propagation \(\mathbf{P}^{0}\) and intermediate features \(\mathbf{Z}\in\mathbb{R}^{n\times d^{\prime}}\). If we abstract away the aggregation specifics of propagations \(\{\mathbf{P}^{\ell}\}_{\ell}\), then intermediate representations of most GNNs can be recursively expressed as \[\mathbf{Z}^{\ell+1}=\phi\odot(\mathbf{P}^{\ell}\mathbf{Z}^{\ell}\mathbf{W}^{\ell})\qquad\text{ for layers }\ell=1,\dots,L, \tag{1}\] where \(\mathbf{W}^{\ell}\in\mathbb{R}^{d_{\ell}\times d_{\ell+1}}\) are weight matrices and \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) is a scalar function which is to be applied element-wise. In the case of a \(C\)-class classification, it is common to extract row-wise "argmax"s of the final features \(\mathbf{Z}^{L}\in\mathbb{R}^{n\times C}\) using differentiable argmax surrogates such as \(\operatorname{softmax}\). Our studies will consider the simplest variant of GNN: a one layer, linear GNN, that is \(\phi=\operatorname{id}\), where special attention is paid to the propagation structure \(\mathbf{P}\). We will consider fixed and learnable propagation structures derived from variants of the adjacency matrix \(\mathbf{A}\). Throughout, we will make use of spectral and singular value decompositions (SVD) where, in the case of SVD, \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\) with \(\mathbf{\Sigma}=\operatorname{diag}(\sigma_{i})\) and \(\sigma_{1}\geq\dots\geq\sigma_{n}\) are the singular values of \(\mathbf{A}\). In our analysis, we will consider combinations of low-rank \[\mathbf{A}^{(r)}=\mathbf{U}_{:r}\mathbf{\Sigma}_{:r}(\mathbf{V}_{:r})^{T}\] and kernelized \[\mathbf{P}^{(\mathcal{K})}=\mathbf{U}(\operatorname{diag}(\mathbf{K}\mathbf{\alpha}))\mathbf{V}^{T}\] representations of the network \(\mathbf{A}\). In the kernelized case, \(\mathbf{\alpha}\in\mathbb{R}^{n}\) is a trainable free parameter and \(K_{i,j}=\mathcal{K}(\sigma_{i},\sigma_{j})\) is a kernel matrix formed by applying a kernel function \(\mathcal{K}:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) to the singular values of \(A\). The idea is that we can achieve a reshaping of the spectrum \(h(\sigma_{i})\) by a general function \(h\) through an appropriate choice of the kernel function and \(\mathbf{\alpha}\) such that \(h(\sigma_{i})=(\mathbf{K}\mathbf{\alpha})_{i}\). 
This is motivated by the so-called representer theorem (Scholkopf et al., 2001) which holds valid if \(h\) belongs to reproducing kernel Hilbert space of continuous functions \(\mathbb{H}\). ### Motivating Spectral Methods and Learnable Propagations Implicit in all graph learning problems, is the assumption that the nodal features \(\mathbf{X}\) are only partially informative towards learning the response \(\mathbf{y}\). Regression on the full set of observations \((\mathbf{X},\mathbf{A})\) is expected to lead to better response outcomes, but without knowledge of the underlying graph generation process it becomes difficult to determine how observation \(\mathbf{A}\) should be included in our modeling. Nevertheless, there are some broad strokes we can make when talking about generation of network data. Consider the following, similar lenses from which we can view network data generation: * The feature-network pair \((\mathbf{X},\mathbf{A})\) share a dependent structure. That is, we may assume there is a correlation between pairs of features \((\mathbf{x}_{i},\mathbf{x}_{j})\) and the appearance of corresponding edges \(A_{i,j}\). * The spectral representation of \(\mathbf{A}\) can be used to form meaningful partitions on the set of nodes \([n]\). These partitions may vary depending on the graph learning task at hand. The first view is natural and leads to considering propagation schemes between features \(\mathbf{X}\) and a propagation \(\mathbf{P}\) formed from polynomial and algebraic combinations of the the adjacency matrix \(\mathbf{A}\). The second view point is primarily motivated by analysis done in community detection, where class clusters of certain processes like the stochastic block model (SBM) (Holland et al., 1983) and the random dot product graphs (RDPG) Young and Scheinerman (2007) can be determined using the spectral information of the observed graph. This second view point leads to the consideration of _graph Fourier_ methods (Shuman et al., 2013; Ricaud et al., 2019) using the spectral data found in \(\mathbf{A}\). Of the two approaches, the graph Fourier methods are more general and will be our focus when constructing learnable propagation stuctures \(\mathbf{P}\). In graph Fourier methods, considering the undirected case for simplicity, a learnable filter \(h:\mathbb{R}\rightarrow\mathbb{R}\) is applied to the eigenvalue-eigenvector pairs \((\lambda_{i},\mathbf{u}_{i})\) of \(\mathbf{A}\) to construct the propagation operator \[\mathbf{P}=\sum_{i=1}^{n}h(\lambda_{i})\mathbf{u}_{i}\mathbf{u}_{i}^{T}. \tag{2}\] A sufficiently general and practical family of functions to estimate \(h\) from could be a reproducing kernel Hilbert space (RKHS) of continuous functions \(\mathbb{H}\) with associated kernel function \(\mathcal{K}\). The choices of kernel \(\mathcal{K}\) is flexible and determines the kind of regularity we wish to impose on graph the spectral domain. Important to note however, is that our point evaluations \(h(\lambda_{i})=(\mathbf{K}\mathbf{\alpha})_{i}\) are dependent on a kernel matrix \(K\) which is created using noisy observations \((\lambda_{i})_{i}\). For this reason we will also consider truncations \(r\leq n\) of the form \[\mathbf{P}^{(r,\mathcal{K})}=\sum_{i=1}^{r}(\mathbf{K}\mathbf{\alpha})_{i}\mathbf{u}_{i}\mathbf{u }_{i}^{T}, \tag{3}\] with eigenvalue-eigenvector pairs \((\lambda_{i},\mathbf{u}_{i})\) being ordered according to their eigenvalue magnitude in decreasing order. 
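To make equation (3) concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper's repository) of the truncated, kernel-reshaped propagation matrix \(\mathbf{P}^{(r,\mathcal{K})}\) for an undirected graph; the Gaussian RBF kernel and the random toy graph are assumptions made only for this example.

```python
import numpy as np

def rbf_kernel(lams, gamma=1.0):
    # K[i, j] = exp(-gamma * (lam_i - lam_j)^2), a kernel on the spectrum
    diff = lams[:, None] - lams[None, :]
    return np.exp(-gamma * diff ** 2)

def lowrank_kernel_propagation(A, r, alpha, kernel=rbf_kernel):
    """P^(r,K) = sum_{i=1}^r (K alpha)_i u_i u_i^T for a symmetric matrix A."""
    lams, U = np.linalg.eigh(A)                 # spectral decomposition of the graph matrix
    order = np.argsort(-np.abs(lams))           # order by eigenvalue magnitude, decreasing
    lams, U = lams[order[:r]], U[:, order[:r]]
    h = kernel(lams) @ alpha[:r]                # reshaped spectrum h(lam_i) = (K alpha)_i
    return (U * h) @ U.T                        # sum_i h_i u_i u_i^T

# toy usage with a random symmetric adjacency matrix
n = 20
A = (np.random.rand(n, n) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
alpha = np.random.randn(n)                      # alpha is the trainable parameter in the model
P = lowrank_kernel_propagation(A, r=8, alpha=alpha)
print(P.shape)                                  # (20, 20)
```

In the directed case the same construction applies with the SVD \(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\) in place of the eigendecomposition, yielding \(\mathbf{U}(\mathrm{diag}(\mathbf{K}\boldsymbol{\alpha}))\mathbf{V}^{T}\) as above.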
## 3 Experiments Our modeling efforts will be specific to the propagation structure \(\mathbf{P}\) with no modifications made on the original features \(\mathbf{X}\) or linear weights \(\mathbf{W}\). In our experiments we do not consider any model augmentations such as dropout (Srivastava et al., 2014), batchnorm (Ioffe and Szegedy, 2015), or per-parameter optimizers (i.e. different learning rates for different layers). The design of \(\mathbf{P}\) will have the following degrees of freedom: * **Matrix representation of network**. We will consider the adjacency \(\mathbf{A}\) and Laplacian \(\mathbf{D}-\mathbf{A}\) and their normalized variants. Here \(\mathbf{D}\) is the diagonal matrix of column-wise sums of \(\mathbf{A}\). * **Spectral truncation factor**. Given a truncation factor \(r\), the spectral system \((\mathbf{U},\mathbf{\Lambda})\), resp. \((\mathbf{U},\mathbf{\Sigma},\mathbf{V}^{T})\), will be reduced to \((\mathbf{U}_{:r},\mathbf{\Lambda}_{:r})\), resp. \((\mathbf{U}_{:r},\mathbf{\Sigma}_{:r},(\mathbf{V}_{:r})^{T})\), where the eigenvectors associated with the bottom \(n-r\) eigenvalue magnitudes are dropped. In our experiments, truncation factors from 0 to 95% in 5% intervals will be considered. * **Choice of kernel**. Some kernels we will consider are the identity (\(\mathbf{1}\{i=j\}\)), linear (\(\lambda_{i}\lambda_{j}\)), compact Sobolev (\(\min(\lambda_{i},\lambda_{j})\)), unbounded Sobolev (\(\exp(-\gamma|\lambda_{i}-\lambda_{j}|)\)), and Gaussian radial basis function (RBF) (\(\exp(-\gamma|\lambda_{i}-\lambda_{j}|^{2})\)) kernels. Note that the identity kernel does not generate a continuous RKHS. In the case of the last two kernels, the bandwidth parameter \(\gamma\in\mathbb{R}_{+}\) will be selected using a validation process. Our methods are evaluated against common SSNC benchmark datasets, summarized in Table 1. More information on the benchmarks can be found in Pei et al. (2020). All values are recorded using the _balanced splits_ defined in Chien et al. (2021). Section 5 further provides a comprehensive analysis on the impact of the splitting conventions. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Dataset** & Cora & Citeseer & Pubmed & Chameleon & Squirrel & Actor & Cornell & Texas & Wisconsin \\ \hline Nodes & 2708 & 3327 & 19717 & 2277 & 5201 & 7600 & 183 & 183 & 251 \\ Edges & 5429 & 4732 & 44338 & 36101 & 217073 & 33544 & 295 & 309 & 499 \\ Features & 1433 & 3703 & 500 & 2325 & 2089 & 931 & 1703 & 1703 & 1703 \\ Classes & 7 & 6 & 3 & 5 & 5 & 5 & 5 & 5 & 5 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary statistics on benchmark networks, provided by Pei et al. (2020). The following linear and spectral models will be considered for evaluation: Linear (\(\mathbf{XW}\)), Propagated Linear (\(\mathbf{PXW}\)), Kernel (\(\mathbf{P}^{(\mathcal{K})}\mathbf{XW}\)), and LR Kernel (\(\mathbf{P}^{(r,\mathcal{K})}\mathbf{XW}\)). Similar to the model hyperparameters, the learning rate and weight decay of the optimizer, Adam (Kingma and Ba, 2015), will be determined using the mean accuracies of the validation split of each dataset. For completeness, we have also implemented a non-linear baseline which learns using only the feature information \(\mathbf{X}\). This model is a simple two-layer ReLU multi-layer perceptron (MLP2) with hidden layer size determined through validation. Our models and their results compared to other current SOTA methods can be found in Table 2. 
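As a companion to the kernel list above, the candidate kernels can be written down explicitly; this is an illustrative sketch of ours (not the paper's reference implementation), using the sign convention \(\exp(-\gamma\,\cdot)\) so that the last two kernels are positive definite.

```python
import numpy as np

def spectral_kernels(lams, gamma=1.0):
    """The five candidate kernel matrices evaluated on a spectrum `lams`."""
    d = lams[:, None] - lams[None, :]
    return {
        "identity":        np.eye(len(lams)),             # 1{i == j}
        "linear":          np.outer(lams, lams),          # lam_i * lam_j
        "sobolev_compact": np.minimum.outer(lams, lams),  # min(lam_i, lam_j)
        "sobolev_unbdd":   np.exp(-gamma * np.abs(d)),    # exp(-gamma |lam_i - lam_j|)
        "gaussian_rbf":    np.exp(-gamma * d ** 2),       # exp(-gamma (lam_i - lam_j)^2)
    }

# e.g. the RBF kernel matrix on a toy spectrum
K = spectral_kernels(np.linspace(-1.0, 1.0, 5))["gaussian_rbf"]
```

Any of these matrices plays the role of \(\mathbf{K}\) in the reshaping \(\mathrm{diag}(\mathbf{K}\boldsymbol{\alpha})\) of Section 2.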
We note that, for most of the large graph benchmarks, our models perform within uncertainty or better compared to SOTA. In particular for directed graphs like Chameleon and Squirrel, we see gains in accuracy as high as 5% and 20% over other SOTA methods. A point of emphasis here is the relative simplicity of our models compared to the performance they attain. The absence of any post-model augmentations distinguishes our approach from other competing SOTA spectral methods (Wang and Zhang, 2022). A point of difficulty where a performance gap persists, is where the node response \(\mathbf{y}\) is overwhelming described by its nodal information \(\mathbf{X}\). Graphs with this property (Actor, Cornell, Texas, and Wisconsin) can be identified by the negative performance gap between Linear and Propagated Linear as well as the SOTA-like performance of MLP2. Note that, even without using any graph information, MLP2 is able to achieve SOTA within uncertainty on almost all of the \(X\)-dominated, network datasets. Furthermore, in the cases of Cornell, Texas, and Wisconsin, we may be running into a sample size issues, as these dataset sare only 1/10 the size (a few hundred nodes) of the other benchmarks, and the networks are extremely sparse. Future work should explore how to make these simple kernel methods, no worse than a linear model in the worst case. The introduction of an extra _regularization_ parameter \(\beta\) of the form \[\mathbf{P}^{\prime}=\mathbf{P}+\beta\mathbf{I} \tag{4}\] may help here at the cost of minor complexity overhead. So far, preliminary implementations of equation 4 have \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{Directed Networks} \\ \hline & \multicolumn{2}{c}{Chameleon} & \multicolumn{2}{c}{Squirrel} & \multicolumn{2}{c}{Actor} \\ \hline MLP2 & \(48.5\pm\) & \(2.6\) & \(34.8\pm\) & \(1.4\) & \(40.3\pm\) & \(2.3\) \\ Linear & \(48.1\pm\) & \(3.2\) & \(34.9\pm\) & \(1.4\) & \(38.9\pm\) & \(1.2\) \\ Prop. Linear & \(79.0\pm\) & \(1.4\) & \(78.0\pm\) & \(1.1\) & \(32.4\pm\) & \(1.3\) \\ Kernel & \(78.7\pm\) & \(1.1\) & \(76.0\pm\) & \(1.2\) & \(32.2\pm\) & \(1.8\) \\ LR Kernel & \(79.4\pm\) & \(1.4\) & \(76.8\pm\) & \(1.3\) & \(32.3\pm\) & \(1.7\) \\ \hline GPRNN\({}^{*}\) & \(67.5\pm\) & \(0.4\) & \(49.9\pm\) & \(0.5\) & \(39.3\pm\) & \(0.3\) \\ JacobiConv\({}^{*}\) & \(74.2\pm\) & \(1.0\) & \(55.8\pm\) & \(0.6\) & \(40.7\pm\) & \(1.0\) \\ ACMII-GCN & \(68.4\pm\) & \(1.4\) & \(54.5\pm\) & \(2.1\) & \(41.8\pm\) & \(1.2\) \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Undirected Networks} \\ \hline & \multicolumn{2}{c}{Cora} & \multicolumn{2}{c}{CiteSeer} & \multicolumn{2}{c}{PubMed} & \multicolumn{2}{c}{Cornell} & \multicolumn{2}{c}{Texas} & \multicolumn{2}{c}{Wisconsin} \\ \hline MLP2 & \(77.8\pm\) & \(1.6\) & \(77.2\pm\) & \(1.1\) & \(88.2\pm\) & \(0.5\) & \(86.1\pm\) & \(3.0\) & \(91.7\pm\) & \(4.4\) & \(95.0\pm\) & \(2.6\) \\ Linear & \(78.9\pm\) & \(2.0\) & \(76.2\pm\) & \(1.2\) & \(85.8\pm\) & \(0.4\) & \(84.9\pm\) & \(5.6\) & \(89.7\pm\) & \(3.8\) & \(95.0\pm\) & \(3.8\) \\ Prop. 
Linear & \(84.0\pm\) & \(2.0\) & \(73.9\pm\) & \(1.4\) & \(82.6\pm\) & \(0.5\) & \(67.8\pm\) & \(8.7\) & \(86.8\pm\) & \(3.5\) & \(83.8\pm\) & \(3.2\) \\ Kernel & \(88.6\pm\) & \(1.0\) & \(81.1\pm\) & \(1.0\) & \(89.4\pm\) & \(0.8\) & \(83.3\pm\) & \(5.9\) & \(88.2\pm\) & \(2.6\) & \(92.1\pm\) & \(3.4\) \\ LR Kernel & — & — & — & — & — & — & — & — & — \\ \hline GPRGNN\({}^{*}\) & \(79.5\pm\) & \(0.4\) & \(67.6\pm\) & \(0.4\) & \(85.1\pm\) & \(0.1\) & \(91.4\pm\) & \(0.7\) & \(92.9\pm\) & \(0.6\) & **NA** \\ JacobiConv\({}^{*}\) & \(89.0\pm\) & \(0.5\) & \(80.8\pm\) & \(0.8\) & \(89.6\pm\) & \(0.4\) & \(92.3\pm\) & \(2.8\) & \(92.8\pm\) & \(2.0\) & **NA** \\ ACMII-GCN & \(89.0\pm\) & \(0.7\) & \(81.8\pm\) & \(1.0\) & \(90.7\pm\) & \(0.5\) & \(95.9\pm\) & \(1.8\) & \(95.1\pm\) & \(2.0\) & \(96.6\pm\) & \(2.4\) \\ \hline \hline \end{tabular} \end{table} Table 2: Performance: Mean test accuracy \(\pm\) std. dev. over 10 data splits. Models include our own variations of “Linear” and “Propagated Linear” GNNs, along with other state-of-the-art (SOTA) GNNs. Dashed entry in for LR Kernel signifies validated choice is the same as the full-rank Kernel. Performance is comparable between our simple GNNs and SOTA in some cases. Results for GPRGNN, JacobiConv and ACMII-GCN are cited from Chien et al. (2021), Wang and Zhang (2022), and Luan et al. (2022) respectively. Entries marked with ‘\(*\)’ report 95% confidence intervals. not shown to be any more competitive than standard kernel approaches. It could be that a more complicated regularizing form is needed to balance the propagation and identity terms \(\mathbf{P}\) and \(\mathbf{I}\). ## 4 Spectral Kernel Ablation We next conduct an ablation study on the three degrees of freedom (kernel choice, matrix representation, and truncation factor) in constructing the propagation matrix \(P\). Optimal choice of the kernel and other hyperparameters seem specific to particular datasets themselves. Although out-of-scope for the paper, one may consider contrasting the best and worst performing hyperparameters to gain insight into the underlying generative processes of these benchmark datasets. For a first study, we consider ablating the kernel choice. Results of the ablation are shown in Table 3 for the full-rank Kernel model, where a complicated dependence can be seen between the kernel choice and the accuracy of the estimated response. Although some results are within uncertainty, the dependence between kernel regularity and SSNC performance is not immediately clear. For Chameleon and Squirrel datasets, we see that the wrong kernel may lead to performance degradations up to \(\sim\)30%. This is a problem which is partially alleviated by the LR Kernel model, where the option to reduce the kernel rank homogenizes some of the model performance. Figure 1 illustrates this homogenization effect. We stress however that this solution is partial, as the same order of homogenization is not observed for the undirected datasets. Identifying the relevant graph statistics which describe this homogenization discrepancy is something which is left to future work. The next relevant hyperparameter is the matrix representation of the network. This choice can have an outsized impact on performance as the eigenvectors, otherwise known as the modes of the graph, are fixed once a representation is chosen. Similar to the kernel choice, it is not immediately clear when one matrix representation will outperform the other. 
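For reference, the matrix representations compared in this ablation can be formed as follows; this is a small sketch of ours, where the symmetric normalization \(\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) is an assumed convention and isolated nodes are handled by zeroing the inverse degrees.

```python
import numpy as np

def graph_representations(A):
    """Adjacency, Laplacian, and symmetrically normalized variants of a graph matrix A."""
    deg = A.sum(axis=0)                              # column-wise degree sums, as in Section 3
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    A_norm = (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]
    return {
        "adjacency":            A,
        "laplacian":            np.diag(deg) - A,
        "adjacency_normalized": A_norm,
        "laplacian_normalized": np.eye(A.shape[0]) - A_norm,
    }
```

Each representation is then decomposed, truncated, and kernel-reshaped exactly as in the earlier sketch before being used as \(\mathbf{P}\).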
Figure 2 shows the impact on performance is variable across both directed and undirected datasets. Lastly, we carry out an ablation relative to the spectral truncation factor \(r\). Larger spectral truncations have the benefit of accelerating model execution at the potential cost of performance. Figure 3 demonstrates how performance degrades gradually with the truncation factor. The rate at which performance degrades seems to be dependent on the dataset, but most benchmarks retain \(\sim\)90% performance even after a 50% spectral trunctation. In special cases like Squirrel and Chameleon, performance is even seen to increase at larger truncation values. Figure 1: Performance homogenization achieved by LR Kernel model on directed networks. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} Kernel & Cora & CiteSeer & PubMed & Chameleon & Squirrel & Actor & Cornell & Texas & Wisconsin \\ \hline Identity & \(78.8\pm 2.7\) & \(72.6\pm 2.0\) & \(81.6\pm 0.9\) & \(69.7\pm 2.7\) & \(44.9\pm 2.9\) & \(28.6\pm 3.0\) & \(60.4\pm 8.1\) & \(76.2\pm 4.3\) & \(71.6\pm 5.7\) \\ Sob. Cmpct. & \(75.1\pm 1.9\) & \(73.0\pm 1.4\) & \(88.5\pm 0.4\) & \(41.4\pm 2.2\) & \(33.2\pm 1.1\) & \(\mathbf{32.2\pm 1.8}\) & \(\mathbf{83.3\pm 5.9}\) & \(88.6\pm 4.0\) & \(\mathbf{92.1\pm 3.4}\) \\ Linear & \(81.1\pm 2.0\) & \(72.1\pm 1.8\) & \(82.3\pm 1.0\) & \(\mathbf{78.7\pm 1.2}\) & \(\mathbf{76.0\pm 1.2}\) & \(31.6\pm 0.9\) & \(66.5\pm 6.1\) & \(77.2\pm 8.0\) & \(81.3\pm 4.8\) \\ Sob. Uubnd. & \(88.8\pm 0.8\) & \(\mathbf{81.1\pm 1.0}\) & \(89.2\pm 2.0\) & \(54.5\pm 6.4\) & \(68.8\pm 8.2\) & \(30.7\pm 1.0\) & \(80.6\pm 6.4\) & \(\mathbf{88.2\pm 2.6}\) & \(90.4\pm 5.6\) \\ Gauss. RBF & \(\mathbf{88.6\pm 1.0}\) & \(80.3\pm 1.9\) & \(\mathbf{89.4\pm 0.8}\) & \(60.4\pm 8.4\) & \(71.3\pm 4.4\) & \(30.4\pm 1.3\) & \(79.4\pm 5.3\) & \(84.0\pm 4.5\) & \(85.8\pm 4.7\) \\ \hline \end{tabular} \end{table} Table 3: Impact of the kernel choice on the performance of the full-rank Kernel model. Bold entries correspond to the model selected by validation. ## 5 Changes in Evaluation Conventions As interest in the SSNC learning task increased (Seeger, 2002; Zhu et al., 2003), so did the number of publicly available, real-world network datasets. These datasets spanned a variety of topics from citation networks Sen et al. (2008) to social co-occurences graphs Tang et al. (2009) to web link networks (Craven et al., 1998; Rozemberczki et al., 2021). From the modeling perspective, a common set of datasets was useful to benchmark different methods and one set of networks which quickly saw serialization were the citation datasets (Cora, Citeseer, Pubmed) popularized by Yang et al. (2016). Yang et al. (2016) defined the "sparse" train-test split on the citation datasets and their node masks were made publically available. The sparse split had set 20 nodes per class for training and a 1000 nodes total for testing. These values were held constant for all three citation datasets, meaning larger networks like Pubmed were left with a total label rate of about \(0.3\%\). Quickly following was the semi-supervised work of Kipf and Welling (2017) and Velickovic et al. (2018). These follow-up papers considered an additional 500 previously unlabeled nodes to use as validation. In the respective code implementations of each paper, these additional labels were used in an early stopping criterion for the final model checkpoint. Introduced later was the "dense" split by Pei et al. 
(2020), where train, validation, and test were now fractions Figure 3: LR Kernel performance relative to the full-rank Kernel for different truncation factors \(r\). Performance is seen to gradually decline on most datasets as the truncation factor \(r\) increases. LR Kernel performance can also be seen to periodically increase above full-rank Kernel performance for the datasets Chameleon (red) and Squirrel (purple). Figure 2: Accuracy comparison of the Kernel model for different graph representations \(A\) and \(D-A\). Shown above is the signed accuracy difference between the adjacency and Laplacian representations. Best performing kernel was selected per dataset. of the whole graph, set to 60%-20%-20% respectively. This paper also popularized two new benchmark datasets, the WebKB dataset Craven et al. (1998) and the Wikipedia animal pages network Rozemberczki et al. (2021). After the introduction of these new benchmarks, a new "balanced" split was proposed by Chien et al. (2021). Here, each class in a network was masked according to a 60%-20%-20% split, which are then collected into the final train-validation-test splits. Both the new datasets and the balanced splits are common benchmarking practices for current SSNC papers and implementations of these conventions can be found in the code of various SOTA GNN papers (Luan et al., 2022; Wang and Zhang, 2022). Provided in Figures 4-5 are visualizations on the impacts of different evaluation techniques on simple linear models (\(\mathbf{X}\mathbf{W}\) and \(\mathbf{AXW}\)). To keep things comparable to the sparse split, both the learning rate (\(10^{-3}\)) and the weight decay (0.0) were fixed for the Adam optimizer. Despite this lack of tuning, note that the best of these models, per dataset, achieve roughly 85% and above of the performance relative to SOTA SSNC methods. For the high-end of this performance, see the classification results on the Squirrel dataset in Figure 5 where a mean accuracy of 77.3% is achieved. New GNN architectures which make use of the more recent splitting techniques may also experience a performance bump similar to our linear models. This perhaps leads to an overstatement of the modeling contribution for certain new architecture and, on the downside, has the potential to persuade later researchers to incorporate unnecessary modeling complexities in their SSNC experiments. For this reason, we believe it is Figure 4: Accuracy results and uncertainties on the citation datasets using different splits with linear models \(\mathbf{XW}\) and \(\mathbf{AXW}\). “Public” refers to the split introduced by Kipf and Welling (2017). Both “Sparse” and “Public” are single splits, so one cannot associate uncertainty to them. Figure 5: Accuracy results on datasets introduced by Pei et al. (2020). “Dense” refers to the original split while “Balanced” refers to the split introduced by Chien et al. (2021). Test results and uncertainties are evaluated using models \(\mathbf{XW}\) and \(\mathbf{AXW}\). Results shown are on the method with best validation per dataset. important to be upfront on the impact that the different splitting conventions have on performance. ## 6 Conclusions We have shown how classically-inspired nonparametric techniques can be used to match, and sometimes exceed, previous spectral and non-linear GNN approaches. Our methods make no use of post-model augmentations such as dropout (Srivastava et al., 2014) or batchnorm (Ioffe and Szegedy, 2015) allowing for clean theoretical analysis in future work. 
We briefly note, that the formulation of the spectral kernel model itself may be of theoretical interest, as its simplified variants have ties to low-rank, noisy matrix sensing problems Fazel et al. (2008); Zhong et al. (2015); Deng et al. (2023). Elaborating a little further, assume a regression setting with a scalar real-valued \(y_{i}\), and let \(\mathbf{\beta}\in\mathbb{R}^{d}\) take the place of our linear weights. In this case, our evaluation outputs will be scalar valued, so the \(j\)-th evaluation of the LR Kernel model can be rearranged as \[h_{j}(A,X) =\mathbf{e}_{j}^{T}\mathbf{U}(\mathrm{diag}(\mathbf{K}\mathbf{\alpha})\mathbf{U}^{T} )\mathbf{X}\mathbf{\beta}\] \[=\sum_{i=1}^{r}\mathbf{e}_{j}^{T}\mathbf{u}_{i}(\mathbf{e}_{i}^{T}\mathbf{K}\mathbf{ \alpha})\mathbf{u}_{i}^{T}\mathbf{X}\mathbf{\beta}\] \[=\sum_{i=1}^{r}(\widetilde{\mathbf{k}}_{i}^{j})^{T}\mathbf{\alpha}\mathbf{ \beta}^{T}\widetilde{\mathbf{x}}_{i}\] \[=\Big{\langle}\sum_{i=1}^{r}\widetilde{\mathbf{k}}_{i}^{j}\widetilde {\mathbf{x}}_{i}^{T},\mathbf{\alpha}\mathbf{\beta}^{T}\Big{\rangle}_{F},\] where \(\widetilde{\mathbf{k}}_{i}^{j}=u_{i,j}\mathbf{K}\mathbf{e}_{i}\), \(\widetilde{\mathbf{x}}_{i}=\mathbf{X}^{T}\mathbf{u}_{i}\), and \(\langle\mathbf{A},\mathbf{B}\rangle_{F}:=\mathrm{tr}(\mathbf{A}^{T}\mathbf{B})\) is the Frobenius matrix inner product. This formulation has the goal to estimate a rank-1 matrix parameter \(\mathbf{\alpha}\mathbf{\beta}^{T}\) given \(n\), rank-\(r\) linear measurements of the form \(\mathcal{A}_{j}(\cdot)=\langle\sum_{i=1}^{r}\widetilde{\mathbf{k}}_{i}^{j} \widetilde{\mathbf{x}}_{i}^{T},\cdot\rangle_{F}\). If our underlying assumption is that adjacency \(\mathbf{A}\) is noisy then the construction of \(\widetilde{\mathbf{k}}_{i}^{j}\), and therefore our linear measurements, must be noisy as well. On the empirical side, we explored pertinent hyperparameters to the spectral kernel model and showed how the dependence on these parameters may vary across different network datasets. On the low-rank side, we showed how spectral truncation can homogenize response outcomes for different kernel choices. Additionally, it was shown that performance declines gradually with increases in the truncation factor, pointing to practical speed-ups for non-parametric kernel implementations. Lastly, we looked at how evaluation conventions on SSNC tasks have changed since the introduction of popular network datasets. We highlighted the recently defined balanced split and showed how its use can lead to increases in performance outside of what may be expected by uncertainty. By bringing attention to these changes, we hope to even the field on benchmark comparisons for later SSNC works, allowing future researchers to accurately compare their methods to previous SOTA results.
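As a numerical sanity check on the rearrangement of \(h_{j}(A,X)\) above, the following short NumPy sketch (ours) confirms that evaluating the spectral kernel model directly agrees with the Frobenius inner product of the rank-\(r\) measurement matrices against the rank-one parameter \(\boldsymbol{\alpha}\boldsymbol{\beta}^{T}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 12, 5
r = n                                              # full rank, for simplicity

A = rng.standard_normal((n, n)); A = (A + A.T) / 2
lams, U = np.linalg.eigh(A)
X = rng.standard_normal((n, d))
alpha, beta = rng.standard_normal(n), rng.standard_normal(d)
K = np.exp(-np.subtract.outer(lams, lams) ** 2)    # any symmetric kernel matrix on the spectrum

# direct evaluation: h(A, X) = U diag(K alpha) U^T X beta
h_direct = U @ np.diag(K @ alpha) @ U.T @ X @ beta

# rearranged evaluation: h_j = <sum_i k~_i^j x~_i^T, alpha beta^T>_F
X_tilde = X.T @ U                                  # column i is x~_i = X^T u_i
h_rearranged = np.array([
    np.sum(sum(U[j, i] * np.outer(K[:, i], X_tilde[:, i]) for i in range(r))
           * np.outer(alpha, beta))
    for j in range(n)
])

print(np.allclose(h_direct, h_rearranged))         # True
```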
2303.05368
Encryption with Quantum Public Keys
It is an important question to find constructions of quantum cryptographic protocols which rely on weaker computational assumptions than classical protocols. Recently, it has been shown that oblivious transfer and multi-party computation can be constructed from one-way functions, whereas this is impossible in the classical setting in a black-box way. In this work, we study the question of building quantum public-key encryption schemes from one-way functions and even weaker assumptions. Firstly, we revisit the definition of IND-CPA security to this setting. Then, we propose three schemes for quantum public-key encryption from one-way functions, pseudorandom function-like states with proof of deletion and pseudorandom function-like states, respectively.
Alex B. Grilo, Or Sattath, Quoc-Huy Vu
2023-03-09T16:17:19Z
http://arxiv.org/abs/2303.05368v3
# Encryption with Quantum Public Keys ###### Abstract It is an important question to find constructions of quantum cryptographic protocols which rely on weaker computational assumptions than classical protocols. Recently, it has been shown that oblivious transfer and multi-party computation can be constructed from one-way functions, whereas this is impossible in the classical setting in a black-box way. In this work, we study the question of building quantum public-key encryption schemes from one-way functions and even weaker assumptions. Firstly, we revisit the definition of IND-CPA security to this setting. Then, we propose three schemes for quantum public-key encryption from one-way functions, pseudorandom function-like states with proof of deletion and pseudorandom function-like states, respectively. ## 1 Introduction The use of quantum resources to enable cryptographic tasks under weaker assumptions (or even _unconditionally_) than classically were actually the first concrete proposals of quantum computing, with the seminal quantum money protocol of Wiesner [21] and the key-exchange protocol of Bennet and Brassard [1]. Since then, it became a fundamental question in the field of quantum cryptography to find other primitives that can be implemented under weaker computational assumptions. It has recently shown that there exist quantum protocols for Oblivious Transfer (and therefore arbitrary multi-party computation (MPC)) based on the existence of one-way functions (OWF) [1, 2]. Moreover, the proposed protocols use simple quantum technology that is available currently. The assumption to construct these primitives has been recently improved by showing that the existence of pseudo-random states (PRS) is sufficient for such primitives. Notice that existence of PRS is plausibly a strictly weaker assumption than the existence of OWF, given that PRS families can be constructed from OWF in a black-box way [16], and we have oracle separations between PRS and OWF [14, 2]. We notice that classically, OT and MPC are "Cryptomania" objects, meaning that they can be constructed from assumptions that imply public-key encryption (PKE), but there are oracle separations between OWF and PKE (and thus OT and MPC) [13]. Therefore, we do not expect that such powerful objects can be built classically from OWF. In this work, we investigate the following natural question: Can we have quantum protocols for public-key encryption, the heart of Cryptomania, based on post-quantum one-way functions, or even weaker assumptions? Recent results imply that it is improbable to achieve PKE schemes from OWF if the public-key and ciphertext are classical even if the encryption or decryption algorithms are quantum [1]. Therefore, in this work, we will consider schemes where the public-key or ciphertext are quantum states. We notice that the first problem that we need to address is the syntax of quantum public-key encryption (qPKE) and the corresponding security games. We need to provide a general definition for qPKE where both the public-key and ciphertext might be general quantum states. Furthermore, we note that if the public-key is a quantum state, it might be measured, and the ciphertexts might depend on the measurement outcome. This motivates a stronger definition in which the adversary gets oracle access to the encryption, which we call IND-CPA-EO security. 
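To fix ideas, the following schematic Python sketch illustrates an IND-CPA game augmented with an encryption oracle, in the spirit just described. It is purely illustrative: the interfaces `KeyGen` and `Enc`, the number of public-key copies, and the way copies are consumed are placeholder assumptions of ours, not the definitions given later in the paper.

```python
import secrets

def ind_cpa_eo_game(scheme, adversary, num_qpk_copies=4):
    """Schematic IND-CPA game with an encryption oracle (EO).

    Placeholder interface: scheme.KeyGen(n) -> (list of n quantum public-key
    copies, secret key dk); scheme.Enc(qpk_copy, m) -> ciphertext, possibly
    consuming the given copy.
    """
    qpk_copies, dk = scheme.KeyGen(num_qpk_copies)
    b = secrets.randbits(1)

    def enc_oracle(m):
        # the challenger encrypts on the adversary's behalf using a fresh copy
        return scheme.Enc(qpk_copies.pop(), m)

    # phase 1: oracle queries, then the adversary picks the challenge messages
    m0, m1 = adversary.choose_messages(enc_oracle)
    challenge = scheme.Enc(qpk_copies.pop(), m1 if b else m0)

    # phase 2: further oracle queries, then a guess for b
    b_guess = adversary.guess(challenge, enc_oracle)
    return b_guess == b
```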
With our new security definition in hand, we propose three protocols for implementing qPKE from OWF or weaker assumptions, with different advantages and disadvantages. More concretely, we show the following: 1. Assuming the existence of post-quantum OWFs, there exists a qPKE scheme with quantum public-keys and classical ciphertexts that is IND-CPA-EO security. 2. Assuming the existence of pseudo-random function-like states with proof of destruction (PRFSPDs), there is a qPKE scheme with quantum public-key and classical ciphertext that is IND-CPA-EO secure. 3. Assuming the existence of pseudo-random function-like states (PRFSs) with super-logarithmic input-size1, there is a qPKE scheme with quantum public-key and quantum ciphertext. In this scheme, the quantum public key enables the encryption of a single message. Footnote 1: Note that PRS implies PRFS with logarithmic size inputs. No such implication is known for super-logarithmic inputs. We would like to stress that for the first two constructions, even if the public-key is a quantum state, the ciphertexts are classical and one quantum public-key can be used to encrypt multiple messages. Our third construction shows how to construct quantum public key encryption from assumptions (the existence of pseudorandom function-like states) which are potentially weaker than post-quantum one-way functions, but the achieved protocol only allows the encryption of one message per public-key. We would also like to remark that it has been recently shown that OWFs imply PRFSs with super-logarithmic input-size [1] and PRFSPDs [2]. Therefore, the security of the second and third protocols is based on a potentially weaker cryptographic assumption than the first one. Furthermore, PRFSs with super-logarithmic input-size is _separated_ from one-way functions [13]; therefore, our third result shows a black-box separation between a certain form of quantum public key encryption and one-way functions. However, the first protocol is much simpler to describe and understand since it only uses standard (classical) cryptographic objects. Moreover, it is the only scheme that achieves perfect correctness. ### Technical overview In this section we provide a technical overview of our results. In Section 1.1.1, we explain the challenges and choices in order to define qPKE and its security definition. In Section 1.1.2, we present our constructions for qPKE. #### 1.1.1 Definitions of qPKE and IND-CPA-EO In order to consider public-key encryption schemes with quantum public-keys, we need to first revisit the security definition and we define a new security game that we call IND-CPA-EO. In the classical-key case (even with quantum ciphertexts), the adversary is given a copy the public-key \(\mathsf{pk}\) and therefore is able to run the encryption algorithm \(\mathsf{Enc}(\mathsf{pk},\cdot)\) as many times they want (where the only constraint is that the adversary is polynomially bounded), and just then choose the messages \(m_{0}\) and \(m_{1}\) in the IND-CPA game. In the quantum public-key case, the first issue is that \(\mathsf{Enc}(\rho_{\mathpzc{qp}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
#### 1.1.2 Constructions of qPKE

**qPKE from OWFs.** Our first scheme assumes a post-quantum pseudorandom function \(f\) together with a post-quantum IND-CPA symmetric-key encryption scheme. The decryption key is a PRF key \(\mathsf{dk}\), and the quantum public-key state is \[|\mathpzc{qpk}\rangle=\frac{1}{\sqrt{2^{\lambda}}}\sum_{x\in\{0,1\}^{\lambda}}|x\rangle|f_{\mathsf{dk}}(x)\rangle.\] To encrypt, the sender measures both registers of \(|\mathpzc{qpk}\rangle\) in the computational basis, obtaining a uniformly random \(x\) together with \(y=f_{\mathsf{dk}}(x)\), and outputs \(x\) along with a symmetric-key encryption of the message under the key \(y\); the receiver recomputes \(y=f_{\mathsf{dk}}(x)\) and decrypts (Figure 1).

**qPKE from PRFSPDs.** Our second scheme follows the same template, but replaces the PRF with a pseudorandom function-like state with proof of destruction: the public key consists of superpositions of PRFSPD states, the encryptor measures them to obtain input strings and proofs of destruction, and encodes a fresh symmetric key bit-by-bit by revealing either a valid proof or a uniformly random string; the receiver recovers the key by verifying each proof and then decrypts (Figure 2).

**qPKE from PRFSs.** Finally, our third scheme uses the public-key \[\frac{1}{\sqrt{2^{\lambda}}}\sum_{x\in\{0,1\}^{\lambda}}|x\rangle|\psi_{\mathsf{dk},x}\rangle,\] where \(\{|\psi_{k,x}\rangle\}_{k,x}\) is a PRFS family and the size of the input \(x\) is super-logarithmic in the security parameter. The encryptor will then measure the first register of \(|\mathpzc{qpk}\rangle\), obtaining a classical string \(x\) and the residual state \(|\psi_{\mathsf{dk},x}\rangle\); the ciphertext consists of \(x\) together with \(|\psi_{\mathsf{dk},x}\rangle\) if the message bit is \(0\), or a maximally mixed state if it is \(1\) (Figure 3).
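As a rough illustration of the information flow in this third scheme, the following purely classical toy sketch stands in for the quantum objects: the PRFS state \(|\psi_{\mathsf{dk},x}\rangle\) is modeled by an opaque label, measuring the public key is modeled by sampling a random \(x\), and the equality test in decryption replaces the comparison of quantum states. All function names are ours; this is not an implementation of the actual quantum scheme.

```python
import hashlib
import secrets

# Toy classical stand-in for the PRFS-based scheme: |psi_{dk,x}> is modeled by
# an opaque label derived from (dk, x).  Names and helpers are ours.


def prfs_label(dk: bytes, x: bytes) -> bytes:
    # Stand-in for the PRFS state |psi_{dk,x}>; a real scheme outputs a quantum
    # state here, which in particular cannot be copied like this string.
    return hashlib.sha256(b"prfs" + dk + x).digest()


def measure_qpk(dk: bytes) -> tuple:
    # Measuring the first register of |qpk> yields a uniformly random x and the
    # residual state |psi_{dk,x}>; one public key is consumed per encryption.
    x = secrets.token_bytes(16)
    return x, prfs_label(dk, x)


def enc(measured_qpk: tuple, bit: int) -> tuple:
    x, state = measured_qpk
    # Bit 0: send the PRFS state; bit 1: send a "maximally mixed" stand-in (noise).
    return x, (state if bit == 0 else secrets.token_bytes(32))


def dec(dk: bytes, ciphertext: tuple) -> int:
    x, received = ciphertext
    # The real scheme compares quantum states; the toy uses string equality.
    return 0 if received == prfs_label(dk, x) else 1


dk = secrets.token_bytes(32)
assert dec(dk, enc(measure_qpk(dk), 0)) == 0
assert dec(dk, enc(measure_qpk(dk), 1)) == 1
```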
## 2 Definitions and Preliminaries ### Notation Throughout this paper, \(\lambda\) denotes the security parameter. The notation \(\mathsf{negl}(\lambda)\) denotes any function \(f\) such that \(f(\lambda)=\lambda^{-\omega(1)}\), and \(\mathsf{poly}(\lambda)\) denotes any function \(f\) such that \(f(\lambda)=\mathcal{O}(\lambda^{c})\) for some \(c>0\). When sampling uniformly at random a value \(a\) from a set \(\mathcal{U}\), we employ the notation \(a\leftarrow_{R}\mathcal{U}\). ### Quantum Pseudorandomness with Proofs of Destruction The rest of this section is taken verbatim from [1]. ``` 1:Given input \(1^{\lambda}\), Challenger samples \(k\leftarrow\{0,1\}^{w(\lambda)}\) uniformly at random. 2:\(\mathcal{A}\) sends \(m\) to the challenger. 3:Challenger runs \(\mathsf{Gen}(k)^{\otimes m}\) and sends \(|\psi_{k}\rangle^{\otimes m}\) to \(\mathcal{A}\). 4:\(\mathcal{A}\) gets classical oracle access to \(\mathsf{Ver}(k,\cdot)\). 5:\(\mathcal{A}\) outputs \(c_{1},c_{2},\ldots,c_{m+1}\) to the challenger. 6:Challenger rejects if \(c_{i}\)'s are not distinct. 7:for \(i\in[m+1]\) do Challenger runs \(b_{i}\leftarrow\mathsf{Ver}(k,c_{i})\) 8:endfor 9:Return \(\wedge_{i=1}^{m+1}b_{i}\). ``` **Algorithm 2** (Pseudorandom function-like state generator with proofs of destruction).: _A PRFSPD scheme with key-length \(w(\lambda)\), input-length \(d(\lambda)\), output length \(n(\lambda)\) and proof length \(c(\lambda)\) is a tuple of QPT algorithms \(\mathsf{Gen},\mathsf{Destruct},\mathsf{Ver}\) with the following syntax:_ 1. \(|\psi_{k}^{x}\rangle\leftarrow\mathsf{Gen}(k,x)\)_: takes a key_ \(k\in\{0,1\}^{w}\)_, an input string_ \(x\in\{0,1\}^{d(\lambda)}\)_, and outputs an_ \(n\)_-qubit pure state_ \(|\psi_{k}^{x}\rangle\)_._ 2. \(p\leftarrow\mathsf{Destruct}(|\phi\rangle)\)_: takes an_ \(n\)_-qubit quantum state_ \(|\phi\rangle\) _as input, and outputs a_ \(c\)_-bit classical string_ \(p\)_._ 3. \(b\leftarrow\mathsf{Ver}(k,x,p)\)_: takes a key_ \(k\in\{0,1\}^{w}\)_, a_ \(d\)_-bit input string_ \(x\)_, a_ \(c\)_-bit classical string_ \(p\)_, and outputs a boolean_ \(b\)_._ Correctness._A PRFSPD scheme is said to be correct if for every \(x\in\{0,1\}^{d}\),_ \[\Pr_{k\leftarrow\{0,1\}^{w}}[1\leftarrow\mathsf{Ver}(k,x,p)\mid p\leftarrow\mathsf{Destruct}(|\psi_{k}^{x}\rangle);|\psi_{k}^{x}\rangle\leftarrow\mathsf{Gen}(k,x)]=1\] Security. 1. _Pseudorandomness: A PRFSPD scheme is said to be (adaptively) pseudorandom if for any QPT adversary_ \(\mathcal{A}\)_, and any polynomial_ \(m(\lambda)\)_, there exists a negligible function_ \(\mathsf{negl}(\lambda)\)_, such that_ \[\left|\Pr_{k\leftarrow\{0,1\}^{w}}[\mathcal{A}^{\mathsf{Gen}(k,\cdot)}(1^{\lambda})=1]-\Pr_{\forall x\in\{0,1\}^{d},\,|\phi^{x}\rangle\leftarrow\mu_{(\mathbb{C}^{2})^{\otimes n}}}[\mathcal{A}^{\mathcal{O}_{\{|\phi^{x}\rangle\}}(\cdot)}(1^{\lambda})=1]\right|\leq\mathsf{negl}(\lambda),\] _where the oracle_ \(\mathcal{O}_{\{|\phi^{x}\rangle\}}\) _returns a copy of the Haar-random state_ \(|\phi^{x}\rangle\) _on query_ \(x\)_._ 2. _Unclonability of proofs: A PRFSPD scheme satisfies unclonability of proofs if for any QPT adversary_ \(\mathcal{A}\)_,_ \(\Pr[\textsc{Cloning-Exp}_{\lambda}^{\mathcal{A},\mathsf{PRFSPD}}=1]\leq\mathsf{negl}(\lambda)\)_, where the experiment_ \(\textsc{Cloning-Exp}_{\lambda}^{\mathcal{A},\mathsf{PRFSPD}}\) _is defined in Game 2._ ``` 1: Given input \(1^{\lambda}\), Challenger samples \(k\leftarrow\{0,1\}^{w(\lambda)}\) uniformly at random. 2: Initialize an empty set of variables, \(S\). 3:\(\mathcal{A}\) gets oracle access to \(\mathsf{Gen}(k,\cdot)\), \(\mathsf{Ver}(k,\cdot,\cdot)\) as oracle. 4:for Gen query \(x\) made by \(\mathcal{A}\) do 5:if \(\exists\) variable \(t_{x}\in S\) then \(t_{x}=t_{x}+1\). 6:else Create a variable \(t_{x}\) in \(S\), initialized to \(1\). 7:endif 8:endfor 9:\(\mathcal{A}\) outputs \(x,c_{1},c_{2},\ldots,c_{t_{x}+1}\) to the challenger. 10: Challenger rejects if \(c_{i}\)'s are not distinct. 11:for \(i\in[t_{x}+1]\) do \(b_{i}\leftarrow\mathsf{Ver}(k,x,c_{i})\) 12:endfor 13: Return \(\wedge_{i=1}^{t_{x}+1}b_{i}\). ``` **Game 2** Cloning-Exp\({}^{\mathcal{A},\mathsf{PRFSPD}}_{\lambda}\) ## 3 Security definitions for qPKE In this section, we introduce the new notion of encryption with quantum public keys (Definition 3) and present our indistinguishability-under-chosen-plaintext-attacks security notion for quantum public-key encryption. **Definition 3** (Encryption with quantum public keys).: _Encryption with quantum public keys (qPKE) consists of 4 algorithms with the following syntax:_ 1. \(\mathsf{dk}\leftarrow\mathsf{Gen}(1^{\lambda})\)_: a_ PPT _algorithm, which receives the security parameter and outputs a classical decryption key._ 2. \(|\mathpzc{qpk}\rangle\leftarrow\mathpzc{QPXGen}(\mathsf{dk})\)_: a_ QPT _algorithm, which receives a classical decryption key_ \(\mathsf{dk}\)_, and outputs a quantum public key_ \(|\mathpzc{qpk}\rangle\)_. We require that the output is a pure state, and that_ \(t\) _calls to_ \(\mathpzc{QPXGen}(\mathsf{dk})\) _should yield the same state, that is,_ \(|\mathpzc{qpk}\rangle^{\otimes t}\)_._ 3. 
\((\mathpzc{qpk}^{\prime},\mathpzc{qc})\leftarrow\mathpzc{Enc}(\mathpzc{qpk},m)\)_: a_ QPT _algorithm, which receives a quantum public key_ \(\mathpzc{qpk}\) _and a plaintext_ \(m\)_, and outputs a (possibly classical) ciphertext_ \(\mathpzc{qc}\) _and a recycled public-key_ \(\mathpzc{qpk}^{\prime}\)_._ 4. \(m\leftarrow\mathpzc{Dec}(\mathsf{dk},\mathpzc{qc})\)_: a_ QPT _algorithm, which uses a decryption key_ \(\mathsf{dk}\) _and a ciphertext_ \(\mathpzc{qc}\)_, and outputs a classical plaintext_ \(m\)_. In the case_ \(\mathpzc{qc}\) _is classical, we consider_ \(\mathpzc{Dec}\) _as a_ PPT _algorithm._ We say that a qPKE scheme is complete if for every message \(m\in\{0,1\}^{*}\) and any security parameter \(\lambda\in\mathbb{N}\), the following holds: \[\Pr\left[\mathpzc{Dsc}(\mathsf{dk},\mathpzc{qc})=m\ \middle|\begin{array}{c} \mathsf{dk}\leftarrow\mathsf{Gen}(1^{\lambda})\\ |\mathpzc{qpk}\rangle\leftarrow\mathpzc{QPXGen}(\mathsf{dk})\\ (\mathpzc{qpk}^{\prime},\mathpzc{qc})\leftarrow\mathpzc{Enc}(|\mathpzc{qpk} \rangle,m)\end{array}\right]\geq 1-\mathsf{negl}(\lambda),\] where the probability is taken over the randomness of Gen, \(\mathpzc{QPXGen}\) and \(\mathpzc{Enc}\). Next, we present a quantum analogue of classical indistinguishability under chosen-plaintext attacks security (denoted as IND-CPA) for qPKE. **Definition 4**.: _A qPKE scheme is IND-CPA secure if for every QPT adversary, there exists a negligible function \(\epsilon\) such that the probability of winning the IND-CPA security game (see Game 3) is at most \(\frac{1}{2}+\epsilon(\lambda)\)._ Note that this is the standard CPA-security game of a public-key encryption scheme, with the exception that the adversary can receive polynomially many copies of \(\lfloor\mathpzc{qp}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! **Game 5** (Multi-challenge) Chosen plaintext attack with an encryption oracle (IND-CPA-EO) security game for encryption with quantum public key schemes. ``` 1:The challenger generates \(\mathsf{dk}\leftarrow\mathsf{Gen}(1^{\lambda})\). 2:The adversary gets \(1^{\lambda}\) as an input, and oracle access to \(\mathcal{QPKI}\!\left(\mathsf{dk}\right)\). 3:The challenger generates \(\left|\mathpzc{qpK}\right\rangle\leftarrow\mathcal{QPKI}\!\left(\mathsf{dk}\right)\). Let \(\mathpzc{qpK}_{1}\coloneqq\left|\mathpzc{qpK}\right\rangle\). 
4:For \(i=1,\ldots,\ell\), the adversary creates a classical message \(m_{i}\) and send it to the challenger. 5:The challenger computes \(\left(\mathpzc{qc}_{i},\mathpzc{qpK}_{i+1}\right)\leftarrow\mathpzc{Enc}( \mathpzc{qpK}_{i},m_{i})\) and send \(\mathpzc{qc}_{i}\) to the adversary. 6:The adversary sends two messages \(m^{\prime}_{0},m^{\prime}_{1}\) of the same length to the challenger. 7:The challenger samples \(b\in_{R}\{0,1\}\), computes \(\left(\mathpzc{qc}^{*},\mathpzc{qpK}_{i+2}\right)\leftarrow\mathpzc{Enc}( \mathpzc{qpK}_{\ell+1},m^{\prime}_{b})\) and sends \(\mathpzc{qc}^{*}\) to the adversary. 8:For \(i=\ell+2,\ldots,\ell^{\prime}\), the adversary creates a classical message \(m_{i}\) and send it to the challenger. 9:The challenger computes \(\left(\mathpzc{qc}_{i},\mathpzc{qpK}_{i+1}\right)\leftarrow\mathpzc{Enc}( \mathpzc{qpK}_{i},m_{i})\) and send \(\mathpzc{qc}_{i}\) to the adversary. 10:The challenger and the adversary can repeat step 3 - 9 polynomially many times. For the \(i\)-th repetition, let \(\mathpzc{qpK}_{1,i}\) be \(\mathpzc{qpK}_{l^{\prime}+1,i-1}\). 11:The challenger and the adversary can repeat step 3 - 10 polynomially many times. 12:The adversary outputs a bit \(b^{\prime}\). ``` We say that the adversary wins the game (or alternatively, that the outcome of the game is 1) iff \(b=b^{\prime}\). ## 4 Constructions ### QPKE with Classical ciphertext from OWFs We show a construction based on the existence of post-quantum one-way functions. Recalls that post-quantum one-way functions imply a qPRF [11], and IND-CPA3 symmetric encryption [1]. Footnote 3: In fact, this can be strengthened to IND-qCCA security, but this is unnecessary for our application. **Assumes:** A PRF family \(\{f_{k}\}_{k}\), and a symmetric encryption scheme \(\{\mathsf{Enc},\mathsf{Dec}\}\). \(\mathsf{Gen}(1^{\lambda})\) ``` 1. \(\mathsf{dk}\leftarrow_{R}\{0,1\}^{\lambda}\). 2\(\mathcal{QPKI}\!\left(\mathsf{dk}\right)\) 3. Output \(\left|\mathpzc{qpK}\right\rangle=\frac{1}{\sqrt{2^{\lambda}}}\sum_{x\in\{0,1\} ^{\lambda}}|x\rangle|f_{\mathsf{dk}}(x)\rangle\). 4\(\mathpzc{Enc}(\left|\mathpzc{qpK}\right\rangle,m)\) 5. Measure both registers of \(\left|\mathpzc{qpK}\right\rangle\) in the standard basis. Denote the result as \(x\) and \(y\). 6. Output \(c=(x,\mathsf{Enc}(y,m))\). 7\(\mathsf{Dec}_{\mathsf{dk}}(c)\) 8. Interpret \(c\) as \((x,z)\) 9. Set \(y:=f_{\mathsf{dk}}(x)\). 10. Output \(m=\mathsf{Dec}(y,z)\). ``` **Theorem 2**.: _Assuming the existence of quantum-secure PRF family \(\{f_{k}\}_{k}\) and post-quantum IND-CPA symmetric-key encryption scheme \(\left(\mathsf{Enc},\mathsf{Dec}\right)\), any QPT adversary \(\mathcal{A}\) wins the IND-CPA-EO game for the scheme presented in Figure 1 with advantage at most \(\mathsf{negl}(\lambda)\)._ Proof.: In order to prove our results, we define the following Hybrids. **Hybrid \(H_{0}\).** The original security game as defined in Algorithm 4. Figure 1: An encryption scheme with quantum public keys. **Hybrid \(H_{1}\).** Same as Hybrid \(H_{0}\), except that the challenger, instead of measuring \(\left|\mathpzc{qp}\xi\right\rangle\) when the adversary queries the encryption oracle for the first time, the challenger measures this state before providing the copies of \(\left|\mathpzc{qp}\xi\right\rangle\) to the adversary. Note that by measuring \(\left|\mathpzc{qp}\xi\right\rangle\) in the computational basis, the challenger would obtain a uniformly random string \(x^{*}\) (and the corresponding \(f_{\mathsf{dk}}(x^{*})\)). 
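For intuition, the classical skeleton of the scheme in Figure 1 can be sketched as follows. Measuring both registers of \(|\mathpzc{qpk}\rangle\) is modeled by handing the encryptor a uniformly random \(x\) together with \(y=f_{\mathsf{dk}}(x)\); HMAC-SHA256 stands in for the post-quantum PRF, and a hash-derived one-time pad stands in for the IND-CPA symmetric scheme. Both instantiations are placeholder choices of ours, not the primitives assumed by Theorem 2.

```python
import hashlib
import hmac
import secrets

# Classical skeleton of Figure 1.  HMAC-SHA256 stands in for the post-quantum
# PRF f_dk; a hash-derived one-time pad stands in for the symmetric scheme
# (Enc, Dec).  Both are placeholders (messages limited to 32 bytes here).


def f(dk: bytes, x: bytes) -> bytes:
    return hmac.new(dk, x, hashlib.sha256).digest()


def sym_enc(key: bytes, m: bytes) -> tuple:
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(key + nonce).digest()[: len(m)]
    return nonce, bytes(a ^ b for a, b in zip(m, pad))


def sym_dec(key: bytes, ct: tuple) -> bytes:
    nonce, body = ct
    pad = hashlib.sha256(key + nonce).digest()[: len(body)]
    return bytes(a ^ b for a, b in zip(body, pad))


def gen(lam: int = 32) -> bytes:            # Gen(1^lambda): sample dk
    return secrets.token_bytes(lam)


def enc(qpk_measurement: tuple, m: bytes) -> tuple:   # Enc(|qpk>, m)
    # Measuring both registers of |qpk> gives a uniformly random x and y = f_dk(x);
    # the honest encryptor only sees this pair, never dk itself.
    x, y = qpk_measurement
    return x, sym_enc(y, m)


def dec(dk: bytes, c: tuple) -> bytes:      # Dec_dk(c)
    x, z = c
    return sym_dec(f(dk, x), z)


dk = gen()
x = secrets.token_bytes(32)                 # models the measurement outcome
assert dec(dk, enc((x, f(dk, x)), b"attack at dawn")) == b"attack at dawn"
```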
**Hybrid \(H_{2}\).** Same as Hybrid \(H_{1}\), except that the challenger samples \(x^{*}\) as in the previous hybrid, and instead of providing \(\left|\mathpzc{qp}\xi\right\rangle\) to the adversary, she provides \[\left|\mathpzc{qp}\xi^{\prime}\right\rangle=\frac{1}{\sqrt{2^{\lambda}-1}} \sum_{x\in\{0,1\}^{*},x\neq x^{*}}\left|x\right\rangle\!\left|f_{\mathsf{dk}}( x)\right\rangle\!.\] Moreover, the challenger provides ciphertexts \((x^{*},\mathsf{Enc}(f_{\mathsf{dk}}(x^{*}),m))\) for the chosen messages \(m\). We note that this state \(\left|\mathpzc{qp}\xi^{\prime}\right\rangle\) can be efficiently prepared by computing the functions \(\delta_{x,x^{*}}\) over the state \(\sum_{x}\left|x\right\rangle\) in superposition and measuring the output register. With overwhelming probability, the post-measurement state is \(\sum_{x\neq x^{*}}\left|x\right\rangle\!\). **Hybrid \(H_{3}\).** Same as Hybrid \(H_{2}\), except the challenger uses a random function \(H\) in place of \(f_{\mathsf{dk}}\), and provides \(\left|\mathpzc{qp}\xi^{\prime}\right\rangle=\frac{1}{\sqrt{2^{\lambda}-1}} \sum_{x\in\{0,1\}^{*},x\neq x^{*}}\left|x\right\rangle\!\left|H(x)\right\rangle\). Moreover, for each encryption query, the challenger uses \(H(x^{*})\) as \(\mathpzc{qp}\xi_{i}\), and answers the query with \((x^{*},\mathsf{Enc}(H(x^{*}),m))\) for the chosen message \(m\). **Hybrid \(H_{4}\).** Same as Hybrid \(H_{3}\), except the challenger samples uniformly at random a string \(z\), and uses \(z\) as \(\mathpzc{qp}\xi_{i}\). The answer to each encryption query is now \((x^{*},\mathsf{Enc}(z,m))\) for the chosen message \(m\). We will now show that, given our assumptions, Hybrids \(H_{i}\) and \(H_{i+1}\) are indistinguishable except with probability at most \(\mathsf{negl}(\lambda)\) and that every polynomial-time adversary wins \(H_{4}\) with advantage at most \(\mathsf{negl}(\lambda)\). We can then use triangle inequality to prove the security of Theorem 2. **Lemma 2**.: _No adversary can distinguish Hybrid \(H_{0}\) and Hybrid \(H_{1}\) with non-zero advantage._ Proof.: Notice that the operations corresponding to the challenger's measurement of \(\left|\mathpzc{qp}\xi\right\rangle\) and the creation of the copies of \(\left|\mathpzc{qp}\xi\right\rangle\) given to the adversary commute. In this case, we can swap the order of these operations and the outcome is exactly the same. **Lemma 3**.: _No adversary can distinguish Hybrid \(H_{1}\) and Hybrid \(H_{2}\) with non-negligible advantage._ Proof.: Notice that distinguishing the two adversaries imply that we can distinguish the following quantum states \(\left|\mathpzc{qp}\xi\right\rangle^{\otimes p}\otimes\left|x^{*}\right\rangle\) and \(\left|\mathpzc{qp}\xi^{\prime}\right\rangle^{\otimes p}\otimes\left|x^{*}\right\rangle\), but these two quantum states have \(1-\mathsf{negl}(\lambda)\) trace-distance for any polynomial \(p\). Therefore, this task can be performed with success at most \(\mathsf{negl}(\lambda)\). **Lemma 4**.: _No QPT adversary can distinguish Hybrid \(H_{2}\) and Hybrid \(H_{3}\) with non-negligible advantage._ Proof.: Suppose that there exists an adversary \(\mathcal{A}\) such that \(\Pr[F_{1}]-\Pr[F_{2}]\geq\frac{1}{p(\lambda)}\) for some polynomial \(p\), where \(F_{1}\) is the event where \(\mathcal{A}\) outputs \(1\) on \(H_{2}\) and \(F_{2}\) is the event where \(\mathcal{A}\) outputs \(1\) on \(H_{3}\). Then, we show that we can construct an adversary \(\mathcal{A}^{\prime}\) that can distinguish the PRF family from random. 
\(\mathcal{A}^{\prime}\) will behave as the challenger in the CPA-EO game and instead of computing \(f_{\mathsf{dk}}\) to create \(\left|\mathpzc{qp}\xi^{\prime}\right\rangle\) and answer the encryption queries, she queries the oracle \(O\) (that is either a PRF or a random function). \(\mathcal{A}^{\prime}\) then outputs \(1\) iff \(\mathcal{A}\) outputs \(1\). Notice that if \(O\) is a PRF, then the experiment is the same as Hybrid \(H_{2}\). On the other hand, if \(O\) is a random oracle, the experiment is the same as Hybrid \(H_{3}\). In this case, we have that \[\Pr_{O\sim\mathcal{F}}[\mathcal{A}^{O}()=1]-\Pr_{\mathsf{dk}}[ \mathcal{A}^{\prime\not\ell_{\mathsf{dk}}}()=1]\] \[=\Pr[F_{1}]-\Pr[F_{2}]\] \[\geq\frac{1}{p(\lambda)}.\] **Lemma 5**.: _No adversary can distinguish Hybrid \(H_{3}\) and Hybrid \(H_{4}\) with non-zero advantage._ Proof.: Since the adversary never gets the evaluation of \(H(x^{*})\) (as \(x^{*}\) was punctured from all \(\left\lvert\mathpzc{qp}\hat{\mathcal{K}}\right\rangle\)), the distributions of the two hybrid are identical. **Lemma 6**.: _Any polynomially-bounded adversary wins the game in Hybrid \(H_{4}\) with an advantage at most \(\mathsf{negl}(\lambda)\)._ Proof.: Suppose that there exists an adversary \(\mathcal{A}\) such that wins the game in Hybrid \(H_{4}\) with advantage \(\frac{1}{p(\lambda)}\) for some polynomial \(p\). Then, we show that we can construct an adversary \(\mathcal{A}^{\prime}\) that can break IND-CPA security of the symmetric-key encryption scheme with the same probability. \(\mathcal{A}^{\prime}\) will simulate \(\mathcal{A}\) and for that, she picks \(x^{*}\) and \(z\), creates \(|\mathpzc{qp}\xi^{\prime}\rangle=\frac{1}{\sqrt{2^{\lambda}-1}}\sum_{x\in\{0,1 ^{*},x\neq x^{*}\mid x\rangle|H(x)\}}\) using the compressed oracle technique [10] and uses oracle provided by the IND-CPA game of the symmetric-key encryption scheme for answering the encryption oracles. \(\mathcal{A}^{\prime}\) will output \(1\) iff \(\mathcal{A}\) outputs \(1\). We note that the encryption key \(z\) is sampled uniformly at random independently of all other variables. We have that the winning probability of \(\mathcal{A}^{\prime}\) in the IND-CPA game is the same of \(\mathcal{A}\) in the IND-CPA-EO game. ### QPKE with Classical Ciphertexts from PRFSPDs In this section, we propose a construction for qPKE from pseudo-random function-like states with proof of destruction. In this section, we construct a qPKE scheme based on PRFSPD. For that, we need the following result that builds _symmetric_-key encryption from such an assumption. **Proposition 1** ([1]).: _If quantum-secure PRFSPD exists, then there exists a quantum CPA symmetric encryption with classical ciphertexts._ **Theorem 3**.: _If quantum-secure PRFSPD with super-logarithmic input size exists, then there exists a public-key encryption with classical ciphertexts which is IND-CPA-EO secure._ Proof.: Our construction is given in Fig. 2. It uses a PRFSPD, as well as a quantum CPA _symmetric_ encryption with classical ciphertexts. Such symmetric encryption is known to exist, based on PRFSPD: In order to prove our results, we define the following Hybrids. **Hybrid \(H_{0}\).** This is the original security game. **Assumes:** A PRFSPD family \(\{|\psi_{\textsf{dk},x}\rangle\}_{\textsf{dk},x}\) and a quantum symmetric encryption scheme with classical ciphers \(\{\textsf{Enc},\textsf{Dec}\}\). \(\textsf{Gen}(1^{\lambda})\) 1. \(\textsf{dk}\leftarrow_{R}\{0,1\}^{\lambda}\). \(\textsf{QPXGen}(\textsf{dk})\) 1. 
Output \(|\textsf{\emph{4pt}}\xi\rangle=\bigotimes_{i\in[\lambda]}\frac{1}{\sqrt{2^{ \lambda}}}\sum_{x^{(i)}\in\{0,1\}^{\lambda}}|x^{(i)}\rangle|\psi_{\textsf{dk}, x^{(i)}}\rangle\). \(\textsf{Enc}(|\textsf{\emph{4pt}}\xi\rangle,m)\) for \(m\in\{0,1\}\) 1. Let \(|\textsf{\emph{4pt}}\xi_{i}\rangle\coloneqq\frac{1}{\sqrt{2^{\lambda}}}\sum_{x ^{(i)}\in\{0,1\}^{\lambda}}|x^{(i)}\rangle|\psi_{\textsf{dk},x^{(i)}}\rangle\), and write \(|\textsf{\emph{4pt}}\xi\rangle\) as \(|\textsf{\emph{4pt}}\xi\rangle=\bigotimes_{i\in[\lambda]}|\textsf{\emph{4pt}} \xi_{i}\rangle\). 2. Measure the left registers of \(|\textsf{\emph{4pt}}\xi_{i}\rangle\) to obtain classical strings \(x^{(i)}\). Denote the post-measurement states as \(|\psi_{i}^{\prime}\rangle\). 3. Set \(y^{(i)}\leftarrow\textsf{\emph{2pt}}(|\psi^{\prime}\rangle)\). 4. Pick \(k=\{0,1\}^{\lambda}\) and \(r^{(i)}=\{0,1\}^{|y^{(i)}|}\) uniformly at random. 5. Set \(\tilde{y}^{(i)}=\begin{cases}r^{(i)}&\text{,if }k_{i}=0\\ y^{(i)}&\text{,if }k_{i}=1\end{cases}\) 6. Output \(\Big{(}\textsf{Enc}(k,m),\Big{(}(x^{(i)},\tilde{y}^{(i)})\Big{)}_{i}\Big{)}\) \(\textsf{Dec}_{\textsf{dk}}(c)\) 1. Interpret \(c\) as \(\Big{(}c^{\prime},\Big{(}(x^{(i)},\tilde{y}^{(i)})\Big{)}_{i}\Big{)}\) 2. Let \(k^{\prime}_{i}=\textsf{\emph{4pt}}(x^{(i)},\tilde{y}^{(i)})\). 3. Output \(\textsf{Dec}(k^{\prime},c^{\prime})\) **Hybrid \(H_{1}\).** Same as Hybrid \(H_{0}\), except that the challenger picks \({x^{(i)}}^{*}\) uniformly at random, instead of providing \(|\textsf{\emph{4pt}}\xi\rangle\) to the adversary, she provides \[|\textsf{\emph{4pt}}\xi^{\prime}\rangle=\bigotimes_{i\in[\lambda]}\frac{1}{ \sqrt{2^{\lambda}-1}}\sum_{x^{(i)}\in\{0,1\}^{\lambda},x^{(i)}\neq x^{(i)}}|x^ {(i)}\rangle|\psi_{\textsf{dk},x^{(i)}}\rangle.\] The challenger uses the states \(|{x^{(i)}}^{*}\rangle|\psi_{\textsf{dk},x^{(i)}}{}^{*}\rangle\) to encrypt the challenge. **Hybrid \(H_{2}\).** Same as Hybrid \(H_{1}\), but to answer the encryption queries, the challenger picks each \(\tilde{y}^{(i)}\) uniformly at random and answers the encryption queries with \(\Big{(}\textsf{Enc}(k,m),\Big{(}(x^{(i)},\tilde{y}^{(i)})\Big{)}_{i}\Big{)}\) We will now show that, given our assumptions, Hybrids \(H_{i}\) and \(H_{i+1}\) are indistinguishable except with probability at most \(\textsf{negl}(\lambda)\) and that every polynomial-time adversary wins \(H_{3}\) with advantage at most \(\textsf{negl}(\lambda)\). We can then use triangle inequality to prove the security of Theorem 3. **Lemma 7**.: _No adversary can distinguish Hybrid \(H_{0}\) and Hybrid \(H_{1}\) with non-zero advantage._ Proof.: The proof follows analogously to Lemmas 2 and 3 and the triangle inequality. **Lemma 8**.: _No adversary can distinguish Hybrid \(H_{2}\) and Hybrid \(H_{3}\) with non-negligible advantage._ Proof.: Suppose that there exists an adversary \(\mathcal{A}\) such that \(\Pr[F_{1}]-\Pr[F_{2}]\geq\frac{1}{p(\lambda)}\) for some polynomial \(p\), where \(F_{1}\) is the event where \(\mathcal{A}\) outputs \(1\) on \(H_{2}\) and \(F_{2}\) is the event where \(\mathcal{A}\) outputs \(1\) on \(H_{3}\). Then, we show that we can construct an adversary \(\mathcal{A}^{\prime}\) that can break the PRFSPD family scheme. 
\(\mathcal{A}^{\prime}\) will behave as the challenger in the CPA-EO game and instead of computing \(|\textsf{\emph{4pt}}\xi^{\prime}\rangle\) by herself, she queries the oracle \(O\) (that on input \(|x\rangle|0\rangle\) answers either a PRFSPD \(|x\rangle|\psi_{\textsf{dk},x}\rangle\) or a Haar random state \(|x\rangle|\vartheta_{x}\rangle\)). Then, \(\mathcal{A}^{\prime}\) picks a random bit \(b\) and performs as follows: Figure 2: An encryption scheme with quantum public keys. * if \(b=0\), \(\mathcal{A}^{\prime}\) answers the encryption queries as Hybrid \(H_{2}\) * if \(b=1\), \(\mathcal{A}^{\prime}\) picks each \(\tilde{y}^{(i)}\) uniformly at random and answers the encryption queries with \(\left(Enc_{k}(m),\left((x^{(i)},\tilde{y}^{(i)})\right)_{i}\right)\) \(\mathcal{A}^{\prime}\) then outputs \(1\) iff \(\mathcal{A}\) outputs \(1\). Notice that if \(b=0\) and \(O\) answers a PRFSPD, then the experiment is the same as Hybrid \(H_{2}\). On the other hand, if \(b=0\) and \(O\) is a random oracle or if \(b=1\), the experiment is the same as Hybrid \(H_{3}\). Let us define the event \(E_{1}\) be the event where \(\mathcal{A}^{\prime}\) outputs \(1\) if \(O\) is a PRFSPD, and \(E_{2}\) be the event where \(\mathcal{A}^{\prime}\) outputs \(1\) if \(O\) answers with Haar random states. In this case, we have that \(\mathcal{A}^{\prime}\) distinguishes the PRFS family from Haar random states with probability \[\Pr[E_{1}]-\Pr[E_{2}]\] \[=\frac{1}{2}\left(\Pr[E_{1}\wedge b=0]+\Pr[E_{1}\wedge b=1]-\Pr[ E_{2}\wedge b=0]-\Pr[E_{2}\wedge b=1]\right)\] \[=\frac{1}{2}\left(\Pr[F_{1}]+\Pr[F_{2}]-\Pr[F_{2}]-\Pr[F_{2}]\right)\] \[\geq\frac{1}{2p(\lambda)}.\] **Lemma 9**.: _Any polynomially-bounded adversary wins the game in Hybrid \(H_{3}\) with an advantage at most \(\mathsf{negl}(\lambda)\)._ Proof.: Suppose that there exists an adversary \(\mathcal{A}\) such that wins the game in Hybrid \(H_{3}\) with advantage \(\frac{1}{p(\lambda)}\) for some polynomial \(p\). Then, we show that we can construct an adversary \(\mathcal{A}^{\prime}\) that can break IND-CPA security of the symmetric-key encryption scheme. \(\mathcal{A}^{\prime}\) will simulate \(\mathcal{A}\) and for that, she picks \(\mathsf{dk}\) and creates \(\left\lvert\mathpzc{qpk}^{\prime}\right\rangle\) and in order to answer the encryption queries, they use the encryption oracle provided by the IND-CPA game. \(\mathcal{A}^{\prime}\) will output \(1\) iff \(\mathcal{A}\) outputs \(1\). We have that the winning probability of \(\mathcal{A}^{\prime}\) in its IND-CPA game is the same of \(\mathcal{A}\) in its IND-CPA-EO game. ### QPKE with Quantum Ciphertexts from PRFSs We finally present our third scheme for qPKE, whose security is based on the existence of PRFS with super-logarithmic input size. **Theorem 4**.: _The construction in Fig. 3 is IND-CPA secure (see Definition 4), assuming \(\{|\psi_{k,x}\rangle\}_{k,x}\) is a PRFS with super-logarithmic input-size._ The proof of this theorem uses the same proof strategy of Theorem 2 and Theorem 3, the only difference is that here the scheme is only IND-CPA secure, while the previous ones are IND-CPA-EO secure. **Assumes:** A PRFS family \(\{|\psi_{\mathsf{dk},x}\rangle\}_{\mathsf{dk},x}\) with super-logarithmic input-size. \(\mathsf{Gen}(1^{\lambda})\) 1. \(\mathsf{dk}\gets_{R}\{0,1\}^{\lambda}\). \(\mathcal{QPXGen}(\mathsf{dk})\) 1. Output \(|\mathpzc{qpK}\rangle=\frac{1}{\sqrt{2\lambda}}\sum_{x\in\{0,1\}^{\lambda}}|x \rangle|\psi_{\mathsf{dk},x}\rangle\). 
\(\mathcal{Enc}(|\mathpzc{qpK}\rangle,m)\) for \(m\in\{0,1\}\) 1. Measure left register, denoted by \(x\). Let \(|\phi\rangle=|\psi_{\mathsf{dk},x}\rangle\) if \(m=0\), and a maximally mixed state otherwise. 2. Output \(c=(x,|\phi\rangle)\). \(\mathsf{Dec}_{\mathsf{dk}}(c)\) 1. Interpret \(c\) as \((x,|\phi\rangle)\) 2. Output \(0\) if \(|\phi\rangle=|\psi_{\mathsf{dk},x}\rangle\), otherwise output \(1\). ### Acknowledgments We wish to thank Prabhanjan Ananth and Umesh Vazirani for related discussions. ABG and QHV are supported by ANR JCJC TCS-NISQ ANR-22-CE47-0004, and by the PEPR integrated project EPiQ ANR-22-PETQ-0007 part of Plan France 2030. OS was supported by the Israeli Science Foundation (ISF) grant No. 682/18 and 2137/19, and by the Cyber Security Research Center at Ben-Gurion University. OS has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 756482).
2301.05511
Quadrature-PT symmetry: Classical-to-quantum transition in noise fluctuations
While gain-loss-coupled photonic platforms have achieved significant success in studying classical parity-time (PT) symmetry, they encounter challenges in demonstrating pure quantum effects due to incompatible operator transformations and Langevin noise. Here, we present compelling evidence that a non-Hermitian (NH) twin-beam system, undergoing phase-sensitive amplification (PSA) and balanced loss, not only enables observing the usual eigenvalue-associated PT phase transition but also exhibits distinctive features absent in classical NH or Hermitian quantum scenarios, encompassing quadrature PT symmetry, anomalous loss-induced quadrature squeezing, and dynamical and stationary classical-to-quantum transitions in noise fluctuations. Furthermore, our proposed bipartite open system promises optimal sensing, showcasing an improved signal-to-noise ratio and sensitivity, constrained by quantum Cram\'{e}r-Rao bound or Fisher information. These findings deepen the comprehension of authentic quantum optical PT symmetry involving both gain and loss, addressing contentious issues and illuminating new facets of the subject.
Wencong Wang, Yanhua Zhai, Dongmei Liu, Xiaoshun Jiang, Saeid Vashahri Ghamsari, Jianming Wen
2023-01-13T12:25:58Z
http://arxiv.org/abs/2301.05511v2
# Quantum-to-classical transition enabled by quadrature-PT symmetry ###### Abstract Quantum Langevin noise makes experimental realization of genuine quantum-optical parity-time (PT) symmetry in a gain-loss-coupled open system elusive. Here, we challenge this puzzle by exploiting twin beams produced from a nonlinear parametric process, one undergoing phase-sensitive linear quantum amplification (PSA) and the other engaging balanced loss merely. Unlike all previous studies involving phase-insensitive amplification (PIA), our PSA-loss scheme allows one quadrature pair to experience PT symmetry, a unique quantum effect without any classical counterpart. Such symmetry showcases many radical noise behaviors beyond conventional quantum squeezing and inaccessible to any PIA-based platform. Importantly, it is the only non-Hermitian system hitherto that enables the emergence of non-Hermiticity-induced quantum-to-classical transition for the same quantum observable when crossing exceptional point. Utilizing this quadrature-PT structure, we have further studied its potential in quantum sensing by exploring the quantum Cramer-Rao bound or Fisher information. Besides, the proposed quadrature PT symmetry also sheds new light on protecting continuous-variable (CV) qubits from decoherence in lossy transmission, a long-standing conundrum for various CV-based quantum technologies. _Introduction.--_In canonical quantum mechanics, the system Hamiltonian as a physical observable is required to be Hermitian to ensure the realness of associated eigenspectra. Yet, it has long been known that the Hermiticity is just a sufficient but not necessary condition for a Hamiltonian to have real eigenvalues. This radical change of view stems from the seminal work by Bender and Boettcher in 1998, where a large class of non-Hermitian quantum Hamiltonians enjoying the joint parity-time (PT) symmetry was discovered to possess entirely real eigenvalues below a phase-transition point or exceptional point (EP) [1]. However, it remains elusive to probe such a non-Hermitian but PT-symmetric quantum Hamiltonian experimentally due to the lack of complex quantum potential in reality. Nevertheless, the notion of PT symmetry [2-9] has successfully survived in many other physical branches. Thanks to the mathematical equivalence between the quantum Schrodinger and paraxial light propagation equation, classical optics was first suggested to simulate the wave properties of PT-symmetric quantum mechanics in synthesized settings. By incorporating linear gain and loss, optics has become a fertile ground for exploring PT symmetry [2-17] with an iconic feature of pair of eigenvalues phase transitioning from purely real to complex conjugate when a parameter crosses the EP. In this regard, a plethora of intriguing phenomena have been uncovered by utilizing various linear and nonlinear optical materials to control and engineer light for practical applications [2-17]. Despite the impressive progress, to date the PT studies have been mostly limited to a mean-field approach that encapsulates all quantum dissipation in an 'effective Hamiltonian' [2-17]. This method treats light essentially as a (semi) classical electromagnetic (EM) field and only retains the minimum number of degrees of freedom to describe an open system. As a result, it fails to yield valid results when the nonclassicality of light are of interest. 
Notably, if the EM field is quantized, one must introduce the Langevin noise operator to preserve its corresponding commutation relation [18], though the introduction of quantum Langevin noise is generally thought to prevent the system from approaching quantum optical PT symmetry [19]. This is especially true when a phase-insensitive linear quantum amplifier (PIA) serves as an optical gain resource. Unfortunately, so far the research on PT optics has all been confined to PIA-based systems, thereby rendering the observation of quantum signatures highly challenging. The gain involvement also encounters another fundamental issue from the famous quantum noncloning theorem [20], namely how to maintain the integrity of the signal state, let alone the limitation further dictated by the Kramers-Kronig relation [21]. Therefore, it becomes a legitimate question whether gain-loss-coupled PT symmetry is viable quantum optically [19]. We notice that two alternative means have recently been implemented by using either a passive scheme or a non-Hermitian subset Hamiltonian in a large Hermitian system [22-27] to bypass the noise issues. These efforts have unearthed some quantum features, but they are incapable of providing a conclusive picture to the problem. We are also aware that a distinct trajectory is devoted to exploiting anti-PT symmetry [28-34], a counterpart of PT, to avoid the adverse effect of Langevin noise [35]. By overcoming the aforementioned obstacles, here we propose a novel and experimentally feasible platform utilizing twin beams generated from a nonlinear optical parametric process such as parametric down conversion (PDC) and four-wave mixing (FWM) [36], with the signal arm experiencing pure loss while the idler channel undergoing phase-sensitive linear quantum amplification (PSA). Thanks to PSA-empowered noiseless amplification [37], our architecture not only makes the observation of true quantum optical PT a reality, but also displays distinctive features unapproachable to any previous scheme. Opposite to PIA equally amplifying paired quadratures with additive noise, PSA maximally amplifies some of them but inversely attenuates the rest without adding extra noise. This asymmetric amplification naturally gives rise to the so-called quadrature PT symmetry, a unique quantum effect without any classical counterpart. Given various never-before-seen attributes, our design further opens a door to uncovering the stunning classical-to-quantum transition for the same physical observable when a non-Hermitian parameter passes through the EP. _Theoretical model._--For simplicity, let us focus on the quadrature-PT setup schematic in Fig. 1, where under perfect phase matching, nondegenerate paired signal-idler waves are parametrically created from vacuum in a counterpropagating geometry by driving an FWM medium of length \(L\). During their propagation, the idler and signal channels are respectively subject to PSA and loss with the rates of \(g\) and \(\gamma\). 
For nondepleted and classical input pump lasers, the system evolves along the \(\mp\)z direction under the influence of the non-Hermitian Hamiltonian \[H=\frac{i\hbar g\big{(}a_{i}^{2}-a_{i}^{\dagger 2}\big{)}}{2}-i\hbar\gamma a_{s }^{\dagger}a_{s}+\hbar\kappa\big{(}a_{i}^{\dagger}a_{s}^{\dagger}+a_{i}a_{s} \big{)}\,, \tag{1}\] and the signal-idler field operators \((\mathrm{a_{s},a_{i}})\) obey the Heisenberg-Langevin equations, Figure 1: PT symmetry undergone by twin beams from a backward-FWM process, where the \(-\)z idler mode experiences PSA and the \(+\)z signal faces equal loss. \[\frac{da_{i}}{dx}=ga_{i}^{\dagger}+i\kappa a_{s}^{\dagger}.\frac{da_{s}}{dz}=- \gamma a_{s}-i\kappa a_{i}^{\dagger}+f_{s}\,, \tag{2}\] with \(\dagger\) denoting Hermitian conjugate, \(\kappa\) the parametric conversion strength, and \(f_{s}\) the quantum Langevin noise of zero mean satisfying \(\langle f_{s}(z)f_{s}^{\dagger}(z^{\prime})\rangle=2\gamma\delta(z-z^{\prime})\) and \(\langle f_{s}^{\dagger}(z)f_{s}(z^{\prime})\rangle=0\). At first glance, the dynamics (2) seems PT-irrelevant even if \(g=\gamma\). Surprisingly, the hidden PT arises if one transforms Eq. (2) into the corresponding quadrature-operator evolution by defining \(q_{j}=\big{(}a_{j}^{\dagger}+a_{j}\big{)}/2\) and \(p_{j}=i\big{(}a_{j}^{\dagger}-a_{j}\big{)}/2\) (\(j=i,s\)) with \(\big{[}q_{j},p_{j}\big{]}=i/2\). That is \[\frac{d}{dx}\begin{bmatrix}q_{i}\\ p_{s}\end{bmatrix} =\begin{bmatrix}g&\kappa\\ -\kappa&-\gamma\end{bmatrix}\begin{bmatrix}q_{i}\\ p_{s}\end{bmatrix}+\begin{bmatrix}0\\ p_{s}\end{bmatrix}, \tag{3a}\] \[\frac{d}{dx}\begin{bmatrix}p_{i}\\ q_{s}\end{bmatrix} =\begin{bmatrix}-g&\kappa\\ -\kappa&-\gamma\end{bmatrix}\begin{bmatrix}P_{i}\\ q_{s}\end{bmatrix}+\begin{bmatrix}0\\ Q_{s}\end{bmatrix}, \tag{3b}\] where \(P_{s}=i\big{(}f_{s}^{\dagger}-f_{s}\big{)}/2\) and \(Q_{s}=\big{(}f_{s}^{\dagger}+f_{s}\big{)}/2\) are the Langevin-noise quadrature operators. The underlying physics now becomes apparent. The \(g\)-\(\gamma\) introduction fundamentally intervenes the evolution of the usual two-mode squeezing. Specifically, for \(g=\gamma\), albeit the impact of \(P_{s}\), \(\{q_{i},p_{s}\}\) in Eq. (3a) become a PT-quadrature pair and adhere to PT-manifested noise reduction with the advent of nontrivial phase transition at the EP (\(\gamma=\kappa\)) for the pair of eigen-propagation constants ( \(\beta=\sqrt{\kappa^{2}-\gamma^{2}},-\beta\) ) transiting from purely real to conjugate imaginary. In contrast, the conjugate pair \(\{p_{i},q_{s}\}\) in Eq. (3b) simply follow \(Q_{s}\)-mediated two-mode quadrature squeezing, with their propagation decoupled from \(\{q_{i},p_{s}\}\). Such asymmetric and contrasting dynamics are unavailable to any existing setting. More strikingly, the system will facilitate dual opposing quadrature PT symmetry-\(\{q_{i},p_{s}\}\) for active while \(\{p_{i},q_{s}\}\) for passive, if without \(f_{s}\) and \(\gamma\). The general solutions to Eqs. 
(3a) and (3b) are \[\begin{bmatrix}q_{i}(0)\\ p_{s}(L)\end{bmatrix} =sec(\beta L-\epsilon)\begin{bmatrix}cos\,\epsilon&-sin(\beta L) \\ -sin(\beta L)&cos\,\epsilon\end{bmatrix}\begin{bmatrix}q_{i}(L)\\ p_{s}(0)\end{bmatrix}\] \[+sec(\beta L-\epsilon)\int_{0}^{L}dzP_{s}(z)\begin{bmatrix}-sin( \beta(L-2))\\ cos(gz-\epsilon)\end{bmatrix}, \tag{4a}\] \[\begin{bmatrix}p_{i}(0)\\ q_{s}(L)\end{bmatrix} =sec(\kappa L)\begin{bmatrix}e^{\nu L}&-sin(\kappa L)\\ -sin(\kappa L)&e^{-\nu L}\end{bmatrix}\begin{bmatrix}p_{i}(L)\\ q_{s}(0)\end{bmatrix}\] \[+sec(\kappa L)\int_{0}^{L}dzQ_{s}(z)\begin{bmatrix}-e^{\nu z}sin( \kappa(L-2))\\ e^{\nu(z-L)}cos(\kappa z)\end{bmatrix}, \tag{4b}\] with \(\epsilon=arctan(\gamma/\beta)\). From these solutions, indeed, \(\{q_{i}(0),p_{s}(L)\}\) but not \(\{p_{i}(0),q_{s}(L)\}\) carry on the PT-adjusted squeezing and anti-squeezing. _Homodyne detection.--_To disclose quadrature PT symmetry, one straightforward way is to analyze the noise behaviors across the phase transition by homodyne detecting quadrature variances in comparison with the ideal squeezed vacuum. This in turn encourages us to look at the following four variances: \(\big{\langle}\Delta q_{j}^{2}\big{\rangle}=\big{\langle}q_{j}^{2}\big{\rangle}- \big{\langle}q_{j}\big{\rangle}^{2}\) and \(\big{\langle}\Delta p_{j}^{2}\big{\rangle}=\big{\langle}p_{j}^{2}\big{\rangle}- \big{\langle}p_{j}\big{\rangle}^{2}\). Using Eqs. (4a) and (4b), after some lengthy algebra, one reaches \[\big{\langle}\Delta q_{i,0}^{2}\big{\rangle} =\frac{h(L)-2sin^{2}\epsilon-sec\,ecos(2\beta L-\epsilon)}{8cos^{2} (\beta L-\epsilon)}, \tag{5a}\] \[\big{\langle}\Delta p_{s,L}^{2}\big{\rangle} =\frac{h(L)\,cos\,\epsilon-cos(2\beta L+\epsilon)-2sin^{2} \epsilon\,cos(2\beta L-\epsilon)}{8\,cos\,ecos^{2}(\beta L-\epsilon)},\] (5b) \[\big{\langle}\Delta q_{s,L}^{2}\big{\rangle} =\frac{2+cos^{2}\rho e^{-2\nu L}-cos\,\varphi\,cos(2\kappa L+ \varphi)}{8\,cos^{2}(\kappa L)},\] (5c) \[\big{\langle}\Delta p_{i,0}^{2}\big{\rangle} =\frac{(2+cos^{2}\varphi)e^{2\nu L}-cos\,\varphi\,cos(2\kappa L- \varphi)}{8\,cos^{2}(\kappa L)}, \tag{5d}\] where \(h(L)=3+2\nu L\) and \(\varphi=\tan^{-1}(\gamma/\kappa)\). As expected, the PT-inherited variances (5a) and (5b) differentiate themselves from the rest two (5c) and (5d). A hallmark of such is the appearance of the argument \(\beta L\) in \(\big{\langle}\Delta q_{i,0}^{2}\big{\rangle}\) and \(\big{\langle}\Delta p_{s,L}^{2}\big{\rangle}\). To have an intuitive picture, we exemplify these variances in Figs. 2(a)-(d) for some typical \(\gamma/\kappa\). From Figs. 2(a) and (b), we observe a few extraordinary traits absent from all past studies on non-Hermitian physics as well as quantum squeezing. First, in the PT-phase intact region (\(\gamma<\kappa\)), different from the two-mode squeezed vacuum (TMSV) with an oscillation period of \(2\pi\), both \(\log_{4}\big{\langle}\Delta q_{i,0}^{2}\big{\rangle}\) and \(\log_{4}\big{\langle}\Delta p_{s,L}^{2}\big{\rangle}\) generally display increased classical fluctuations with a period \(T\approx 2\pi\kappa/\beta\), except that the former shows a little sub-vacuum-noise suppression at a very short distance range due to the insufficient competition between PT and squeezing. In contrast, both cease to oscillate in the phase broken regime (\(\gamma>\kappa\)) and are upper bounded by their respective variance curves at the EP. 
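These qualitative changes can be traced back to the drift matrices in Eqs. (3a) and (3b): for \(g=\gamma\), the matrix governing \(\{q_{i},p_{s}\}\) has eigenvalues \(\pm\sqrt{\gamma^{2}-\kappa^{2}}\), which switch from purely imaginary to purely real at the EP, whereas the matrix for \(\{p_{i},q_{s}\}\) keeps the complex eigenvalues \(-\gamma\pm i\kappa\) throughout. The short numerical check below is an illustrative sketch we add for convenience, not part of the measurement analysis.

```python
import numpy as np

# Eigenvalues of the drift matrices in Eqs. (3a) and (3b) for g = gamma,
# with rates normalized to kappa = 1.
kappa = 1.0
for ratio in (0.5, 1.0, 1.5):              # gamma/kappa below, at, above the EP
    gamma = ratio * kappa
    m_pt = np.array([[gamma, kappa],        # Eq. (3a): PT pair {q_i, p_s}
                     [-kappa, -gamma]])
    m_sq = np.array([[-gamma, kappa],       # Eq. (3b): pair {p_i, q_s}
                     [-kappa, -gamma]])
    print(f"gamma/kappa = {ratio}: "
          f"PT pair {np.round(np.linalg.eigvals(m_pt), 3)}, "
          f"non-PT pair {np.round(np.linalg.eigvals(m_sq), 3)}")
# Below the EP the PT-pair eigenvalues are +-i*beta with beta = sqrt(kappa^2 - gamma^2);
# above it they become the real pair +-sqrt(gamma^2 - kappa^2), while the
# non-PT pair stays at -gamma +- i*kappa throughout.
```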
Moreover, \(\log_{4}\big{\langle}\Delta p_{s,L}^{2}\big{\rangle}\) always grows monotonically above the vacuum-noise level; while \(\log_{4}\big{\langle}\Delta q_{i,0}^{2}\big{\rangle}\) invariably exhibits quantum squeezing, and the larger \(\gamma/\kappa\) the larger the squeezing and \(L\). What's more, the completely incompatible nature, quantum versus classical, of the same physical observable \(q_{i}(0)\) before and after the PT phase transition renders our system a unique candidate to study the transition between these two different worlds, whose boundary is physically defined by the EP curve. Contrarily, since \(\{p_{i}(0),q_{s}(L)\}\) are decoupled from \(\{q_{i}(0),p_{s}(L)\}\), their variances fluctuate periodically, akin to the TMSV case. As shown in Figs. 2(c) and (d), though notably affected by \(Q_{s}\) and \(\gamma\), \(\log_{4}\bigl{(}\Delta q_{s,L}^{2}\bigr{)}\) resembles the regular quadrature squeezing, but not \(\log_{4}\bigl{(}\Delta p_{i,0}^{2}\bigr{)}\). From the above analysis, we learned that for the same single-mode quadrature, PT results in a nontrivial fundamental transition from quantum to classical when the non-Hermitian parameter \(\gamma\) oversteps a threshold. One may wonder whether this exotic phenomenon can also take place in a two-mode quadrature measurement. The answer is affirmative. To see how this works, we pay attention to \(d_{1}=[q_{i}(0)+q_{s}(L)]/\sqrt{2}\) and \(d_{2}=[p_{i}(0)+p_{s}(L)]/\sqrt{2}\), which satisfy \([d_{1},d_{2}]=i/2\). For the vacuum input, it is easy to check that their variances are simply the sum of the single-mode ones (5a)-(5d), \[\langle\Delta d_{1}^{2}\rangle=\frac{\bigl{\langle}\Delta q_{i,0}^{2}\bigr{\rangle} +\bigl{\langle}\Delta q_{s,L}^{2}\bigr{\rangle}}{2},\langle\Delta d_{2}^{2} \rangle=\frac{\bigl{\langle}\Delta p_{i,0}^{2}\bigr{\rangle}+\bigl{\langle} \Delta p_{s,L}^{2}\bigr{\rangle}}{2}. \tag{6}\] Based on Figs. 2(b) and (d), \(\langle\Delta d_{2}^{2}\rangle\) is expected to be distributed above the vacuum noise all the time. Moreover, because \(\bigl{\langle}\Delta p_{s,L}^{2}\bigr{\rangle}\) and \(\bigl{\langle}\Delta p_{i,0}^{2}\bigr{\rangle}\) have different fluctuation periods before the phase transition, we envision that \(\langle\Delta d_{2}^{2}\rangle\) will exhibit interleaved dual periodic oscillations but reduce to a single period after the phase breaking. Though the situation becomes somewhat subtle for \(\langle\Delta d_{1}^{2}\rangle\), its layout can be deduced similarly by compromising Figs. 2(a) and (c). To be specific, in the PT-phase unbroken region, it is a double-cycle growth fluctuation staggered on top of the vacuum noise (except the very short distance case). When PT symmetry spontaneously breaks down, counterintuitively, the single-period oscillating \(\langle\Delta d_{1}^{2}\rangle\) will always return certain squeezing at some effective distances, and these distances will be extended for a bigger \(\gamma/\kappa\). Same as \(q_{i}(0)\), \(d_{1}\) can serve as another physical probe to visualize the quantum-to-classical transition induced by quadrature PT symmetry, too, with the boundary defined by the EP curve. All these statements excellently agree with our numerical simulations given in Figs. 3(a) and (b). _RISM.--_Other than homodyne detection, there is one additional means to explore quadrature PT, the so-called relative intensity squeezing measurement (RISM). 
Traditionally, this method enables the shot-noise of one beam to be measured and subtracted from the other so as to attain lower-noise differential measurement of a signal of interest. To this end, we begin with our own relative-intensity operator, \(N_{i,0}-N_{s,L}=a_{i}^{\dagger}(0)a_{i}(0)-a_{s}^{\dagger}(L)a_{s}(L)\). The degree of squeezing is then characterized by the noise figure (NF), which is determined by the relative-intensity variance. Mathematically, it takes the form [38] Figure 3: PT-symmetric \(\log_{4}\bigl{(}\Delta d_{1}^{2}\bigr{)}\) (a) and \(\log_{4}\bigl{(}\Delta d_{2}^{2}\bigr{)}\) (b) with account of quantum noise. Again, as the references, the black solid and dashed curves are respectively the ideal TMSV (\(\gamma/\kappa=0\)) and vacuum noise. \[\text{NF}=\frac{\text{Var}[N_{i,0}-N_{s,L}]}{\langle N_{i}(0)\rangle+\langle N_{s} (L)\rangle}. \tag{7}\] Here, the average photon numbers are computed by plugging Eqs. (4a) and (4b) to \(\langle N_{i,0}\rangle=\langle q_{i}^{2}(0)\rangle+\langle p_{i}^{2}(0)\rangle- 1/2\) and \(\langle N_{s,L}\rangle=\langle q_{s}^{2}(L)\rangle+\langle p_{s}^{2}(L)\rangle -1/2\). In stark contrast to the quadrature variances discussed earlier, the NF, while bringing about some alike characteristics, clearly reveals some quite opposite peculiarities. For \(\gamma/\kappa\geq 1\), as demonstrated in Fig. 4(a), in addition to the incremental single-period fluctuation \(\log_{10}(\text{NF}_{\geq 0}+1)\) grows along with the increment of \(2\kappa L\), and the larger \(\gamma/\kappa\) is, the noisier it is. From the plot, it is not difficult to conclude that in the PT-phase broken region, NF is essentially occupied by the noise anti-squeezing. However, NF behaves highly complex as \(\gamma/\kappa<1\). Although it is still an interleaved double-period oscillation within this range, the EP curve is no longer the partition to separate the classical and quantum fluctuations. In line with the numerical simulations, we find that quantum squeezing materializes when \(\gamma/\kappa<0.52\). Some representative examples are depicted in Fig. 4(b) by plotting \(-\log_{10}(\text{NF}_{<0}+1)\) for different \(\gamma/\kappa\). Their comparison suggests that the smaller the value of \(\gamma/\kappa\), the more pronounced the achievable squeezing over a longer distance \(2\kappa L\). As a matter of fact, the RISM obviously supplies certain sharp signatures unreachable to the homodyne detection, regardless of the two highly unbalanced channels. Before proceeding, a few remarks are ready here. First, even in the presence of Langevin noise, utilizing PSA instead of PIA is practicable to accomplish quantum optical PT under fair sampling measurement. Second, contrary to PIA, PSA arouses the unusual quadrature PT and licenses the singular quantum-to-classical transition accompanied by the PT phase transition. Last but not least, quadrature PT sheds new light on protecting continuous-variable (CV) qubits from decoherence in inevitable lossy transmission, a long-standing conundrum for various CV-based quantum technologies [39]. _Quantum sensing._--Being a discipline of practical application, quantum sensing [40, 41, 42] exploits quantum properties, effects, or systems to fulfill high-resolution and super-sensitive measurements of physical parameters over the similar measurements performed within a classical framework. For this, quantum squeezing has long been recognized as one of the indispensable nonclassical resources for ultra-precision estimations. 
Among them, one far-reaching example is its recent adoption by the Laser Interferometer Gravitational-Wave Observatory (LIGO) for gravitational wave detection. Nevertheless, the inevitable propagation loss often degrades the available squeezing and compromises the promised sensitivity. We note that in recent non-Hermitian studies, the abrupt change near EP has been capitalized for enhanced sensing in classical settings [43, 44, 45, 46, 47]. Yet, its extension to the quantum level turns out to be problematic because of quantum noise [48]. To avoid such noise, one usually resorts to either ideal anti-PT systems or post-selection measurement [25, 34, 35]. Unlike these studies, here we directly confront Langevin noise and explore the opportunity of quadrature PT in quantum sensing under fair sampling measurement. We are particularly interested to know whether the system could have any advantage in improving sensitivity. As shown below, the PT-quadrature observables can yield the best performance of classical sensing before the phase transition but departing far from the EP; while the non-PT-quadrature observables are capable of optimal quantum sensing by noise-mediated squeezing for \(\gamma/\kappa\) less than 1. This distinguishes our work from the previous anti-PT-, squeezing-, or EP-based proposals. Our analysis is carried out by estimating \(\kappa\) (or \(\gamma\)) and comparing the achievable precision with the quantum Cramer-Rao bound (set by the quantum Fisher information of the Figure 4: (a) PT-regulated noise figure (NF) via relative intensity squeezing measurement. (b) Representative examples of quantum PT-symmetric NF. quantum state). Suppose that the two bosonic modes are initially prepared in a coherent state \(|\psi_{i}\rangle=|\alpha_{1},\alpha_{2}\rangle\). An optimal homodyne measurement is then implemented on, say, \(q_{i}(0)\)after an evolution distance \(L\). Using Eq. (4a), one can readily have its mean value and variance, \[\langle q_{i}(0)\rangle = \frac{\cos e\left(q_{i}(L)\right)-sin(\beta L)\langle p_{\rm S}(0 )\rangle}{cos(\beta L-\epsilon)}, \tag{8}\] \[\left\langle\Delta q_{i,0}^{2}\right\rangle = \frac{3+w|2\gamma L-\text{tane}\sin(2\beta L)|-\cos(2\beta L)-2 \sin^{2}\epsilon}{8\cos^{2}(\beta L-\epsilon)}, \tag{9}\] where \(w=2n_{th}+1\) and \(n_{th}\) is the average thermal boson number. From Eqs. (8) and (9), it is clear that the Langevin noise shifts the peaks of \(\left\langle\Delta q_{i,0}^{2}\right\rangle\) away from the troughs of \(\left\langle q_{i}(0)\right\rangle\), so that they do not coincide at all. To ease the subsequent derivations, hereafter we assume \(\alpha_{i}=i\alpha_{s}^{*}=\sqrt{2}\alpha e^{i\pi/4}\). The estimation precision relies on measuring the change of \(\langle q_{i}(0)\rangle\) due to a tiny perturbation \(\delta\kappa\) on a preset \(\kappa\). This alternatively suggests to examine the system response to \(\left\langle q_{i}(0)\right\rangle\) for a small variation \(\delta\kappa\) around \(\kappa\). We thus define the susceptibility to capture such a response, \[\chi_{\kappa}^{q_{i}(0)}\equiv\frac{\partial\langle q_{i}(0) \rangle}{\partial\kappa}=\] \[\frac{\alpha\{2\beta L[\sin(\beta L)-1]+\sin\epsilon[2\sin(\beta L )+\cos(\beta L-\epsilon)-\cos\epsilon]\}}{2\beta\cos^{2}(\beta L-\epsilon)}. 
\tag{10}\] When \(\kappa\rightarrow\gamma\) or \(\beta\to 0\), \(\chi_{\kappa}^{q_{i}(0)}\rightarrow\alpha L(3+\kappa^{2}L^{2})/\)\([3(1+\kappa L)^{2}]\) is a curvetureless constant, implying the loss of the sensing ability in the EP vicinity for the chosen observable. On the other hand, the \(\kappa\)-estimation is jointly determined by the variance and susceptibility, \(\Delta_{\kappa,q_{i,0}}^{2}=\left\langle\Delta q_{i,0}^{2}\right\rangle/ \left[\chi_{\kappa}^{q_{i}(0)}\right]^{2}\), whose inverse dictates the accuracy, \[\Delta_{\kappa,q_{i,0}}^{-2}=\] \[\frac{2\alpha^{2}[2\beta L[\sin(\beta L)-1]+\sin\epsilon[2\sin( \beta L)+\cos(\beta L-\epsilon)-\cos\epsilon]]^{2}}{\beta^{2}\cos^{2}(\beta L -\epsilon)(3+w|2\gamma L-\text{tane}\sin(2\beta L)]-\cos(2\beta L)-2\sin^{2} \epsilon)}. \tag{11}\] The sensing fulfillment is to compare \(\Delta_{\kappa,q_{i,0}}^{-2}\),0with the quantum Fisher information, \(F_{\kappa}\), which sets the ultimate precision, i.e., the lower quantum Cram' er-Rao bound for any optimal measurement, \(F_{\kappa}\geq\Delta_{\kappa,q_{i,0}}^{-2}\). Fscan be accordingly derived from \[F_{\kappa} = 4L^{2}\left\langle\left\langle\psi_{f}\middle|\partial_{\kappa}H ^{\dagger}\partial_{\kappa}H\middle|\psi_{f}\right\rangle-\right. \tag{12}\] \[\left.\left\langle\psi_{f}\middle|\partial_{\kappa}H^{\dagger} \middle|\psi_{f}\right\rangle\langle\psi_{f}\middle|\partial_{\kappa}H\middle| \psi_{f}\right\rangle\right\rangle.\] for the final state \(\left|\psi_{f}\right\rangle\) of the system in the Schrodinger representation. As detailed in Supplementary Information, one can perform similar sensing evaluations to the rest three quadratures. Basing on the calculations shown in Figs. 5(a) and (b), we note that in the current arrangement, \(q_{i}(0)\) and \(p_{\rm S}(L)\) can only permit optimal classical sensing for a moderate medium length in the phase-unbroken regime but away from the EP, as revealed by the ratios of \(\Delta_{\kappa,q_{i,0}}^{-2}/F_{\kappa}\) and \(\Delta_{\kappa,p_{\rm S},L}^{-2}/F_{\kappa}\). On the contrary, \(q_{\rm S}(L)\) and \(p_{\rm I}(0)\) behave distinctively in the sense that, despite the impacts of the photon loss and Langevin noise, they are still able to offer optimal quantum sensing over a relatively longer distance for smaller \(\gamma/\kappa\), as sketched in Figs. 5(c) and (d). In short, fundamentally different from all previous research, the PSA-induced quadrature PT enables a unique way to observe a quantum-to-classical transition for a physical observable at the breakdown of symmetry. Such quadrature PT radically reshapes the dynamics of two-mode squeezing with a striking phase transition never seen before. Unfortunately, optical loss and Langevin noise hinder the PT quadrature pair to confer a quantum advantage in improving sensitivity, although the non-PT pair can still offer optimal quantum sensing. Besides of nonlinear wave mixing, our model can be also achieved in other platforms such as superconducting circuits. Most importantly, our work forges a new avenue to explore the long-sought, nontrivial quantum-to-classical transition utilizing non-Hermitian physics. **Acknowledgements.--**This work was supported by NSF 1806519 and NSF EFMA-1741693. X.J. acknowledges the support by the National Key R&D Program of China (2021YF A1400803). D.L. was supported by the Nature Science Foundation of Guangdong Province (2019A1515011401). **AUTHOR CONTRIBUTIONS.--**J.W. 
In short, fundamentally different from all previous research, the PSA-induced quadrature PT enables a unique way to observe a quantum-to-classical transition for a physical observable at the breakdown of symmetry. Such a quadrature PT radically reshapes the dynamics of two-mode squeezing with a striking phase transition never seen before. Unfortunately, optical loss and Langevin noise prevent the PT quadrature pair from conferring a quantum advantage in improving sensitivity, although the non-PT pair can still offer optimal quantum sensing. Besides nonlinear wave mixing, our model can also be realized in other platforms such as superconducting circuits. Most importantly, our work forges a new avenue to explore the long-sought, nontrivial quantum-to-classical transition by utilizing non-Hermitian physics.

**Acknowledgements.--**This work was supported by NSF 1806519 and NSF EFMA-1741693. X.J. acknowledges the support of the National Key R&D Program of China (2021YFA1400803). D.L. was supported by the Natural Science Foundation of Guangdong Province (2019A1515011401).

**AUTHOR CONTRIBUTIONS.--**J.W. conceived the theoretical scheme and supervised the whole project with the help of D.L. and X.J. W.W., supervised by J.W., carried out all calculations with the assistance of Y.Z. and S.V.G. All authors contributed to the discussions and the writing of the manuscript.

* [email protected] \(\dagger\) [email protected] \(\ddagger\) [email protected]
2308.06513
A Study of MEV Extraction Techniques on a First-Come-First-Served Blockchain
Maximal Extractable Value (MEV) has become a significant incentive on blockchain networks, referring to the value captured through the manipulation of transaction execution order and strategic issuance of profit-generation transactions. We argue that transaction ordering techniques used for MEV extraction in blockchains where fees can influence the execution order do not directly apply to blockchains where the order is determined based on transactions' arrival times. Such blockchains' First-Come-First-Served (FCFS) nature can yield different optimization strategies for entities seeking MEV, known as searchers, requiring further study. This paper explores the applicability of MEV extraction techniques observed on Ethereum, a fee-based blockchain, to Algorand, an FCFS blockchain. Our results show the prevalence of arbitrage MEV getting extracted through backruns on pending transactions in the network, uniformly distributed to block positions. However, on-chain data do not reveal latency optimizations between specific MEV searchers and Algorand block proposers. We also study network clogging attacks and argue how searchers can exploit them as a viable ordering technique for MEV extraction in FCFS networks.
Burak Öz, Filip Rezabek, Jonas Gebele, Felix Hoops, Florian Matthes
2023-08-12T09:37:11Z
http://arxiv.org/abs/2308.06513v3
# A First Study of MFV on an Up-and-Coming Blockchain: Algorand ###### Abstract. Maximal Extractable Value (MEV) significantly influences network incentives, consensus safety, and economic dynamics, and has been extensively studied within the Ethereum blockchain domain. However, MEV is not specific to Ethereum, and extends to other blockchain platforms with differing properties, such as Algorand. Algorand, a smart-contract-based blockchain employing a Byzantine-Fault Tolerant consensus mechanism and Pure-Proof-of-Stake, is characterized by a First-Come-First-Serve transaction ordering mechanism and minimal fixed transaction fees. This paper provides the first exploration of the MEV landscape on Algorand, focusing on arbitrage MEV patterns, key actors, their strategic preferences, transaction positioning strategies, and the influence of Algorand's network infrastructure on MEV searching. We observed 1 142 970 arbitrage cases, with a single searcher executing 653 001. Different searchers demonstrated diverse strategies, reflected in the varied distribution of profitable block positions. Nonetheless, the even spread of arbitrage positions across a block indicates an emphasis on immediate backrunning executions. Furthermore, we identified 265 637 instances of Batch Transaction Issuances, where an address occupied over 80 % of a block with a singular transaction type. blockchain, maximal extractable value, distributed systems, incentives + Footnote †: journal: Computer Science ## 1. Introduction The dynamics of Maximal Extractable Value (MEV) is dictated by properties of the underlying domain, such as consensus and transaction ordering mechanisms (Dian et al., 2017). While Daian et al.'s study (Daian et al., 2017) demonstrated how spots compete in Priority Gas Auctions (PGANs) through escalating gas fees, not all blockchains facilitate transaction prioritization through fees. In Algorand, nodes implement a First-Come-First-Serve (FCFS)-based transaction ordering by default. Only when block space demand causes congestion, fee-based prioritization is used. This divergence introduces unique dynamics in the MEV extraction game that influence MEV searchers' strategies, necessitating further exploration. To comprehend the MEV dynamics, we first need to examine existing extraction activities. Though extensive MEV quantification has occurred on Ethereum through both research (Grover et al., 2017; Grover et al., 2017) and services like MEV Explore (Grover et al., 2017) and EigenPhi (Dai et al., 2017), studies on blockchains employing non-fee-based ordering mechanisms remain rare, aside from (Daian et al., 2017). We thus focus on arbitrages, as Algorand's FCFS-based ordering mechanism does not naturally permit frontrunning-requiring strategies like sandwiching. Additionally, we uncover a potential novel searcher strategy via Batch Transaction Issuance (BTI) events. This strategy, by congesting block space, can potentially shift transaction ordering mechanics to a fee-based system, enabling frontrunning strategies. This paper aims to offer an exhaustive descriptive analysis of arbitrages and an in-depth examination of the MEV searchers who execute them. We also investigate strategies related to transaction positioning and latency relationships with proposers, alongside BTIs, to gain insights into how Algorand's FCFS-based transaction ordering mechanism and network infrastructure could impact the MEV extraction landscape. 
ContributionsOur contributions can be summarized as follows: * We conducted the first study of MEV within the Algorand blockchain domain. * We detected 1 142 970 arbitrages, with 653 001 executed by a single searcher. * We discovered that different searchers adopt varied strategies, leading to diverse profitable block positions. However, an even distribution of arbitrage positions beyond a block's head suggests that searchers prioritize immediate backrunning opportunities. * We identified 265 637 instances of BTIs, where an address filled more than 80 % of a block with a single type of transaction. ## 2. Background We focus on the Algorand blockchain (Dian et al., 2017; Dian et al., 2017; Dian et al., 2017), which was introduced by Silvio Micali in 2017. It uses a new consensus mechanism called Algorand Byzantine Fault Tolerance Protocol (BA), which offers instant finality, scalability (in the number of nodes), and avoids soft forks (Dian et al., 2017; Dian et al., 2017). Algorand relies on the Pure Proof-of-Stake (PPoS) for the Sybil resistance, allowing for anyone with at least one ALG (the native token of Algorand) to join the consensus. Unlike other blockchains such as Ethereum, Algorand does not reward the consensus participants. When it comes to the high-level specifications, the system can handle around 7000 transaction/s and publishes blocks every 3.4 s following the v3.16.0 upgrade1. The size of the network comprises roughly 1400 nodes (relay and participation nodes)2. The participation nodes are interconnected via the relays and, by default, each is connected to at least four relay nodes for redundancy purposes. Footnote 2: [https://metrics.algorand.org/](https://metrics.algorand.org/) Algorand achieves scalability with the number of participants (i.e., nodes) in the system by selecting quorum subcommittees from the total number of active participation nodes. One BA round consists of three steps - a block proposer selection, soft vote, and certify vote, after which a block is appended to the ledger. For each step, a new quorum is selected at the beginning of it. The proposer selection step plays a significant role for MEV, as the selected proposer's transaction order determines what value will get extracted. Since the order of transactions in a block is determined on an FCFS-basis, for optimized MEV extraction, having a fast connection to the relays can potentially help as they distribute the transactions to the memory pools of block proposers. Based on the current specifications, there are a maximum of 20 proposers involved in step one, while only a single proposer is selected for the further steps. This proposer must receive at least the threshold number of votes expected in the given step and have the lowest value of the computed Verifiable Random Function (VRF). Algorand uses three main types of transactions - payments (for native token transfers), Algorand Standard Asset (ASA) transfers (other token types), and Algorand Smart Contract (ASC1) application calls. Unlike Ethereum, assets are not managed through smart contracts but by ASA transactions. ASC1, known as applications, on the other hand, deploy functions on the layer-1 and are written in Transaction Execution Approval Language (TEAL) and interpreted in the Algorand Virtual Machine (AVM). Each application has a unique ID once deployed to the chain. Depending on the complexity of the application, a transaction might be split into up to 256 inner transactions depending on the opcode budget. 
The whole bundle is called a group transaction with its own ID and all transactions must be present on a node for successful execution of the application logic. The transaction cost is fixed at a minimum value of 0.001 ALGO per transaction and only gets charged when a transaction is successful. When the network is congested, the fee strategy changes to a dynamic cost model per byte. ## 3. Related Work Daian et al. (Daian et al., 2017) started the discourse on MEV with their publication outlining what were then theoretical strategies to extract value from the Ethereum blockchain. Since then, MEV has arrived on production blockchain networks. Qin et al. (Qin et al., 2018) made a significant contribution by quantifying MEV extraction on Ethereum and providing a taxonomy of different transaction ordering manipulations, extending (Dai et al., 2018). Their analysis focuses on sandwich attacks, Decentralized Exchange (DEX)-to-DEX arbitrages, liquidations, and replay attacks, processing a total of approximately 6 million blocks. Interestingly, they found that profitable MEV extractions tend to be located towards the end of a block, suggesting backruns. This finding inspired us to also investigate the positioning of arbitrages for our study on Algorand. The rise of the Flashbots group declaring it their mission to solve the issues MEV created for the Ethereum ecosystem is examined by Weintraub et al. (Weintraub et al., 2019). While Ethereum was still using Proof-of-Work (PoW), Flashbots offerred a private relay service allowing MEV searchers to submit bids on a certain transaction ordering, called a bundle. Any miner could incorporate a bundle into their block in return for a cut of the profits. The system quickly reached almost 100 % adoption measured in mining hash rate. Weintraub et al. collected data not just from Ethereum blocks but also snapshots of pending transactions and Flashbots' block metadata from their official API. Their findings indicate that MEV on Ethereum is a massive industry dominated by Flashbots. Furthermore, the distribution of extracted MEV is heavily favoring the miners, contrasting Flashbots' declared goal of democratizing MEV. While Algorand has no private relay services like Flashbots, it shows some similarities, as there are native bundles in Algorand called group transactions. One recent paper by Carillo and Hu (Carillo and Hu, 2018) has worked on quantifying MEV on Terra Classic, a blockchain with a fixed gas price where MEV searchers compete on optimizing latency. In a dataset of almost 3 million blocks, they identified a significant number of arbitrages, of which half are conducted with less than 1000 USD. In contrast to other works, Carillo et al. also spends time identifying specific searchers behind accounts and benchmarking their performance. When diving deeper into arbitrage transaction specifics, they conclude that each searcher sends several failed transactions for every successful one, stressing the network in the process. Finally, they show a relation between transaction propagation latency and geographic node location, demonstrating how latency optimizations can be useful for searcher strategies. This work is perhaps the one most closely related to ours as Terra Classic and Algorand both share a fixed price, FCFS-based transaction ordering, suggesting they influence MEV extraction similarly. ## 4. Data Collection and Processing We developed a specialized pipeline for collecting relevant on-chain data to conduct our study. 
We focused on blocks and transactions, which we gathered using the Algorand indexer, and obtained block proposer details through our Algorand client node. The data collection took place over three days, facilitated by our indexer deployment and the Algonode service provider3 to fulfill throughput needs. During this time, we collected data from block 16 500 000 (on Tue, 28 Sep 2021 at 12:55:52) to 30 235 000 (on Mon, 03 Jul 2023 at 21:22:22). The start date coincides with the dawn of the first Decentralized Finance (DeFi) activities on Algorand, signaled by the emergence of the Tinyman V14 DEX on block 16 518 736. Overall, our examination spanned 13 735 000 blocks, which included 4557 empty blocks (0.033 % of total) and incorporated a total of 745 767 520 transactions. At the median, each block contained 40 transactions, with block 23 593 602 containing the maximum of 26 197 transactions. Footnote 3: [https://algonode.io/](https://algonode.io/) ### Detecting Arbitrages Our study aims to quantify atomic arbitrage trades on the Algorand blockchain. Algorand's ability to execute groups of transactions atomically - where all transactions in a group either succeed or fail - underpins our approach. We started by assembling transactions within the same block into groups based on their group ID. We then processed each group and individual transaction separately, viewing grouped transactions as internal calls of a single transaction. However, we factored transaction fees individually, aggregating them at the end. Initially, we processed transactions based on their type field. Our primary focus was on the _pay_ type for simple ALGO transfers, _axfer_ for ASA token transfers, and _appl_ for Algorand smart contract calls, which we investigated further for their inner transactions. After creating swap objects from the processed transactions, we used a heuristic approach, similar to (Han et al., 2017), to detect potential arbitrages. We employed the following heuristics, given transaction \(t\) comprising \(n\) swaps \(s^{1},...,s^{n}\): 1. Transaction \(t\) includes multiple swaps (\(n\geq 2\)). 2. The tokens involved in the swaps form a cycle, such that the input token of \(s^{i}\) is the output of \(s^{i-1}\). This requires the input token of the first swap to match the output of the last swap. 3. The input amount of \(s^{i}\) should be less than or equal to the output of \(s^{i-1}\). This means the input amount of the first swap should be less than or equal to the output of the last swap, suggesting a profitable arbitrage.
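A minimal sketch of this cycle-and-profit check is given below; the `Swap` container and its field names are assumptions made for illustration, not the data model of our actual pipeline.

```python
# Hypothetical sketch of the arbitrage heuristics in Section 4.1 (field names assumed).
from dataclasses import dataclass
from typing import List

@dataclass
class Swap:
    token_in: str      # asset paid into the pool
    token_out: str     # asset received from the pool
    amount_in: int     # micro-units paid in
    amount_out: int    # micro-units received

def is_atomic_arbitrage(swaps: List[Swap]) -> bool:
    """Return True if the swaps of one (group) transaction satisfy the three heuristics."""
    # 1. The transaction contains multiple swaps.
    if len(swaps) < 2:
        return False
    # 2. The tokens form a cycle: the input of swap i equals the output of swap i-1,
    #    and the input token of the first swap equals the output token of the last swap.
    for prev, cur in zip(swaps, swaps[1:]):
        if cur.token_in != prev.token_out:
            return False
    if swaps[0].token_in != swaps[-1].token_out:
        return False
    # 3. Each swap spends no more than the previous swap produced, so the cycle closes
    #    with a non-negative balance in the start token.
    for prev, cur in zip(swaps, swaps[1:]):
        if cur.amount_in > prev.amount_out:
            return False
    return swaps[0].amount_in <= swaps[-1].amount_out
```

Under these assumed field names, the profit of a detected arbitrage would simply be the last output amount minus the first input amount, denominated in the start token, before subtracting the aggregated transaction fees.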
### Detecting Batch Transaction Issuance We suspect Algorand's low transaction fees may make it susceptible to suppression attacks (Gilman et al., 2017; Han et al., 2017). However, using an FCFS-based transaction ordering mechanism complicates such attacks. Unlike Ethereum, where inclusion can be influenced by fees, on Algorand, attackers can only orchestrate suppression attacks by issuing transaction batches, which accumulate in the proposers' mempools and execute simultaneously or in quick succession based on their arrival time. We have identified such actions as Batch Transaction Issuance (BTI) and defined heuristics to detect them without assuming any specific intent. Given that BTIs cannot be enforced by fees and their execution timing relies on network latency, BTI instances might not occur in consecutive blocks. Thus, unlike the work of Qin et al. on Ethereum (Qin et al., 2017), we initially set no duration constraints for our BTI detection heuristics. Moreover, our focus was on blocks filled by a single type of transaction from the same sender, with no constraints on maximum block space consumption. To limit unintentionally occurring BTIs, we only investigated blocks larger than the median size. Hence, given a block \(b\), we applied the following heuristics: 1. \(\mathrm{len}(b)>40\) (the median block size). 2. The same transaction or group pattern from the same sender(s) makes up \(\geq\) 80 % of \(b\). ### Validation and Limitations As the first study of its kind on the Algorand blockchain, we lack comparative results to validate our findings. Consequently, our heuristics are designed with a focus on minimizing False Positives (FPs), while being aware that we may overlook certain False Negatives (FNs), such as non-atomic arbitrages. Before the introduction and widespread adoption of inner transactions, which enabled calls to DEXs from an application in a single transaction, arbitrages were exclusively conducted using grouped transactions. During this phase, we identified non-atomic arbitrages happening through multiple groups, where each group represents a single DEX swap5. However, such arbitrages are prone to the risk of not getting executed in the desired order due to latency effects. We leave the identification of these multi-group arbitrages for future work. Footnote 5: Multi-group arbitrage profiting approximately 22 ALGO: Swap-1 -> Swap-2 -> Swap-3 ## 5. Analysis This section presents our comprehensive investigation of arbitrages, MEV searcher strategies under the influence of Algorand's transaction ordering mechanics and network effects, and BTIs on the Algorand blockchain, based on our measurement studies. ### Overview Our heuristics detected approximately 1 142 970 exploited arbitrages across 401 679 blocks, making up 2.92 % of all analyzed blocks. The earliest arbitrage was noted in block 19 293 106 (on Thu, 17 Feb 2022, at 05:56:20), with MEV searchers collectively earning over 251 650.15 USD thus far. This amount, however, is a lower-bound estimate, derived using heuristics and only accounting for arbitrages profiting in ALGO (or ALGO-pegged tokens) and stablecoins. For the calculation of profits in USD, we utilized the daily price data provided by the CoinGecko API (Cornell et al., 2017), with stablecoins assigned a fixed 1 USD price. The remaining tokens' arbitrages were negligible, making up only 0.38 % of all cases, and were not considered for this study. The most lucrative arbitrage was found in block 25 712 503, granting the searcher a 2738.57 USD profit. When we analyzed the distribution of arbitrages across blocks, we discovered that 297 479 blocks (74.05 %) included 1-2 arbitrages, while only 4.41 % of blocks contained more than ten arbitrage trades. This can be attributed to Algorand's quick block time and the relatively low median transaction count per block. Hence, opportunities are not abundant within any single block, but they occur frequently across blocks. A noteworthy observation was that 525 blocks (0.14 %) had over 50 arbitrages (peaking at 613), with 384 of these occurring in June 2023 -- the month with the highest arbitrage count (349 835). Timeline analysis of the arbitrage distribution is detailed in Section 5.4. Further examination revealed 12 BTI blocks with 50+ arbitrages, where a single type of arbitrage executed by the same searcher accounted for \(\geq\)80 % of the transactions. Such instances are further detailed in Section 5.5.
In examining arbitrage attributes, we found that approximately 75 % of arbitrages involved \(\leq\)3 swaps and tokens, with the most complex instances involving up to nine swaps and eight tokens. Of the 26 unique profit tokens identified, ALGO was by far the most popular (used in \(\sim 97\%\) of cases), followed by the USDC stable-coin and AF-BANK-ALGO from the AlgoFi platform. Interestingly, while the top two pools utilized in arbitrages were ALGO/COOP and ALGO/PEPE, the associated COOP and PEPE tokens were not utilized as profit tokens. Detailed overviews of profit tokens, as well as popular pools and platforms, are respectively provided in Table 4 and Table 5 in Appendix A. ### Searchers In our analysis of MEV searchers conducting arbitrages, we initially recognized 45 unique addresses. However, further scrutiny revealed inter-relations between some addresses due to common funding sources, as shown in Table 6 in appendix A. The address MDC5 was the leading funder, backing 12 addresses. Notably, AACC, the address associated with the highest number of arbitrages, also financed the second most active address, J4BJ. Following the consolidation of searchers sharing the same funding source, we are left with 32 unique players. #### 5.2.1. Top Players To identify the most active and profitable MEV searchers, we compiled a list merging the top 10 searchers with the highest number of arbitrages and the most profitable ones. This led to a consolidated list of 12 top searchers due to an overlap of eight searchers. Table 1 highlights these top searchers, indicating their number of arbitrages, profits in USD and ALGO, and profit rate, which represent the median profit relative to the input across all arbitrages by the searcher. Currently, the searcher AACC overwhelmingly dominates the arbitrage extraction market, leading in both total arbitrages and profits, accounting for 57 % and 44 % respectively of all arbitrages and profits. Note that the number of arbitrages and profits do not scale linearly. However, this observation requires caution as we only convert profits made in ALGO-based tokens and stablecoins to USD. For example, searcher G4X2, despite ranking sixth in the number of arbitrages, falls short in profits, potentially due to a primary profit source in tokens we do not convert to USD. The maximum profit rate from a single arbitrage is an astounding 1 464 042 %, achieved by URKF, signaling the existence of highly profitable arbitrage opportunities, albeit rare, given the significant gap between the maximum and the 99th percentile (14.49 %). The median profit rate for arbitrageurs stands at approximately 0.47 %, with a mean around 11.21 %. A high standard deviation of roughly 2548.98 points to a large disparity in profit rates among top arbitrageurs, implying a broad range of profitability. Figure 1 displays the monthly distribution of arbitrages by top searchers. Most searchers show activity confined to specific periods, with only a few, such as AACC and HS2Y, demonstrating sustained activity. AACC almost consistently leads (except February 2022) when we observe the monthly distribution, with a notable increase in dominance during the last two months of our analysis, accounting for 69 % and 72 % of all arbitrages in those periods respectively. We attribute this rise to AACC adopting applications that execute atomic arbitrages in single transactions (see Figure 5 in Appendix A). Aside from them, URKF, HS2Y, and ODKH are the only searchers executing arbitrages through their applications. 
The apps they most frequently use are 1099380935, 1052848269, 1104000629, and 1002599007 respectively. Despite being consistently profitable, AACC was occasionally surpassed by searchers like ODKH, EVES, TEIC, and URKF, especially between January and April 2023 (see Figure 4 in Appendix A). ### Arbitrage Strategies To better comprehend how the strategies for arbitrage MEV extraction are formed concerning Algorand's FCFS-based transaction ordering mechanism and network infrastructure, we analyze the positioning of arbitrage transactions in the blocks and potential latency relations between searchers and block proposers. #### 5.3.1. Transaction Positioning This section analyzes the distribution of arbitrage transaction positions within a block. Instead of defining ranges, we calculated position octiles. We opted for octiles as a measure to achieve finer granularity than quartiles. If even more granularity is required, deciles could be chosen, but we believe octiles suffice to reflect the searcher strategy patterns we are interested in. As shown in Figure 2, the first octile (O1) contains the most arbitrages, with the remaining ones evenly distributed (median: 142 871.50 ; std: 8952.31). Such distribution indicates that while certain arbitrages only happen at the top of the block due to dependency on the last confirmed state, most are network-level backruns spread uniformly across octiles. Profits, however, peak in the final octile (O8). Based on prior studies (Han et al., 2017; Han et al., 2018) and our observations, the uniform spread of arbitrages on Algorand stems from the way searchers exploit backruns. Searchers actively monitor the mempool for large trades, such as those by W2IZ, a potential Centralized Exchange (CEX)-DEX arbitrageur, who we found to be involved in 29.8 % of all blocks containing arbitrages. These searchers aim to be positioned right after the arbitrage-triggering transaction, indifferent to their absolute position in block. With such transactions entering the mempool at random, the octile placement of following backruns is equally unpredictable. Neither the initiating party nor the backrunner can reliably influence their transaction's position due to Algorand's FCFS-based ordering and the absence of private relay \begin{table} \begin{tabular}{l r r r r} \hline \hline **Arber** & **\# Authorages** & **Profit (USD)** & **Profit (ALGO)** & **Profit Rate (\%)** \\ \hline AACC & 653,001 & 110,967.67 & 491,541.65 & 0.56 \\ URKF & 133,594 & 31,201.12 & 174,681.03 & 0.95 \\ T2Y3U & 80,396 & 14,391.43 & 14,155.33 & 0.07 \\ HS2Y & 70783 & 2,947.39 & 16,874.87 & 0.28 \\ TEIC & 57,222 & 253,504.40 & 128,010.05 & 0.52 \\ G4X2 & 38,516 & 861.02 & 4,673.21 & 1.99 \\ EATS & 36,500 & 10,484.67 & 21,691.85 & 0.09 \\ ODKH & 30,797 & 2,897.54 & 121,210.04 & 0.17 \\ EVES & 17,528 & 18,241.63 & 75,312.27 & 0.59 \\ HH6 & 12,411 & 2,756.26 & 3,688.96 & 0.24 \\ XEF & 10,055 & 5,953.95 & 18,523.89 & 0.16 \\ MDC & 222 & 3,374.57 & 8.19 & 0.37 \\ \hline \hline \end{tabular} \end{table} Table 1. Top MEV searchers on the Algorand Blockchain Figure 1. Monthly distribution of arbitrages executed by top MEV searchers from February 2022 to June 2023. Each searcher is depicted with a unique color. services like the ones on Ethereum6. This results in a consistent distribution of backruns across octiles. Footnote 6: Flashbots. 
[https://docs.flashbots.net/flashbots-auction/overview](https://docs.flashbots.net/flashbots-auction/overview) In our quest to detect individual searcher strategies, we examine the arbitrage positioning of each searcher. Table 2 displays the distribution of arbitrages across predefined octiles for each top searcher, highlighting the two octiles having the most number of arbitrages and profits. Additionally, we compute a correlation coefficient (\(\rho\)) to identify any potential link between the placement of arbitrages within the block (octiles) and their profitability. Our analysis reveals varying strategies. Several searchers, like EVES, TEIC, and ZHB6, increase their profits by occupying lower octiles, while only URKF statistically significantly enhances profits from higher octile positions. Interestingly, URKF has only 0.6 % of their arbitrages at block's first position (P1), in contrast to ODKH, with the most significant proportion of P1 arbitrages, comprising about \(\sim\) 14 % of their total. For searchers with no significant correlation between octiles and profits, their profitable arbitrages are either evenly distributed, like AACC, or they profit predominantly from extreme positions but not middle ones, like TZSU. These findings confirm our initial assumptions that while some searchers like URKF and ODKH potentially profit from last confirmed state arbitrages at top of the blocks, most exploit network state arbitrages through backruns. #### 5.3.2. Latency Games While various arbitrage MEV extraction strategies were noted in our study, the critical point of all strategies lies in ensuring prompt transaction delivery to Algorand block proposers for desired positioning. However, achieving deterministic latency optimization is challenging due to the probabilistic selection of the next round's proposer (like the hash puzzle process in PoW chains) and the intricacies of the relay network. Consequently, we hypothesize that searchers may operate multiple nodes to reduce latency with high-stakers, issuing duplicate arbitrage transactions without risk due to the exclusion of failing transactions on-chain. Our study, however, is limited to on-chain data and lacks empirical latency data, which would necessitate a global network of nodes. Instead, we evaluated whether certain searchers performed significantly better with specific block proposers, which could indicate latency effects. Under the assumption of an equal playing field, i.e., identical geographical locations and no arbitrage withholding attempts, each proposer should converge to the same set of top searchers over time. However, if latency effects are present, some searchers might rank higher with certain proposers despite lower overall ranks. To investigate this, we analyzed the searcher and proposer activity over the last two months, which had the highest number of arbitrages. We identified each month's top five searchers for each proposer and compared them with the aggregated monthly list. The results showed a near-unanimous consensus among top proposers on top searchers, with a minor variation in May 2023 where a proposer switched a single searcher's ranking. This observation suggests that searchers might either utilize multiple nodes to issue duplicate transactions, thereby mitigating latency with different high-staker proposers, or the relay-based network infrastructure of Algorand may limit the latency optimization. 
To definitively determine whether latency games are in play, or feasible, empirical network data collection is required. ### Timeline Analysis The period of our arbitrage analysis from February 2022 until June 2023, outlined in Figure 3, is characterized by a steady upswing in the number of arbitrages, peaking with 351 394 arbitrages in June 2023. Despite exhibiting 59 % fewer arbitrages, March 2023 accounted for the highest USD profits at 43 131 USD. On a monthly average, approximately 27 220 arbitrages were observed, generating profits around 11 250 USD. In the beginning of our timeline, DEX platforms like AlgoFi and Tinyman already demonstrated notable Total Value Locked (TVL) and volume. This landscape expanded with the rise of Pact's TVL from mid-April 2022 and Humbleswap coming into significance by the end of June 20227, which fostered an increase in the number of arbitrages throughout Q2 2022. November 2022 stands out with the occurrence of the FIFA World Cup, where Algorand served as the official blockchain platform of FIFA8. The event ignited an increase in volume across all DEXs and a noticeable rise in arbitrages. Figure 3. Monthly distribution of number of arbitrages (orange plot), profits in USD (blue bars), and profits in AIGO (green bars) from February 2022 to June 2023. Figure 2. Distribution of arbitrages across position octiles. The green bars reflect the cumulative profit in USD from arbitrages in each octile, while the blue bars represent the total count of arbitrages per octile. In March 2023, the stablecoin USDC deviated from its peg. This incident, coinciding with the day of the highest arbitrage profits within the analyzed time period, notably influenced the statistics. During the Algorand governance period in April 2023, there was a sharp decrease of around 50 % in TVL across all platforms starting from March 31st, 2023. Despite the recovery within one week after the initial rewards were dispensed, a dip in volume across all DEXs was observable for April compared to the previous months. In early May, searcher AACC deployed their applications (1097349178 and 1099380935) to execute atomic arbitrages (see Figure 5 in Appendix A). This deployment can explain the observed surge in arbitrage activities during the latter part of May and June. The strategies employed by AACC may serve as a reference point for evaluating the tactics of future major searchers. ### Batch Transaction Issuance Our heuristics identified a total of 265 637 BPIs across the 13 735 000 blocks we analyzed, approximately one BTI every 50 blocks. We found that 75 % of BPIs accounted for between 80 % and 90 % of all transactions within their respective blocks, with 348 BPIs constituting all transactions in a block. Regarding the duration of the BPIs, we recorded 397, 289, 127, and 22 BTIs that lasted for 20-30, 30-50, 50-100, and over 100 blocks respectively. The longest BTI spanned 364 consecutive blocks, nearly 23 minutes, issued by ZW3L. As presented in Table 3, we classified notable BPIs based on their issuer and purpose. While most were issued to facilitate services like reward distribution, we also noted instances of token distribution via faucets, and even the logging of results from world chess tournaments9. Notably, 53 BPIs were instigated by searchers executing arbitrages throughout a block. Block number 28 328 225 exemplifies this, with 603 arbitrage transactions by searcher HSZY, accounting for approximately 95 % of the block's transactions. 
Footnote 9: [https://fidewerldchampionship.com/partners/](https://fidewerldchampionship.com/partners/) #### a.5.1. A Novel Searcher Strategy BITs occur frequently, yet their strategic use remains largely unexplored outside of searchers filling blocks with their arbitrage transactions. A potential novel application of BITs might be to congest the blockchain, forcing nodes to transition to fee-based transaction prioritization. Once this shift occurs, a searcher could monitor the mempool and implement strategies, such as sandwiching or replay attacks (Bartos et al., 2017), which require frontrunning. Based on the block size limit, we estimate that creating such congestion would require around 3000 _pay_ transactions at a cost of about 3 ALGO (approximately 0.3 USD). Upon issuing such a batch of transactions, an MEV searcher could leverage frontrunning-based MEV strategies to exploit the information exposed by other searchers who, assuming they are unaware of the incoming BTI, might issue transactions with a minimum fee to carry out their strategies. ## 6. Conclusion and Future Work This paper dissects the MEV landscape within the Algorand blockchain, characterized by an FCFS-based transaction ordering mechanism and minimal fixed fees. We explore arbitrages to identify prominent MEV searchers, understand their strategies through transaction positioning, and assess the impact of Algorand's network infrastructure and transaction ordering mechanism on MEV searching. Our study uncovers a dominant searcher, AACC, along with other searchers employing a diverse range of strategies concerning arbitrage positioning. We observe an even distribution of arbitrages across most block positions, suggesting immediate network state backruns upon spotting opportunities, while top of the block positions reflect higher activity due to arbitrages exploiting last confirmed state. Although we could not infer strong evidence about latency games played, we leave for future work empirically collecting network data and running tests to observe latency effects. 
Further, we research the prevalence and potential exploitation of BTIs, identifying their high occurrence, indicating extreme block space occupation by a single party. Our findings illuminate the impact of domain-specific properties of Algorand on MEV searching, presenting a different set of optimization dynamics for searchers compared to fee-based blockchains like Ethereum, and setting the stage for future research on latency games and refined searcher strategies.

Table 2. Distribution of arbitrages and profits across position octiles for each top MEV searcher.
2305.09804
A latent process model for monitoring progress towards hard-to-measure targets, with applications to mental health and online educational assessments
The recent shift to remote learning and work has aggravated long-standing problems, such as the problem of monitoring the mental health of individuals and the progress of students towards learning targets. We introduce a novel latent process model with a view to monitoring the progress of individuals towards a hard-to-measure target of interest, measured by a set of variables. The latent process model is based on the idea of embedding both individuals and variables measuring progress towards the target of interest in a shared metric space, interpreted as an interaction map that captures interactions between individuals and variables. The fact that individuals are embedded in the same metric space as the target helps assess the progress of individuals towards the target. We demonstrate, with the help of simulations and applications, that the latent process model enables a novel look at mental health and online educational assessments in disadvantaged subpopulations.
Minjeong Jeon, Michael Schweinberger
2023-05-16T20:58:51Z
http://arxiv.org/abs/2305.09804v2
A latent process model for monitoring progress towards hard-to-measure targets, with applications to mental health and online educational assessments ###### Abstract The recent shift to remote learning and work has aggravated long-standing problems, such as the problem of monitoring the mental health of individuals and the progress of students towards learning targets. We introduce a novel latent process model with a view to monitoring the progress of individuals towards a hard-to-measure target of interest, measured by a set of variables. The latent process model is based on the idea of embedding both individuals and variables measuring progress towards the target of interest in a shared metric space, interpreted as an interaction map that captures interactions between individuals and variables. The fact that individuals are embedded in the same metric space as the target helps assess the progress of individuals towards the target. We pursue a Bayesian approach and present simulation results along with applications to mental health and online educational assessments. + Footnote †: _Keywords and phrases_: latent space models, measurement models, item response models ## 1 Introduction The recent shift to remote learning and work has aggravated long-standing problems, such as the problem of monitoring the mental health of individuals (e.g., Daly et al., 2020; Holmes et al., 2020) and the progress of students towards learning targets (e.g., Engzell et al., 2021; Kuhfeld and et al., 2020; Bansak and Starr, 2021). We introduce a novel approach to monitoring the progress of individuals towards a hard-to-measure target of interest. Examples are measuring the progress of individuals with mental health problems or the progress of students towards learning targets. Both examples have in common that there is a target of interest (e.g., improving mental health or the understanding of mathematical concepts) and measuring progress towards the target is more challenging than measuring changes in physical quantities (e.g., temperature) or medical conditions (e.g., cholesterol levels), but a set of variables is available for measuring progress towards the target. If, e.g., the goal is to monitor the progress of students towards learning targets, measurements can be collected by paper-and-pencil or computer-assisted educational assessments, whereas progress in terms of mental health can be monitored by collecting data on mental well-being by using surveys along with physical measurements related to stress by using wearable devices. We propose a novel latent process model with a view to monitoring the progress of individuals towards a target of interest, measured by a set of variables. The latent process model is based on the idea of embedding both individuals and variables measuring progress towards the target of interest in a shared metric space and can be considered as a longitudinal extension of the Jeon et al. (2021) model. The fact that individuals are embedded in the same metric space as the target helps capture * interactions between individuals and variables arising from unobserved variables, such as cultural background, upbringing, and mentoring of students, which may affect responses; * whether individuals make progress towards the target; * how much progress individuals make towards the target; * how much more progress individuals can make in the future. 
### Motivating example To demonstrate the proposed latent process model, we assess the progress of \(257\) mothers with infants in low-income communities towards improving mental health. The data are taken from Santos et al. (2018) and are described in more detail in Section 5. Figure 1 presents an interaction map, based on a Bayesian approach to the proposed latent process model. The interaction map embeds individuals (mothers) and items (questions about depression) into Euclidean space \((\mathbb{M},\,d)\). The interaction map offers at least three insights: * Out of the \(10\) items used to assess the mental health of the \(257\) mothers, some of the items (e.g., items \(3\) and \(5\)) deviate from the bulk of the items. * There are interactions between individuals (mothers) and items (questions about depression): e.g., mother \(B\) is closest to item \(5\). It turns out that mother \(B\) agreed with item \(5\) at the first assessment ("feeling hopeful"), whereas mothers \(A\), \(C\), and \(D\) did not. * Mothers \(B\) and \(C\) have made strides towards improving mental health, whereas mothers \(A\) and \(D\) may need to make more progress in the future. Figure 2 clarifies that the progress of mother \(A\) is unclear, but confirms the conclusions regarding mothers \(B\), \(C\), and \(D\). We describe the results in more detail in Section 5. ### Existing approaches A classic approach to longitudinal educational assessments is based on Andersen's (1985) model. Andersen's (1985) model assumes that binary responses \(Y_{i,j,t}\in\{0,1\}\) are independent Bernoulli\((\mu_{i,j,t})\) random variables with \(\mathrm{logit}(\mu_{i,j,t})\coloneqq\alpha_{i,t}+\beta_{j}\), where \(\alpha_{i,t}\in\mathbb{R}\) can be interpreted as the ability of student \(i\) at time \(t\) and \(\beta_{j}\in\mathbb{R}\) can be interpreted as the easiness of item \(j\). Based on Andersen's (1985) model, the progress of student \(i\) between time \(t=1\) and \(t=2\) can be quantified by \(\alpha_{i,2}-\alpha_{i,1}\). Embretson (1991) reparameterized Andersen's (1985) model by modeling changes in abilities via \(\delta_{i}\coloneqq\alpha_{i,2}-\alpha_{i,1}\). Andersen's (1985) model has been extended with a view to capturing temporal dependence (e.g., Cai, 2010), and addressing multiple learning topics at each time point (e.g., Wang and Nydick, 2020; Huang, 2015). Other approaches model a linear change in abilities as a function of time (e.g., Pastor and Beretvas, 2006; Wilson et al., 2012) and incorporate first-order autoregressive structure (Jeon and Rabe-Hesketh, 2016; Segawa, 2005). In summary, Andersen's (1985) model and its extensions help educators estimate the progress of students \(i\) by estimating differences in abilities \(\alpha_{i,2}-\alpha_{i,1}\) (model estimation). That said, these models do not help educators determine whether students make progress at all (model selection), which would enable educators identify students who need more support than others. In addition, these models assume that all students \(i\) with the same ability \(\alpha_{i,t}\) have the same response probability for all items \(j\) with the same easiness \(\beta_{j}\). Such assumptions may well be violated in applications, because some students \(i\) with the same ability \(\alpha_{i,t}\) may respond to items \(j\) with the same easiness \(\beta_{j}\) differently, owing to unobserved variables such as cultural background, upbringing, and mentoring. 
By contrast, the proposed latent process model captures interactions among individuals (students) and variables (items) and provides a visual interaction map to reveal such interactions, in addition to helping assess whether individuals make progress; how much progress individuals make; and how much more progress individuals can make in the future. A more detailed comparison of the proposed latent process model and Andersen's (1985) model can be found in Section 2.4.

Figure 2: Mental health: Marginal posteriors of the rates of progress \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), \(\lambda_{D}\in[0,\,1]\) of mothers \(A\), \(B\), \(C\), \(D\) towards target \(\mathcal{T}\) (improving mental health). The progress of mother \(A\) is unclear, but the marginal posteriors of the rates of progress \(\lambda_{B}\) and \(\lambda_{C}\) of mothers \(B\) and \(C\) have modes close to \(1\), confirming that mothers \(B\) and \(C\) have made strides towards improving mental health. By contrast, the marginal posterior of the rate of progress \(\lambda_{D}\) of mother \(D\) has a mode close to \(0\), underscoring that mother \(D\) may need additional assistance.

### Outline We introduce the proposed latent process model in Section 2 and outline a Bayesian approach to statistical inference in Section 3. Simulation results and applications can be found in Sections 4, 5, and 6, respectively. ## 2 Latent process model We consider responses \(Y_{i,j,t}\in\mathcal{Y}_{i,j,t}\) of individuals \(i\in\{1,\ldots,n\}\) (\(n\geq 1\)) to variables \(j\in\{1,\ldots,p\}\) (\(p\geq 1\)) at times \(t\in\{1,\ldots,T\}\) (\(T\geq 2\)). To accommodate data from multiple sources (e.g., self-reported mental health assessments collected by surveys and physical measurements related to stress collected by wearable devices), we allow responses \(Y_{i,j,t}\) to be binary (\(\mathcal{Y}_{i,j,t}=\{0,1\}\)), count-valued (\(\mathcal{Y}_{i,j,t}=\{0,1,\ldots\}\)), or real-valued (\(\mathcal{Y}_{i,j,t}=\mathbb{R}\)). To monitor the progress of individuals towards a target of interest \(\mathcal{T}\), we assume that the individuals and the variables measuring progress towards target \(\mathcal{T}\) have positions in a shared metric space (\(\mathbb{M}\), \(d\)), consisting of a set \(\mathbb{M}\) and a distance function \(d:\mathbb{M}^{2}\mapsto[0,+\infty)\). We assume that the set \(\mathbb{M}\) is convex and allow the metric space (\(\mathbb{M}\), \(d\)) to be Euclidean or non-Euclidean. In the domain of statistical network analysis (Hunter et al., 2012; Smith et al., 2019), two broad classes of latent space models can be distinguished, based on the geometry of the underlying metric space: Euclidean latent space models (Hoff et al., 2002) and latent space models with intrinsic hierarchical structure, based on ultrametric space (Schweinberger and Snijders, 2003) or hyperbolic space (Krioukov et al., 2010). The proposed probabilistic framework can accommodate these and other metric spaces. A discussion of the non-trivial issue of choosing the geometry of the metric space (\(\mathbb{M}\), \(d\)) can be found in Section 2.3. Given a metric space (\(\mathbb{M}\), \(d\)), we assume that individuals \(i\) have positions \(\boldsymbol{a}_{i,t}\in\mathbb{M}\) at time \(t\) and move towards the target of interest \(\mathcal{T}\in\mathbb{M}\), measured by variables \(j\) with positions \(\boldsymbol{b}_{j}\in\mathbb{M}\). The position of the target \(\mathcal{T}\) is assumed to be time-invariant.
It is possible to extend the proposed latent process model to time-varying targets, provided that the data at hand warrant the resulting increase in model complexity, but we do not consider such extensions here. We assume that the responses \(Y_{i,j,t}\in\mathcal{Y}_{i,j,t}\) are independent conditional on the positions \(\boldsymbol{a}_{i,t}\) of individuals \(i\) at time \(t\) and the positions \(\boldsymbol{b}_{j}\) of variables \(j\) measuring progress towards target \(\mathcal{T}\), and are distributed as \[Y_{i,j,t}\mid\boldsymbol{\theta},\,\boldsymbol{a}_{i,t},\,\boldsymbol{b}_{j} \stackrel{{\text{ind}}}{{\sim}}\,\,\,\,\mathbb{P}_{\boldsymbol{ \theta},\boldsymbol{a}_{i,t},\boldsymbol{b}_{j}},\] where \(\mathbb{P}_{\boldsymbol{\theta},\boldsymbol{a}_{i,t},\boldsymbol{b}_{j}}\) is a probability distribution with support \(\mathcal{Y}_{i,j,t}\) and \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\) is a vector of parameters. We divide the description of the probabilistic framework into * the data model: the model that generates the responses \(Y_{i,j,t}\) conditional on the positions \(\boldsymbol{a}_{i,t}\) of individuals \(i\) at time \(t\) and the positions \(\boldsymbol{b}_{j}\) of variables \(j\) measuring progress towards the target of interest \(\mathcal{T}\) (Section 2.1); * the process model: the process that determines whether and how much progress individuals \(i\) make towards the target of interest \(\mathcal{T}\) (Section 2.2). The non-trivial issue of selecting the geometry of the metric space is discussed in Section 2.3. We compare the proposed latent process model to Andersen's (1985) classic model in Section 2.4 and mention other possible approaches to assessing progress in Section 2.5. Priors are reviewed in Section 2.6, and identifiability issues are discussed in Section 2.7. ### Data model The data model describes how the responses \(Y_{i,j,t}\) are generated conditional on the positions \(\mathbf{a}_{i,t}\) of individuals \(i\) at time \(t\) and the positions \(\mathbf{b}_{j}\) of variables \(j\) measuring progress towards the target of interest \(\mathcal{T}\). To leverage data from multiple sources (e.g., binary, count-, and real-valued responses), we assume that the responses \(Y_{i,j,t}\) are generated by generalized linear models (Sundberg, 2019; Efron, 2022). Let \[\mu_{i,j,t}(\mathbf{\theta},\,\mathbf{a}_{i,t},\,\mathbf{b}_{j}) \coloneqq \mathbb{E}_{\mathbf{\theta},\,\mathbf{a}_{i,t},\,\mathbf{b}_{j}}\ Y_{i,j,t}\] be the mean response of individual \(i\) to variable \(j\) at time \(t\) and \(\eta_{i,j,t}\) be a link function, which links the mean response \(\mu_{i,j,t}\) to a linear predictor: \[\eta_{i,j,t}(\mu_{i,j,t}(\mathbf{\theta},\,\mathbf{a}_{i,t},\,\mathbf{b}_{j})) \coloneqq \begin{cases}\alpha_{i}+\beta_{j}-\gamma\ d(\mathbf{a}_{i,t},\mathbf{b} _{j})&\text{if }t=1\\ \alpha_{i}+\beta_{j}-\gamma\ d(\mathbf{a}_{i,t},\mathcal{T})&\text{if }t=2, \ldots,T,\end{cases}\] where \(\mathcal{T}\coloneqq(1/p)\sum_{j=1}^{p}\mathbf{b}_{j}\) is the target of interest, measured by variables \(j\) with positions \(\mathbf{b}_{j}\), and \(\mathbf{\theta}\in\mathbf{\Theta}\) is the vector of weights \(\alpha_{i}\in\mathbb{R}\) (\(i=1,\ldots,n\)), \(\beta_{j}\in\mathbb{R}\) (\(j=1,\ldots,p\)), and \(\gamma\in[0,+\infty)\). The fact that the weights \(\alpha_{i}\) and \(\beta_{j}\) do not depend on time \(t\) implies that the distance term \(d(\mathbf{a}_{i,t},\mathcal{T})\) captures the progress of individual \(i\) towards target \(\mathcal{T}\). 
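A minimal simulation sketch of this data model may help fix ideas. It uses the Euclidean special case \(\mathbb{M}=\mathbb{R}^{2}\) with a logit link for binary responses as one possible choice; all parameter values are purely illustrative, and the individual positions are simply drawn once here, whereas their evolution over time is governed by the process model described below (Section 2.2).

```python
# Illustrative sketch of the data model: binary responses with a logit link,
# M = R^2, Euclidean distance d (all values arbitrary; not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10                             # individuals and variables
gamma = 1.0                                # weight of the distance term
alpha = rng.normal(0.0, 1.0, size=n)       # weights alpha_i
beta = rng.normal(0.0, 1.0, size=p)        # weights beta_j
b = rng.normal(0.0, 1.0, size=(p, 2))      # variable positions b_j ~ G
a = rng.normal(0.0, 1.0, size=(n, 2))      # individual positions a_{i,t} (held fixed here)
target = b.mean(axis=0)                    # T := (1/p) * sum_j b_j

def response_prob(i, j, t, a_it):
    """Mean response mu_{i,j,t} under the logit link."""
    if t == 1:
        dist = np.linalg.norm(a_it - b[j])        # d(a_{i,1}, b_j)
    else:
        dist = np.linalg.norm(a_it - target)      # d(a_{i,t}, T) for t >= 2
    eta = alpha[i] + beta[j] - gamma * dist       # linear predictor
    return 1.0 / (1.0 + np.exp(-eta))

# Simulate one response of individual 0 to variable 3 at t = 1 and t = 2.
for t in (1, 2):
    mu = response_prob(0, 3, t, a[0])
    y = rng.binomial(1, mu)
    print(f"t={t}: mu={mu:.3f}, sampled Y={y}")
```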
The proposed data model can be viewed as an extension of the Rasch (1960) model and the Jeon et al. (2021) model to longitudinal data. The Rasch (1960) model and the Jeon et al. (2021) model consider binary responses \(Y_{i,j}\in\{0,1\}\) observed at \(T=1\) time point and assume that \(Y_{i,j}\mid\mu_{i,j}\stackrel{{\text{ind}}}{{\sim}}\text{Bernoulli }(\mu_{i,j})\), where \(\text{logit}(\mu_{i,j})\coloneqq\alpha_{i}+\beta_{j}\)(Rasch, 1960) and \(\text{logit}(\mu_{i,j})\coloneqq\alpha_{i}+\beta_{j}-\gamma\ d(\mathbf{a}_{i},\mathbf{b }_{j})\)(Jeon et al., 2021). The proposed data model can be viewed as an extension of these models to binary and non-binary responses \(Y_{i,j,t}\) observed at \(T\geq 2\) time points \(t\in\{1,\ldots,T\}\), and reduces to * the Rasch (1960) model when binary responses \(Y_{i,j}\in\{0,1\}\) are observed at \(T=1\) time point, the link function is the logit link, and \(\gamma=0\); * the Jeon et al. (2021) model when binary responses \(Y_{i,j}\in\{0,1\}\) are observed at \(T=1\) time point, the link function is the logit link, and \(\gamma\in[0,+\infty)\). As a result, the proposed data model inherits the advantages of the Jeon et al. (2021) model: e.g., in educational assessments, * \(\alpha_{i}\) can be interpreted as the ability of student \(i\); * \(\beta_{j}\) can be interpreted as the easiness of item \(j\); * the metric space \((\mathbb{M},\,d)\) can be interpreted as an interaction map that captures interactions between students \(i\) and items \(j\). It is worth noting that the latent space should not be interpreted as an ability map, because the ability of student \(i\) is captured by \(\alpha_{i}\). Instead, the latent space should be interpreted as an interaction map, capturing interactions between students \(i\) and items \(j\): e.g., a large distance between student \(i\) and item \(j\) indicates that the mean response of student \(i\) to item \(j\) is lower than would be expected based on the ability \(\alpha_{i}\) of student \(i\) and the easiness \(\beta_{j}\) of item \(j\). In addition to inheriting the advantages of the Jeon et al. (2021) model, the proposed data model helps assess * whether student \(i\) makes progress towards learning target \(\mathcal{T}\), based on changes of the distance \(d(\boldsymbol{a}_{i,t},\mathcal{T})\) as a function of time \(t\); * how much progress student \(i\) makes towards learning target \(\mathcal{T}\), based on changes of the distance \(d(\boldsymbol{a}_{i,t},\mathcal{T})\), and how much uncertainty is associated with such assessments; * how much more progress student \(i\) can make in the future. ### Process model The process model determines whether and how much progress individuals \(i\) make towards the target of interest \(\mathcal{T}\). The process model assumes that a metric space \((\mathbb{M},\,d)\) has been chosen. We discuss the non-trivial issue of selecting the geometry of the metric space \((\mathbb{M},\,d)\) in Section 2.3 and review special cases of metric spaces in Section 2.2.1 (normed vector spaces), Section 2.2.2 (Euclidean space), and 2.2.3 (hyperbolic space). The latent process model assumes that the variables \(j\) measuring progress towards target \(\mathcal{T}\) are located at positions \[\boldsymbol{b}_{j}\;\;\stackrel{{\text{\tiny iid}}}{{\sim}}\;G,\] where \(G\) is a distribution with support \(\mathbb{M}\). The target \(\mathcal{T}\) is defined as \(\mathcal{T}\coloneqq(1/p)\sum_{j=1}^{p}\boldsymbol{b}_{j}\). 
The positions \(\boldsymbol{a}_{i,t}\) of individuals \(i\) at time \(t\) are generated as follows. First, the position \(\boldsymbol{a}_{i,1}\) of individual \(i\) at time \(t=1\) is generated by sampling \[\boldsymbol{a}_{i,1}\;\;\stackrel{{\text{\tiny iid}}}{{\sim}}\;H,\] where \(H\) is a distribution with support \(\mathbb{M}\). The position \(\boldsymbol{a}_{i,t}\) of individual \(i\) at time \(t\in\{2,\ldots,T\}\) is a convex combination of \(i\)'s position \(\boldsymbol{a}_{i,t-1}\) at time \(t-1\) and the target's position \(\mathcal{T}\coloneqq(1/p)\sum_{j=1}^{p}\boldsymbol{b}_{j}\): \[\boldsymbol{a}_{i,t}\;\;\coloneqq\;\;(1-\lambda_{i,t})\;\boldsymbol{a}_{i,t- 1}+\lambda_{i,t}\;\mathcal{T}, \tag{2.1}\] where \(\boldsymbol{a}_{i,t}\in\mathbb{M}\) provided that \(\boldsymbol{a}_{i,t-1}\in\mathbb{M}\) and \(\mathcal{T}\in\mathbb{M}\), because the set \(\mathbb{M}\) is convex. The quantity \(\lambda_{i,t}\in[0,1]\) can be interpreted as the rate of progress of individual \(i\) towards target \(\mathcal{T}\) between time \(t-1\) and \(t\). In other words, if individual \(i\) makes progress, \(i\) moves towards target \(\mathcal{T}\) on the shortest path between \(\boldsymbol{a}_{i,t-1}\) and \(\mathcal{T}\). A random term can be added to the right-hand side of (2.1) to allow individuals \(i\) to deviate from the shortest path between \(\boldsymbol{a}_{i,t-1}\) and \(\mathcal{T}\). That said, we prefer to keep the model simple and do not consider deviations of individuals \(i\) from the shortest path between \(\boldsymbol{a}_{i,t-1}\) and \(\mathcal{T}\), because the moves of individuals \(i\) in \(\mathbb{M}\) are unobserved. Covariates can be incorporated into the rates of progress \(\lambda_{i,t}\) of individuals \(i\) between time \(t-1\) and \(t\) by using a suitable link function. #### 2.2.1 Special cases In general, the process model makes two assumptions. First, the set \(\mathbb{M}\) is convex, so that the positions \(\boldsymbol{a}_{i,t}\coloneqq(1-\lambda_{i,t})\;\boldsymbol{a}_{i,t-1}+ \lambda_{i,t}\;\mathcal{T}\) of individuals \(i\) at time \(t\) are contained in the set \(\mathbb{M}\), provided that \(\boldsymbol{a}_{i,t-1}\in\mathbb{M}\) and \(\mathcal{T}\in\mathbb{M}\). Second, the set \(\mathbb{M}\) is equipped with a distance function \(d\), so that the distances between individuals \(i\) and variables \(j\) measuring progress towards target \(\mathcal{T}\) can be quantified. Despite the fact that the process model--at least in its most general form--does not require more than two assumptions, the interpretation and application of the process model is facilitated by additional assumptions. 
For example, the interpretation of the process model is facilitated if the set \(\mathbb{M}\) is endowed with a norm \(\|.\|\) (Euclidean or non-Euclidean) and \(d(\boldsymbol{a}_{i,t},\mathcal{T})\coloneqq\|\boldsymbol{a}_{i,t}-\mathcal{ T}\|\), because the distance \(d(\boldsymbol{a}_{i,t},\,\mathcal{T})\) can then be expressed as a function of the distance \(d(\boldsymbol{a}_{i,t-1},\,\mathcal{T})\): \[d(\boldsymbol{a}_{i,t},\,\mathcal{T}) = \|((1-\lambda_{i,t})\,\boldsymbol{a}_{i,t-1}+\lambda_{i,t}\, \mathcal{T})-\mathcal{T}\|\;\;=\;\;(1-\lambda_{i,t})\;d(\boldsymbol{a}_{i,t-1}, \,\mathcal{T}).\] As a consequence, the rate of progress \(\lambda_{i,t}\) of individual \(i\) between time \(t-1\) and \(t\) can be expressed as a function of the distances \(d(\mathbf{a}_{i,t-1},\mathcal{T})\) and \(d(\mathbf{a}_{i,t},\mathcal{T})\): \[\lambda_{i,t} = 1-\frac{d(\mathbf{a}_{i,t},\mathcal{T})}{d(\mathbf{a}_{i,t-1},\mathcal{T })},\] provided that \(d(\mathbf{a}_{i,t-1},\mathcal{T})>0\). In other words, the rate of progress \(\lambda_{i,t}\) of individual \(i\) between time \(t-1\) and \(t\) reveals how much the distance between individual \(i\)'s position and the target's position \(\mathcal{T}\) is reduced between time \(t-1\) and \(t\). #### 2.2.2 Special case 1: Euclidean space In the special case \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) (\(q\geq 1\)) and \(d(\mathbf{a}_{i,t},\mathcal{T})\coloneqq\|\mathbf{a}_{i,t}-\mathcal{T}\|_{2}\), it is convenient to choose \(G\) and \(H\) to be multivariate Gaussians. A demonstration of the process model in the special case \(\mathbb{M}\coloneqq\mathbb{R}^{2}\) and \(d(\mathbf{a}_{i,t},\mathcal{T})\coloneqq\|\mathbf{a}_{i,t}-\mathcal{T}\|_{2}\) is provided by Figure 2. #### 2.2.3 Special case 2: hyperbolic space An alternative to Euclidean space is a space with an intrinsic hierarchical structure, such as hyperbolic space. For example, consider the two-dimensional Poincare disk with radius \(\rho\in(0,+\infty)\), that is, \(\mathbb{M}\coloneqq\{\mathbf{x}\in\mathbb{R}^{2}:\,\|\mathbf{x}\|_{2}<\rho\}\). The distance between the position \(\mathbf{a}_{i,t}\in\mathbb{M}\) of individual \(i\) at time \(t\) and target \(\mathcal{T}\in\mathbb{M}\) on the Poincare disk with radius \(\rho\) is defined by \[d(\mathbf{a}_{i,t},\mathcal{T}) \coloneqq \text{arcosh}\left(1+\frac{2\ \rho^{2}\ \|\mathbf{a}_{i,t}-\mathcal{T}\|_{2}^{2}}{\left(\rho^{2}-\|\mathbf{a}_{i,t}\|_{2}^{2} \right)\ \left(\rho^{2}-\|\mathcal{T}\|_{2}^{2}\right)}\right).\] It is then convenient to choose \(G\) and \(H\) to be the Uniform distribution on \(\{\mathbf{x}\in\mathbb{R}^{2}:\,\|\mathbf{x}\|_{2}<\rho\}\). Figure 2: Interaction map: Two individuals \(A\) and \(B\) with positions \(\mathbf{a}_{A,1}\) and \(\mathbf{a}_{B,1}\) at time \(1\) and positions \(\mathbf{a}_{A,2}\) and \(\mathbf{a}_{B,2}\) at time \(2\) make progress towards a target of interest \(\mathcal{T}\coloneqq(1/p)\sum_{j=1}^{p}\mathbf{b}_{j}\), measured by variables \(j\) with positions \(\mathbf{b}_{j}\) (unlabeled points). The interaction map represents interactions between individuals and variables along with the progress of individuals towards target \(\mathcal{T}\). The rates of progress \(\lambda_{A,2}\) and \(\lambda_{B,2}\) determine how much the distances of \(A\) and \(B\) to target \(\mathcal{T}\) are reduced between time \(1\) and \(2\), respectively. ### Selecting the geometry of the metric space An open issue is how to select the geometry of the metric space \((\mathbb{M},\,d)\). 
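Whichever geometry is chosen, the special cases above are simple to work with computationally. The sketch below is a minimal illustration with made-up positions, not the authors' code: it applies the convex-combination update (2.1), recovers the rate of progress from Euclidean distances, and evaluates the Poincaré-disk distance of Section 2.2.3.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 10, 2
b = rng.normal(size=(p, q))            # positions b_j ~ G (Gaussian, as in the Euclidean case)
target = b.mean(axis=0)                # target T := (1/p) sum_j b_j

a_prev = rng.normal(size=q)            # a_{i,1} ~ H
lam = 0.4                              # rate of progress lambda_{i,2} in [0, 1]
a_next = (1.0 - lam) * a_prev + lam * target     # convex-combination update (2.1)

# In a normed space, d(a_{i,t}, T) = (1 - lambda_{i,t}) d(a_{i,t-1}, T),
# so the rate of progress can be read off from the two distances:
d_prev = np.linalg.norm(a_prev - target)
d_next = np.linalg.norm(a_next - target)
lam_recovered = 1.0 - d_next / d_prev  # equals 0.4 up to rounding error

def poincare_distance(x, y, rho=1.0):
    """Distance on the Poincare disk of radius rho (Section 2.2.3)."""
    num = 2.0 * rho**2 * np.sum((x - y) ** 2)
    den = (rho**2 - np.sum(x**2)) * (rho**2 - np.sum(y**2))
    return np.arccosh(1.0 + num / den)

# Example with two points inside the unit disk.
d_hyp = poincare_distance(np.array([0.1, 0.2]), np.array([-0.3, 0.05]))
```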
In the statistical analysis of network data, recent work by Lubbold et al. (2023) introduced a promising approach to selecting the geometry of latent space models of network data, although the approach of Lubbold et al. (2023) falls outside of the Bayesian framework considered here. That said, we expect that the pioneering work of Lubbold et al. (2023) paves the way for Bayesian approaches to selecting the geometry of the underlying space, for models of network data and models of educational data. In the special case \(\mathbb{M}\coloneqq\mathbb{R}^{q}\), an additional issue may arise, in that the dimension \(q\) of \(\mathbb{R}^{q}\) may be unknown. Since we are interested in dimension reduction (embedding both individuals and variables in a low-dimensional space) and helping professionals (e.g., educators, medical professionals) assess the progress of individuals by using an easy-to-interpret interaction map, it is tempting to choose a low-dimensional Euclidean space, e.g., \(\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\). In the simulations and applications in Sections 4, 5, and 6, we compare metric spaces \((\mathbb{M},\,d)\) with \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) and \(q\in\{1,2,3,4\}\) by using the Watanabe-Akaike information criterion (Watanabe, 2013) along with interaction maps in \(\mathbb{R},\ \mathbb{R}^{2}\), and \(\mathbb{R}^{3}\). ### Comparison with existing approaches A classic approach to measuring progress of students in educational assessments is based on Andersen's (1985) model. Andersen's (1985) model assumes that binary responses \(Y_{i,j,t}\in\{0,1\}\) are independent Bernoulli\((\mu_{i,j,t})\) random variables with \(\mathrm{logit}(\mu_{i,j,t})\coloneqq\alpha_{i,t}+\beta_{j}\). To compare Andersen's (1985) model with the proposed latent process model, consider binary responses \(Y_{i,j,t}\in\{0,1\}\) by students \(i\) to items \(j\) at \(T=2\) time points \(t\in\{1,2\}\) along with the following reparameterization of Andersen's (1985) model: \[\alpha_{i,1} \coloneqq -\gamma\,d(\mathbf{a}_{i,1},\,\mathcal{T})\] \[\alpha_{i,2} \coloneqq -\gamma\,d(\mathbf{a}_{i,2},\,\mathcal{T})\ \ =\ \ -\gamma\,(1-\lambda_{i,2})\,d(\mathbf{a}_{i,1},\,\mathcal{T})\ \ =\ \ (1-\lambda_{i,2})\ \alpha_{i,1},\] where \(\gamma\in[0,+\infty)\) and \(d(\mathbf{a}_{i,t},\mathcal{T})\coloneqq\|\mathbf{a}_{i,t}-\mathcal{T}\|_{2}\) (\(t\in\{1,2\}\)). Then the rate of progress \(\lambda_{i,2}\) of student \(i\) between time \(1\) and \(2\) is a function of the distances \(d(\mathbf{a}_{i,1},\,\mathcal{T})\) and \(d(\mathbf{a}_{i,2},\,\mathcal{T})\): \[\lambda_{i,2} = 1-\frac{\alpha_{i,2}}{\alpha_{i,1}}\ \ =\ \ 1-\frac{d(\mathbf{a}_{i,2},\, \mathcal{T})}{d(\mathbf{a}_{i,1},\,\mathcal{T})},\] provided that \(d(\mathbf{a}_{i,1},\,\mathcal{T})>0\). In other words, the higher the ability \(\alpha_{i,2}\) of student \(i\) at time \(2\) relative to the ability \(\alpha_{i,1}\) of student \(i\) at time \(1\) is, the higher is the rate of progress \(\lambda_{i,2}\) of student \(i\) between time \(1\) and \(2\). 
While it may be comforting to know that the abilities \(\alpha_{i,1}\) and \(\alpha_{i,2}\) of student \(i\) at time \(1\) and \(2\) can be interpreted in terms of distances \(d(\mathbf{a}_{i,1},\,\mathcal{T})\) and \(d(\mathbf{a}_{i,2},\,\mathcal{T})\) between student \(i\) and learning target \(\mathcal{T}\) in an underlying metric space \((\mathbb{M},\,d)\) and that \(1-\alpha_{i,2}\,/\alpha_{i,1}\) can be interpreted as the rate of progress of student \(i\) towards learning target \(\mathcal{T}\), Andersen's (1985) model has multiple drawbacks: 1. Andersen's (1985) model assumes that \[\mathrm{logit}(\mu_{i,j,t})\ \ \coloneqq\ \ \alpha_{i,t}+\beta_{j}\] is additive in the ability \(\alpha_{i,t}\) of student \(i\) at time \(t\) and the easiness \(\beta_{j}\) of item \(j\) and therefore cannot capture interactions between students \(i\) and items \(j\) arising from unobserved variables, such as cultural background, upbringing, and mentoring. As a result, the underlying metric space \((\mathbb{M},\,d)\) of Andersen's (1985) reparameterized model cannot be interpreted as an interaction map, but should be interpreted as an ability map: In fact, the distances \(d(\boldsymbol{a}_{i,1},\,\mathcal{T})\propto\alpha_{i,1}\) and \(d(\boldsymbol{a}_{i,2},\,\mathcal{T})\propto\alpha_{i,2}\) of individual \(i\) to target \(\mathcal{T}\) at time \(1\) and \(2\) are proportional to the abilities \(\alpha_{i,1}\) and \(\alpha_{i,2}\). By contrast, the proposed latent process model assumes that \[\mathrm{logit}(\mu_{i,j,t}) \coloneqq \begin{cases}\alpha_{i}+\beta_{j}-\gamma\ d(\boldsymbol{a}_{i,1},\boldsymbol{b}_{j})&\text{if }t=1\\ \alpha_{i}+\beta_{j}-\gamma\ (1-\lambda_{i,2})\,d(\boldsymbol{a}_{i,1}, \mathcal{T})&\text{if }t=2\end{cases}\] and can hence capture interactions between individuals (e.g., students) and variables (e.g., items): e.g., in educational assessments, a large distance between student \(i\) and item \(j\) indicates that the probability of a correct response of student \(i\) to item \(j\) is lower than would be expected based on the ability \(\alpha_{i}\) of student \(i\) and the easiness \(\beta_{j}\) of item \(j\). As a consequence, the underlying metric space \((\mathbb{M},\,d)\) of the proposed latent process model should be interpreted as an interaction map rather than an ability map. 2. The fact that the metric space \((\mathbb{M},\,d)\) underlying Andersen's (1985) reparameterized model represents an ability map gives rise to an additional limitation of Andersen's (1985) model: If, e.g., \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) (\(q\geq 1\)) and \(d(\boldsymbol{a}_{i,t},\mathcal{T})\coloneqq|\boldsymbol{a}_{i,t}-\mathcal{T} |_{2}\), then \(q=1\) dimension is sufficient for representing the distances \(d(\boldsymbol{a}_{i,t},\mathcal{T})\) between individuals \(i\) at time \(t\) and learning target \(\mathcal{T}\) in \((\mathbb{M},\,d)\). A Euclidean space \(\mathbb{R}^{q}\) of dimension \(q\geq 2\) would provide a less parsimonious representation than \(q=1\) and therefore \(q=1\) is preferable to \(q\geq 2\) according to Occam's razor (Jeffreys and Berger, 1992). By contrast, the metric space \((\mathbb{M},\,d)\) underlying the proposed latent process model represents an interaction map rather than an ability map. To capture interactions between individuals and variables, a Euclidean space \(\mathbb{R}^{q}\) of dimension \(q\geq 2\) may be required. The applications in Sections 5 and 6 demonstrate that. 3. 
Andersen's (1985) reparameterized model does not separate the abilities \(\alpha_{i,1}\) and \(\alpha_{i,2}\) of individual \(i\) at time \(1\) and \(2\) from the rate of progress \(\lambda_{i,2}\) of individual \(i\) between time \(1\) and \(2\), but assumes that the rate of progress \(\lambda_{i,2}=1-\alpha_{i,2}\,/\,\alpha_{i,1}\) is a function of the abilities \(\alpha_{i,1}\) and \(\alpha_{i,2}\). By contrast, the proposed latent process model separates the ability \(\alpha_{i}\) of individual \(i\) from the rate of progress \(\lambda_{i,2}\) of individual \(i\) between time \(1\) and \(2\). 4. The classic approach to assessing progress based on Andersen's (1985) model and its extensions helps estimate how much progress individuals \(i\) make (model estimation), but does not determine whether individuals \(i\) make progress (model selection). By contrast, the proposed latent process model does double duty: It helps determine whether individuals \(i\) make progress (model selection), in addition to estimating how much progress individuals \(i\) make (model estimation), and how much more progress individuals \(i\) can make in the future. ### Other possible approaches to measuring progress There are other possible approaches to measuring progress, both within and outside of the proposed statistical framework. To demonstrate, recall that the proposed statistical framework builds on generalized linear models. In other words, the responses \(Y_{i,j,t}\) have exponential-family distributions \(\mathbb{P}_{\boldsymbol{\theta},\boldsymbol{a}_{i,t},\boldsymbol{b}_{j}}\)(Sundberg, 2019; Efron, 2022). In the language of exponential families, the proposed statistical framework measures progress based on the canonical parameterization of the exponential family. An alternative would be to measure progress based on the mean-value parameterization of the exponential family. To compare these alternative approaches to assessing progress, let \(\mathbf{Y}_{i}\coloneqq(Y_{i,j,t})_{1\leq j\leq p,\,1\leq t\leq T}\) be the vector of responses \(Y_{i,j,t}\) of individual \(i\) and let the distributions \(\mathbb{P}_{\mathbf{\theta},\mathbf{a}_{i},t,\mathbf{b}_{j}}\) of responses \(Y_{i,j,t}\) be one-parameter exponential-family distributions (e.g., \(Y_{i,j,t}\mid\mu_{i,j,t}\stackrel{{\text{\tiny ind}}}{{\sim}} \text{Bernoulli}(\mu_{i,j,t})\) with mean \(\mu_{i,j,t}\in(0,1)\), \(Y_{i,j,t}\mid\mu_{i,j,t}\stackrel{{\text{\tiny ind}}}{{\sim}} \text{Poisson}(\mu_{i,j,t})\) with mean \(\mu_{i,j,t}\in(0,+\infty)\), or \(Y_{i,j,t}\mid\mu_{i,j,t},\,\sigma_{i,j,t}^{2}\stackrel{{\text{ \tiny ind}}}{{\sim}}N(\mu_{i,j,t},\,\sigma_{i,j,t}^{2})\) with mean \(\mu_{i,j,t}\in\mathbb{R}\) and known variance \(\sigma_{i,j,t}^{2}\in(0,+\infty)\)). 
Then the probability density function of an exponential-family probability measure \(\mathbb{P}_{\mathbf{\theta},\mathbf{a}_{i,t},\mathbf{b}_{j}}\) dominated by a \(\sigma\)-finite measure \(\nu\) can be represented as \[f(y_{i,j,t}\mid\eta_{i,j,t}) = a_{i,j,t}(y_{i,j,t})\,\exp(\eta_{i,j,t}\,s_{i,j,t}(y_{i,j,t})-\psi_{i,j,t}(\eta_{i,j,t})),\] where \(a_{i,j,t}(y_{i,j,t})\in[0,+\infty)\) is a function of \(y_{i,j,t}\), \(\eta_{i,j,t}\equiv\eta_{i,j,t}(\mu_{i,j,t}(\mathbf{\theta},\,\mathbf{a}_{i,t},\,\mathbf{b}_{j}))\in\mathbb{R}\) is a canonical parameter, \(s_{i,j,t}(y_{i,j,t})\in\mathbb{R}\) is a sufficient statistic, and \[\psi_{i,j,t}(\eta_{i,j,t}) \coloneqq \log\int\limits_{y_{i,j,t}}a_{i,j,t}(y_{i,j,t})\,\exp(\eta_{i,j,t}\,s_{i,j,t}(y_{i,j,t}))\,\mathsf{d}\,\nu(y_{i,j,t})\] ensures that \(f(y_{i,j,t}\mid\eta_{i,j,t})\) integrates to \(1\); note that the general formulation above covers both the discrete setting (in which case \(\nu\) may be counting measure) and the continuous setting (in which case \(\nu\) may be Lebesgue measure). As a result, the probability density function of the response vector \(\mathbf{Y}_{i}\) of individual \(i\) can be represented as \[f(\mathbf{y}_{i}\mid\mathbf{\eta}_{i}) = \prod_{j=1}^{p}\,\prod_{t=1}^{T}f(y_{i,j,t}\mid\eta_{i,j,t}) = a_{i}(\mathbf{y}_{i})\,\exp(\langle\mathbf{\eta}_{i},s_{i}(\mathbf{y}_{i})\rangle-\psi_{i}(\mathbf{\eta}_{i})),\] where \(a_{i}(\mathbf{y}_{i})\coloneqq\prod_{j=1}^{p}\,\prod_{t=1}^{T}a_{i,j,t}(y_{i,j,t})\), \(\psi_{i}(\mathbf{\eta}_{i})\coloneqq\sum_{j=1}^{p}\sum_{t=1}^{T}\psi_{i,j,t}(\eta_{i,j,t})\), and \(\langle\mathbf{\eta}_{i},s_{i}(\mathbf{y}_{i})\rangle\) denotes the inner product of the vector of canonical parameters \(\mathbf{\eta}_{i}\coloneqq(\eta_{i,j,t})_{1\leq j\leq p,\,1\leq t\leq T}\) and the vector of sufficient statistics \(s_{i}(\mathbf{y}_{i})\coloneqq(s_{i,j,t}(y_{i,j,t}))_{1\leq j\leq p,\,1\leq t\leq T}\). In other words, if the responses \(Y_{i,j,t}\) have exponential-family distributions, so does the response vector \(\mathbf{Y}_{i}\) of individual \(i\). The parameter vector \[\mathbf{\mu}_{i}(\mathbf{\eta}_{i}) \coloneqq \mathbb{E}_{\mathbf{\eta}_{i}}\,s_{i}(\mathbf{Y}_{i})\] with coordinates \(\mu_{i,j,t}(\mathbf{\eta}_{i})\coloneqq\mathbb{E}_{\mathbf{\eta}_{i}}\,s_{i,j,t}(Y_{i,j,t})\) is known as the mean-value parameter vector of the exponential family. Since the map \(\mathbf{\eta}_{i}\mapsto\mathbf{\mu}_{i}(\mathbf{\eta}_{i})\) is a homeomorphism and is hence one-to-one (Brown, 1986, Theorem 3.6, p. 74), one can parameterize the model by using either the canonical parameter vector \(\mathbf{\eta}_{i}\) or the mean-value parameter vector \(\mathbf{\mu}_{i}(\mathbf{\eta}_{i})\). 
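For intuition, consider the Bernoulli case: the canonical parameter is the log-odds \(\eta=\text{logit}(\mu)\) and the mean-value parameter is the success probability \(\mu(\eta)=1/(1+e^{-\eta})\), so the two parameterizations carry the same information. The following minimal sketch (illustrative numbers, not taken from the paper) also computes, on the mean-value scale, the progress measure defined in the next paragraph.

```python
import numpy as np

def eta_from_mu(mu):           # canonical parameter: log-odds
    return np.log(mu / (1.0 - mu))

def mu_from_eta(eta):          # mean-value parameter: success probability
    return 1.0 / (1.0 + np.exp(-eta))

# The map between the two parameterizations is one-to-one.
assert np.isclose(mu_from_eta(eta_from_mu(0.3)), 0.3)

# Progress of one individual on p = 3 variables between T = 2 time points,
# measured as sum_j [mu_{i,j,2} - mu_{i,j,1}] (numbers are illustrative).
eta_t1 = np.array([-1.0, -0.5, 0.0])   # canonical parameters at time 1
eta_t2 = np.array([0.0, 0.5, 1.0])     # canonical parameters at time 2
progress = np.sum(mu_from_eta(eta_t2) - mu_from_eta(eta_t1))
```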
As a consequence, one can measure progress based on either the canonical parameterization or the mean-value parameterization: e.g., if the responses \(Y_{i,j,t}\) of individual \(i\) to variables \(j\) have been recorded at \(T=2\) time points, one could measure progress based on differences in mean-value parameters \(\mu_{i,j,2}(\mathbf{\eta}_{i})\) and \(\mu_{i,j,1}(\mathbf{\eta}_{i})\), summed over all variables \(j\): \[\text{progress of individual }i \coloneqq \sum_{j=1}^{p}\left[\mu_{i,j,2}(\mathbf{\eta}_{i})-\mu_{i,j,1}( \mathbf{\eta}_{i})\right].\] In addition, one could assess how much more progress individual \(i\) can make in the future: \[\text{possible progress of individual }i\text{ in the future} \coloneqq \sum_{j=1}^{p}\left[\sup_{\mathbf{\eta}_{i}}\,\mu_{i,j,2}(\mathbf{\eta}_{i})- \mu_{i,j,2}(\mathbf{\eta}_{i})\right].\] That being said, the mean-value parameterization is less convenient for the purpose of model selection, e.g., the problem of determining whether students make progress. In fact, it is common practice to base model selection of generalized linear models on the canonical parameterization rather than the mean-value parameterization of exponential families. Therefore, we assess whether and how much progress individuals \(i\) make towards the target of interest \(\mathcal{T}\) based on the canonical parameterization rather than the mean-value parameterization. The resulting statistical framework comes with the benefit of model selection (e.g., determining whether individuals make progress) and inherits the advantages of the Jeon et al. (2021) model, which is likewise based on the canonical parameterization rather than the mean-value parameterization. ### Priors We assume that \(\alpha_{i}\mid\mu_{\alpha},\,\sigma_{\alpha}^{2}\stackrel{{\text{ iid}}}{{\sim}}N(\mu_{\alpha},\,\sigma_{\alpha}^{2})\), \(\beta_{j}\mid\mu_{\beta},\,\sigma_{\beta}^{2}\stackrel{{\text{ iid}}}{{\sim}}N(\mu_{\beta},\,\sigma_{\beta}^{2})\), and \(\gamma\mid\sigma_{\gamma}^{2}\sim\text{Half-Normal}(0,\,\sigma_{\gamma}^{2})\). The hyperparameters \(\mu_{\alpha}\in\mathbb{R}\) and \(\mu_{\beta}\in\mathbb{R}\) are set to \(0\) to address identifiability issues (see Section 2.7), while hyperparameter \(\sigma_{\alpha}^{2}\in(0,+\infty)\) is assigned hyperprior \(\sigma_{\alpha}^{2}\mid a_{\sigma_{\alpha}},\,b_{\sigma_{\alpha}}\sim\text{ Inverse-Gamma}(a_{\sigma_{\alpha}},\,b_{\sigma_{\alpha}})\). The hyperparameters \(\sigma_{\beta}^{2}\in(0,+\infty)\), \(\sigma_{\gamma}^{2}\in(0,+\infty)\), \(a_{\sigma_{\alpha}}\in(0,+\infty)\) and \(b_{\sigma_{\alpha}}\in(0,+\infty)\) need to be specified by users. A flexible prior of \(\lambda_{i,t}\) can be specified by \[\text{logit}(\lambda_{i,t})\mid\pi_{i,t},\,\mu_{0},\,\sigma_{0}^{2},\,\mu_{1}, \,\sigma_{1}^{2}\stackrel{{\text{ iid}}}{{\sim}}\,\,\,\,(1-\pi_{i,t})\,\,N(\mu_{0},\,\sigma_{0}^{2})+\pi_{i,t}\,\,N(\mu_{1},\, \sigma_{1}^{2}),\] where \(\pi_{i,t}\in(0,\,1)\). The hyperparameters \((\mu_{0},\,\sigma_{0}^{2})\in\mathbb{R}\times(0,\,+\infty)\) and \((\mu_{1},\,\sigma_{1}^{2})\in\mathbb{R}\times(0,\,+\infty)\) are chosen so that the distribution of \(\lambda_{i,t}\) is a mixture of two distributions, one of them placing 95% of its mass between.01 and.21 and the other placing 95% of its mass between.04 and.96. The resulting two-component mixture prior of \(\lambda_{i,t}\) is motivated by the observation that often some individuals make negligible progress while others make non-negligible progress. 
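The component means and standard deviations implied by these interval statements can be backed out on the logit scale. The sketch below assumes that the stated endpoints are the 2.5% and 97.5% quantiles of each component (Figure 3 marks 5% and 95% percentiles, so the exact convention may differ) and labels the narrow component 0 and the diffuse component 1; under these assumptions, \((\mu_{0},\sigma_{0})\approx(-2.96,\,0.83)\) and \((\mu_{1},\sigma_{1})\approx(0,\,1.62)\). This illustrates one way to choose the hyperparameters rather than reproducing the authors' exact values.

```python
import numpy as np
from scipy.stats import norm

def logit(x):
    return np.log(x / (1.0 - x))

def component_from_interval(lo, hi, mass=0.95):
    """Normal (mu, sigma) on the logit scale whose central `mass` interval maps to (lo, hi)."""
    z = norm.ppf(0.5 + mass / 2.0)                   # ~1.96 for mass = 0.95
    mu = 0.5 * (logit(lo) + logit(hi))
    sigma = (logit(hi) - logit(lo)) / (2.0 * z)
    return mu, sigma

mu0, sigma0 = component_from_interval(0.01, 0.21)    # component concentrated on small rates
mu1, sigma1 = component_from_interval(0.04, 0.96)    # diffuse component

# Draws from the mixture prior for a given mixing proportion pi_{i,t}.
rng = np.random.default_rng(2)
pi_it = 0.5
comp = rng.binomial(1, pi_it, size=10_000)
draws = np.where(comp == 1,
                 rng.normal(mu1, sigma1, size=10_000),
                 rng.normal(mu0, sigma0, size=10_000))
lam = 1.0 / (1.0 + np.exp(-draws))                   # back to the lambda scale
```

The narrow component concentrates \(\lambda_{i,t}\) on small values while the diffuse component spreads it over most of \((0,1)\), matching the motivation stated above.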
The two-component mixture prior of \(\lambda_{i,t}\) is shown in Figure 3. The mixing proportions \(\pi_{i,t}\) have priors \(\pi_{i,t}\mid a_{\pi},\,b_{\pi}\stackrel{{\text{iid}}}{{\sim}}\text{Beta}(a_{\pi},\,b_{\pi})\), with hyperparameters \(a_{\pi}\in(0,+\infty)\) and \(b_{\pi}\in(0,+\infty)\). Figure 3: The two component distributions of the two-component mixture prior for the rates of progress \(\lambda_{i,t}\), with 5% and 95% percentiles indicated by dashed vertical lines. ### Identifiability issues A well-known issue of the classic Rasch (1960) model and its extensions (including the proposed latent process model) is that the weights \(\alpha_{i}\) and \(\beta_{j}\) cannot all be estimated, unless additional restrictions are imposed. We follow Gelman and Hill (2007, p. 316) by imposing the constraints \(\mu_{\alpha}\coloneqq 0\) and \(\mu_{\beta}\coloneqq 0\). In the special case \(\mathbb{M}\coloneqq\mathbb{R}^{2}\) and \(d(\mathbf{a}_{i,t},\mathcal{T})\coloneqq\|\mathbf{a}_{i,t}-\mathcal{T}\|_{2}\), the latent process model has the same identifiability issues as other Euclidean latent space models (e.g., Hoff et al., 2002), in that the distances \(d(\mathbf{a}_{i,t},\mathcal{T})\) are invariant to reflection, translation, and rotation of the positions \(\mathbf{a}_{i,t}\) of individuals \(i\) at time \(t\) and target \(\mathcal{T}\). Such identifiability issues can be addressed by Procrustes matching (Hoff et al., 2002). An additional identifiability issue arises from the fact that, for all \(c\in(0,1)\cup(1,+\infty)\), \[\eta_{i,j,t}(\mu_{i,j,t}(\mathbf{\theta},\,\mathbf{a}_{i,t},\,\mathbf{b}_{j})) \coloneqq \alpha_{i}+\beta_{j}-\gamma\,\|\mathbf{a}_{i,t}-\mathcal{T}\|_{2}\ \ =\ \ \alpha_{i}+\beta_{j}-\frac{\gamma}{c}\,\|c\,\mathbf{a}_{i,t}-c\,\mathcal{T}\|_{2},\] where \(\mathcal{T}\coloneqq(1/p)\sum_{j=1}^{p}\mathbf{b}_{j}\). Such identifiability issues can be addressed by constraining either the positions \(\mathbf{a}_{i,t}\) of individuals \(i\) at time \(t\) or the positions \(\mathbf{b}_{j}\) of variables \(j\) (but not both). We constrain the positions \(\mathbf{b}_{j}\) of variables \(j\), adapting the constraint of Handcock et al. (2007): \[\sqrt{\frac{1}{p}\sum_{j=1}^{p}\|\mathbf{b}_{j}\|_{2}^{2}} = 1.\] We do not constrain the positions \(\mathbf{a}_{i,t}\) of individuals \(i\) at time \(t\). ## 3 Bayesian inference Define \(\mathbf{y}\coloneqq(y_{i,j,t})_{1\leq i\leq n,\,1\leq j\leq p,\,1\leq t\leq T}\), \(\mathbf{\alpha}\coloneqq(\alpha_{i})_{1\leq i\leq n}\), \(\mathbf{\beta}\coloneqq(\beta_{j})_{1\leq j\leq p}\), \(\mathbf{\lambda}\coloneqq(\lambda_{i,t})_{1\leq i\leq n,\,2\leq t\leq T}\), \(\mathbf{\pi}\coloneqq(\pi_{i,t})_{1\leq i\leq n,\,2\leq t\leq T}\), \(\mathbf{A}\coloneqq(\mathbf{a}_{i,t})_{1\leq i\leq n,\,1\leq t\leq T}\), and \(\mathbf{B}\coloneqq(\mathbf{b}_{j})_{1\leq j\leq p}\). 
Then the posterior is proportional to \[f(\mathbf{\alpha},\,\mathbf{\beta},\,\gamma,\,\mathbf{A},\,\mathbf{B},\,\mathbf{ \lambda},\,\mathbf{\pi}\mid\mathbf{y}) \propto \left[\prod_{i=1}^{n}\prod_{j=1}^{p}\prod_{t=1}^{T}f(y_{i,j,t}\mid \alpha_{i},\,\beta_{j},\,\gamma,\,\mathbf{a}_{i,t},\,\mathbf{b}_{1},\ldots,\mathbf{b}_{p})\right]\] \[\times \left[\prod_{i=1}^{n}\,f(\alpha_{i}\mid\sigma_{\alpha}^{2}) \right]\times\,f(\sigma_{\alpha}^{2})\,\times\,\left[\prod_{j=1}^{p}\,f(\beta _{j})\right]\times\,f(\gamma)\] \[\times \left[\prod_{i=1}^{n}\,f(\mathbf{a}_{i,1})\,\prod_{t=2}^{T}\,f(\mathbf{a }_{i,t}\mid\mathbf{a}_{i,t-1},\,\mathbf{b}_{1},\ldots,\mathbf{b}_{p},\,\lambda_{i,t})\right]\] \[\times \left[\prod_{j=1}^{p}\,f(\mathbf{b}_{j})\right]\times\left[\prod_{i=1} ^{n}\,\prod_{t=2}^{T}\,f(\lambda_{i,t}\mid\pi_{i,t})\,\,f(\pi_{i,t})\right].\] Here, in an abuse of notation, \(f\) denotes a probability mass or density function with suitable support. It is worth noting that the conditional densities \(f(\mathbf{a}_{i,t}\mid\mathbf{a}_{i,t-1},\,\mathbf{b}_{1},\ldots,\mathbf{b}_{p},\,\lambda_{i,t})\) are point masses, because the positions \(\mathbf{a}_{i,t}\coloneqq(1-\lambda_{i,t})\)\(\mathbf{a}_{i,t-1}+\lambda_{i,t}\)\(\mathcal{T}\) of individuals \(i\) at time \(t\) are non-random functions of \(\mathbf{a}_{i,t-1}\), \(\mathcal{T}\coloneqq(1/p)\sum_{j=1}^{p}\mathbf{b}_{j}\), and \(\lambda_{i,t}\) for fixed \(\mathbf{a}_{i,t-1}\), \(\mathbf{b}_{1},\ldots,\mathbf{b}_{p}\), and \(\lambda_{i,t}\). We approximate the posterior by Markov chain Monte Carlo: e.g., if all responses \(Y_{i,j,t}\in\{0,1\}\) are binary, we use the Polya-Gamma sampler of Polson et al. (2013) for updating unknown quantities in the data model and Gibbs samplers and Metropolis-Hasting steps for updating unknown quantities in the process model. Details are provided in Supplement B. ## 4 Simulations We investigate whether the proposed statistical framework can capture interesting features of real-world data that have not been generated by the model. In other words, we investigate the behavior of the proposed statistical framework under model misspecification. Throughout Sections 4, 5, and 6, we focus on binary responses \(Y_{i,j,t}\in\{0,1\}\) at \(T=2\) time points \(t\in\{1,2\}\) and write \(\lambda_{i}\) instead of \(\lambda_{i,2}\). All results are based on the proposed latent process model with a logit link function and the special case \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) and \(d(\mathbf{a}_{i,t},\mathcal{T})\coloneqq\|\mathbf{a}_{i,t}-\mathcal{T}\|_{2}\). We compare results based on \(\mathbb{M}\coloneqq\mathbb{R},\ \mathbb{R}^{2}\), \(\mathbb{R}^{3}\), and \(\mathbb{R}^{4}\) using the Watanabe-Akaike information criterion (Watanabe, 2013) along with interaction maps based on \(\mathbb{R},\ \mathbb{R}^{2}\), and \(\mathbb{R}^{3}\). The prior and Markov chain Monte Carlo algorithm are described in Supplement C. ### Scenario \(n=300\), \(p=10\), \(T=2\) We first consider a scenario in which the progress of individuals towards a target of interest \(\mathcal{T}\in\mathbb{M}\coloneqq\mathbb{R}^{q}\) is assessed based on binary responses \(Y_{i,j,t}\in\{0,1\}\) of \(n=300\) individuals to \(p=10\) variables at \(T=2\) time points \(t\in\{1,2\}\). We assume that there are three groups of individuals, called G1, G2, and G3. Each group is comprised of 100 individuals. At time 1, the success probabilities of all individuals are.2. At time 2, individuals have success probabilities.25 (G1),.5 (G2), and.75 (G3). 
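Because the scenario prescribes marginal success probabilities directly, rather than simulating from the latent process model, generating a data set is elementary. A minimal sketch with the group sizes and success probabilities stated above:

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_group, p = 100, 10
prob_t1 = 0.20                                    # success probability at time 1 (all groups)
probs_t2 = {"G1": 0.25, "G2": 0.50, "G3": 0.75}   # success probabilities at time 2

groups = np.repeat(list(probs_t2.keys()), n_per_group)    # group labels for n = 300 individuals
p_t2 = np.repeat(list(probs_t2.values()), n_per_group)

# Y has shape (n, p, T) with T = 2 time points.
Y = np.stack([rng.binomial(1, prob_t1, size=(3 * n_per_group, p)),
              rng.binomial(1, p_t2[:, None], size=(3 * n_per_group, p))],
             axis=-1)
```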
We generated 250 data sets and estimated the latent process model with \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) and \(q\in\{1,2,3,4\}\). The Watanabe-Akaike information criterion (Watanabe, 2013) in Table 1 suggests that the latent process model with \(\mathbb{M}\coloneqq\mathbb{R}^{2}\) is more adequate than \(\mathbb{M}\coloneqq\mathbb{R},\ \mathbb{R}^{3}\), and \(\mathbb{R}^{4}\). A natural question to ask is whether the latent process model can separate groups G1, G2, and G3 based on data. Figure 4 indicates that the latent process model with \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) and \(q\in\{1,2,3\}\) can distinguish groups G1, G2, and G3, despite the fact that the data were not generated by the model. That said, the latent process model with \(\mathbb{M}\coloneqq\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\) can better distinguish groups G1, G2, and G3 than the latent process model with \(\mathbb{M}\coloneqq\mathbb{R}\), reinforcing the findings in Table 1. A related question is whether the latent process model can distinguish individuals with non-neglible progress from those with negligible progress, based on the two-component mixture prior for the rates of progress \(\lambda_{i}\) of individuals \(i\) described in Section 2.6. Figure 5 reveals that--among the 100 individuals in groups G1, G2, and G3--the percentage of individuals deemed to have made negligible progress is more than 80% in the low-progress group G1; is between 40% and 80% in the moderate-progress group G2; and is less than 20% in the high-progress group G3. The latent \begin{table} \begin{tabular}{l|r r r r} \hline \(\mathbb{M}\) & 10\% Percentile & Median & 90\% Percentile & Minimizer of WAIC \\ \hline \(\mathbb{M}\coloneqq\mathbb{R}\) & 17937 & 18170 & 19228 & 2.0\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{2}\) & 17639 & 17815 & 18016 & 73.2\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{3}\) & 17613 & 18129 & 18391 & 21.6\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{4}\) & 18075 & 18304 & 18512 & 3.2\% \\ \hline \end{tabular} \end{table} Table 1: Simulation results (\(n=300\)): Watanabe–Akaike information criterion (WAIC) based on 250 simulated data sets. The last column indicates how often the latent process model with \(\mathbb{M}\coloneqq\mathbb{R},\ \mathbb{R}^{2},\ \mathbb{R}^{3},\ \mathbb{R}^{4}\) minimized the WAIC. process model with \(\mathbb{M}:=\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\) seems to lead to more accurate assessments, compared with the latent process model with \(\mathbb{M}:=\mathbb{R}\). ### Scenario \(n=600\), \(p=10\), \(T=2\) We double the sample size from \(n=300\) to \(n=600\) to explore the behavior of the latent process model as the sample size increases. We assume that there are four groups of individuals, called G1, G2, G3 and G4. Each group is comprised of 150 individuals. At time 1 the success probabilities of all individuals are.2, while at time 2 individuals have success probabilities.3 (G1),.5 (G2),.7 (G3), and.9 (G4). We generated 250 data sets and estimated the latent process model with \(\mathbb{M}:=\mathbb{R}^{q}\) and \(q\in\{1,2,3,4\}\). The results dovetail with the results in Section 4.1, but suggest that the one-dimensional space \(\mathbb{M}:=\mathbb{R}\) is even less appealing when \(n\) is large: First, the Watanabe-Akaike information criterion (Watanabe, 2013) in Table 2 favors \(\mathbb{M}:=\mathbb{R}^{2}\) over \(\mathbb{R},\ \mathbb{R}^{3}\), and \(\mathbb{R}^{4}\). 
Second, the latent process model can distinguish the groups G1, G2, G3, and G4 when \(\mathbb{M}:=\mathbb{R}^{q}\) and \(q\geq 2\) according to Figure 6, but Figure 4: Simulation results (\(n=300\)): Histogram of the posterior medians of the rates of progress \(\lambda_{i}\) of individuals \(i\) in groups G1, G2, G3. Figure 5: Simulation results (\(n=300\)): Boxplots of the number of individuals in groups G1, G2, G3 deemed to have made negligible progress. the groups G1, G2, G3, and G4 are less well-separated when \(\mathbb{M}\coloneqq\mathbb{R}\). ## 5 Application: mental health We describe the motivating example introduced in Section 1.1 in more detail. The motivating example is concerned with assessing the progress of vulnerable population members in terms of mental health. We focus on a vulnerable population of special interest: mothers with infants in low-income communities. ### Data Santos et al. (2018) conducted between 2003 and 2010 large-scale randomized clinical trials in low-income communities in the U.S. states of North Carolina and New York (Beeber et al., 2014). The data set consists of 306 low-income mothers with infants aged 6 weeks to 36 months, enrolled in the Early Head Start Program in North Carolina or New York. The mental health of these mothers was assessed four times (baseline, 14 weeks, 22 weeks, and 26 weeks). We focus on the \(n=257\) mothers who participated in the first two assessments. The Center for Epidemiological Studies Depression (CES-D) scale was used to measure depression of the mothers. We focus on \(p=10\) items: 1 ("bothered"), 2 ("trouble in concentration"), 3 ("feeling depressed"), 4 ("effort"), 5 ("not feeling hopeful"), 6 ("failure"), 7 ("not happy"), 8 ("talking less"), 9 ("people dislike"), and 10 ("get going"), described in Supplement A. To measure progress toward the target of interest (improving mental health), we recoded the \(p=10\) items by assigning "depression" \(\mapsto\) 0 and "no depression" \(\mapsto\) 1. The proportions of positive responses ("no depression") range from.436 to.747 at the pre-assessment (median.630), and from.621 to.859 at the post-assessment (median.777). \begin{table} \begin{tabular}{l|c c c c} \hline \(\mathbb{M}\) & 10\% Percentile & Median & 90\% Percentile & Minimizer of WAIC \\ \hline \(\mathbb{M}\coloneqq\mathbb{R}\) & 35703 & 36567 & 38665 & 10.0\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{2}\) & 35256 & 35567 & 35920 & 79.6\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{3}\) & 35581 & 36017 & 36639 & 10.4\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{4}\) & 36243 & 36976 & 38083 & 0.0\% \\ \hline \end{tabular} \end{table} Table 2: Simulation results (\(n=600\)): Watanabe-Akaike information criterion (WAIC) based on 250 simulated data sets. The last column indicates how often the latent process model with \(\mathbb{M}\coloneqq\mathbb{R}\), \(\mathbb{R}^{2}\), \(\mathbb{R}^{3}\), \(\mathbb{R}^{4}\) minimized the WAIC. Figure 6: Simulation results (\(n=600\)): Histogram of the posterior medians of the rates of progress \(\lambda_{i}\) of individuals \(i\) in groups G1, G2, G3, G4. ### Results We assess the progress of \(n=257\) low-income mothers towards the target of interest \(\mathcal{T}\) (improving mental health), measured by \(p=10\) items. The priors and Markov chain Monte Carlo algorithm can be found in Supplement C. To detect signs of non-convergence, we used trace plots along with the multivariate Gelman-Rubin potential scale reduction factor of Vats and Knudson (2021). 
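Supplement D is not reproduced here. As a rough stand-in for such diagnostics, the sketch below computes the classic split-\(\widehat{R}\) for a single scalar parameter; it is a simplified univariate proxy, not the multivariate potential scale reduction factor of Vats and Knudson (2021).

```python
import numpy as np

def split_rhat(chains):
    """Split-Rhat for one scalar parameter.

    `chains` has shape (n_chains, n_draws); each chain is split in half, and
    Rhat compares between-half variance with within-half variance. Values
    close to 1 are consistent with convergence.
    """
    n_draws = chains.shape[1]
    half = n_draws // 2
    splits = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    n = splits.shape[1]
    chain_means = splits.mean(axis=1)
    within = splits.var(axis=1, ddof=1).mean()       # W: mean within-chain variance
    between = n * chain_means.var(ddof=1)            # B: between-chain variance
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)

# Illustrative draws (not the paper's output): two well-mixed chains give Rhat close to 1.
rng = np.random.default_rng(4)
print(split_rhat(rng.normal(size=(2, 2000))))
```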
These convergence diagnostics are reported in Supplement D and do not reveal signs of non-convergence. Table 3 shows the Watanabe-Akaike information criterion (Watanabe, 2013) based on the latent process model with \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) and \(q\in\{1,2,3,4\}\), suggesting that \(\mathbb{M}\coloneqq\mathbb{R}^{3}\) is most appropriate. We present interaction maps in Figure 7. While the one-dimensional interaction map suggests that all items are close to target \(\mathcal{T}\), the three-dimensional interaction map (which, according to the Watanabe-Akaike information criterion, is more appropriate than the one-dimensional interaction map) reveals that there are interactions between individuals (mothers) and items (questions about depression): e.g., item \(5\) deviates from the bulk of the items, and mother \(B\) is closest to item \(5\). It turns out that mother \(B\) agreed with item \(5\) at the first assessment ("feeling hopeful"), whereas mothers \(A\), \(C\), and \(D\) did not. In addition, the interaction map suggests that mothers \(B\) and \begin{table} \begin{tabular}{l|c c c} \(\mathbb{M}\) & Minimum & Median & Maximum & Minimizer of WAIC \\ \hline \(\mathbb{M}\coloneqq\mathbb{R}\) & 14650 & 14713 & 14897 & 0\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{2}\) & 14432 & 14606 & 14679 & 0\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{3}\) & 14351 & 14450 & 14509 & 100\% \\ \(\mathbb{M}\coloneqq\mathbb{R}^{4}\) & 14499 & 14623 & 14741 & 0\% \\ \hline \end{tabular} \end{table} Table 3: Mental health: Watanabe–Akaike information criterion (WAIC) based on five Markov chain Monte Carlo runs. The last column indicates how often the latent process model with \(\mathbb{M}\coloneqq\mathbb{R},\ \mathbb{R}^{2},\)\(\mathbb{R}^{3},\)\(\mathbb{R}^{4}\) minimized the WAIC. Figure 7: Mental health: Interaction maps \((\mathbb{M},\,d)\) with \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) and \(q\in\{1,2,3\}\) show the progress of low-income mothers \(A\), \(B\), \(C\), \(D\) towards target \(\mathcal{T}\) (improving mental health), measured by items \(1,\ldots,10\) (questions about depression). The positions of mothers and items are estimated by posterior medians. While the one-dimensional interaction map (\(q=1\)) suggests that all items are close to target \(\mathcal{T}\), the three-dimensional interaction map (\(q=3\)) reveals interactions between mothers and items: e.g., item \(5\) deviates from the bulk of the items, and mother \(B\) is closest to item \(5\). It turns out that mother \(B\) agreed with item \(5\) at the first assessment (“feeling hopeful”), whereas mothers \(A\), \(C\), \(D\) did not. In addition, the interaction map suggests that mothers \(B\) and \(C\) have made strides towards improving mental health, whereas mothers \(A\) and \(D\) may need to make more progress. have made strides towards improving mental health, whereas mothers \(A\) and \(D\) may need to make more progress in the future. The posterior summaries in Table 4 confirms that, more likely than not, mothers \(B\) and \(C\) have made progress, while mother \(D\) has not. To gain more insight into the uncertainty about the progress of mothers \(A\), \(B\), \(C\), and \(D\), we present the marginal posteriors of \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), and \(\lambda_{D}\) in Figure 8. The marginal posteriors of \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), and \(\lambda_{D}\) caution that there is non-negligible uncertainty about the progress of some of the mothers. 
A case in point is mother \(A\): The marginal posterior of mother \(A\)'s rate of progress \(\lambda_{A}\) resembles the Uniform\((0,1)\) distribution. As a consequence, it is unclear whether mother \(A\) has made progress, and how much. By contrast, the marginal posteriors of the rates of progress \(\lambda_{B}\) and \(\lambda_{C}\) of mothers \(B\) and \(C\) have modes close to \(1\), suggesting that mothers \(B\) and \(C\) have made strides towards improving mental health. The marginal posterior of the rate of progress \(\lambda_{D}\) of mother \(D\) has a mode close to \(0\), which underscores that mother \(D\) may need additional assistance. To assess the goodness-of-fit of the model, we generated 1,000 posterior predictions of the proportions of positive responses at the second assessment. Figure 9 shows the proportions of predicted and observed positive responses. By and large, the model predictions agree with the observations. ## 6 Application: online educational assessments We present an application to online educational assessments, using the My Math Academy data (Bang et al., 2022). ### Data The My Math Academy data set consists of 479 kindergarten children, first- and second-grade students who participated in the My Math Academy effectiveness study in 2019 (Bang et al., 2022). The students worked on pre- and post-assessments before and after working on the online learning platform My Math Academy. The pre- and post-assessments were separated by three months, and both included a common set of 31 problems. We focus on \(p=8\) items measuring numerical understanding. Three selected items are presented in Figure 10. We exclude 58 of the 479 Figure 8: Mental health: Marginal posteriors of the rates of progress \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), \(\lambda_{D}\) of mothers \(A\), \(B\), \(C\), \(D\) towards target \(\mathcal{T}\) (improving mental health). The progress of mother \(A\) is unclear, but the marginal posteriors of the rates of progress \(\lambda_{B}\) and \(\lambda_{C}\) of mothers \(B\) and \(C\) have modes close to \(1\), confirming that mothers \(B\) and \(C\) have made strides towards improving mental health. By contrast, the mode of the marginal posterior of the rate of progress \(\lambda_{D}\) of mother \(D\) is close to \(0\), suggesting that mother \(D\) may need additional assistance. Figure 10: My Math Academy: Selected questions 24, 25, 30 corresponding to items 6, 7, 8. Each question has a single correct response. Items 6, 7, 8 are indicators of whether students choose the correct response to questions 24, 25, 30. Figure 9: Mental health: Posterior predictions of the proportions of positive responses by mothers \(A\), \(B\), \(C\), \(D\) at the second assessment. The observed proportions are indicated by red circles. children who had no responses at either the pre- or post-assessment or who reported no background information. 55% of the remaining \(n=421\) students are kindergarten children, 48% are female, 87% are low-income students eligible to receive free lunches, 87% are Hispanic, and 6% are African-American. The proportions of correct responses range from.114 to.556 at the pre-assessment (median.397), and from.216 to.755 at the post-assessment (median.544). ### Results We assess the progress of \(n=421\) students towards the target of interest (numerical understanding), measured by \(p=8\) items. The prior and Markov chain Monte Carlo algorithm are described in Supplement C. 
To detect signs of non-convergence, we used trace plots along with the multivariate Gelman-Rubin potential scale reduction factor of Vats and Knudson (2021). These convergence diagnostics can be found in Supplement D and do not reveal signs of non-convergence. Table 5 shows the Watanabe-Akaike information criterion (Watanabe, 2013) based on the latent process model with \(\mathbb{M}:=\mathbb{R}^{q}\) and \(q\in\{1,2,3,4\}\). The Watanabe-Akaike information criterion suggests that \(\mathbb{M}:=\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\) is most appropriate. Figure 11 shows interaction maps, focusing on selected Hispanic students \(A\), \(B\), \(C\), and \(D\) in low-income families. While the one-dimensional interaction map suggests that all items are close to learning target \(\mathcal{T}\), the two- and three-dimensional interaction maps (which, according to the Watanabe-Akaike information criterion, are more appropriate than the one-dimensional interaction map) indicate that some of the items deviate from the bulk of the items: e.g., item 6 is the most \begin{table} \begin{tabular}{l c c c c} & 10\% Percentile & Median & 90\% Percentile & Probability: Progress? \\ \hline \(\mathbb{M}\coloneqq\mathbb{R}\): & & & & \\ \(\lambda_{A}\) &.060 &.290 &.790 &.550 \\ \(\lambda_{B}\) &.085 &.578 &.950 &.707 \\ \(\lambda_{C}\) &.213 &.775 &.965 &.847 \\ \(\lambda_{D}\) &.041 &.179 &.517 &.415 \\ \hline \(\mathbb{M}\coloneqq\mathbb{R}^{2}\): & & & & \\ \(\lambda_{A}\) &.055 &.255 &.757 &.518 \\ \(\lambda_{B}\) &.092 &.590 &.946 &.718 \\ \(\lambda_{C}\) &.278 &.790 &.965 &.866 \\ \(\lambda_{D}\) &.039 &.147 &.463 &.377 \\ \hline \(\mathbb{M}\coloneqq\mathbb{R}^{3}\): & & & & \\ \(\lambda_{A}\) &.058 &.258 &.806 &.527 \\ \(\lambda_{B}\) &.090 &.599 &.944 &.718 \\ \(\lambda_{C}\) &.280 &.774 &.969 &.861 \\ \(\lambda_{D}\) &.038 &.148 &.464 &.385 \\ \hline \end{tabular} \end{table} Table 4: Mental health: Marginal posteriors of the rates of progress \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), \(\lambda_{D}\) of mothers \(A\), \(B\), \(C\), \(D\). The last column (“Probability: Progress?”) shows the posterior probability of non-negligible progress. abstract item among the counting-related items 5, 6, and 7 and the two- and three-dimensional interaction map reveal that item 6 deviates from learning target \(\mathcal{T}\) more than any other item. Student \(B\) is closest to item 6 and provided a correct response to item 6 at the pre-assessment, whereas students \(A\), \(C\), and \(D\) did not. Posterior summaries of the rates of progress \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), and \(\lambda_{D}\) of students \(A\), \(B\), \(C\), and \(D\) are shown in Table 6. These posterior summaries confirm that, with high posterior probability, students \(A\) and \(D\) have made non-negligible progress, but there is more uncertainty about students \(B\) and \(C\). To gain more insight into the uncertainty about the progress of students \(A\), \(B\), \(C\), and \(D\), we present the marginal posteriors of the rates of progress \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), and \(\lambda_{D}\) of students \(A\), \(B\), \(C\), and \(D\) in Figure 12. The marginal posteriors reveal non-negligible uncertainty: e.g., the marginal posteriors of the rates of progress \(\lambda_{A}\) and \(\lambda_{D}\) of students \(A\) and \(D\) have modes close to \(1\), but both of them have long tails. Assessing the progress of students \(B\) and \(C\) is harder still. 
To assess the goodness-of-fit of the model, we generated 1,000 posterior predictions of the proportions of correct responses at the post-assessment. Figure 13 suggests that the posterior predictions by and large match the observed proportions of correct responses. Figure 11: My Math Academy: Interaction maps \((\mathbb{M},\,d)\) with \(\mathbb{M}\coloneqq\mathbb{R}^{q}\) and \(q\in\{1,2,3\}\) show the progress of Hispanic students \(A\), \(B\), \(C\), \(D\) in low-income families towards learning target \(\mathcal{T}\) (numerical understanding), measured by items \(1,\ldots,8\). The positions of students and items are estimated by posterior medians. While the one-dimensional interaction map (\(q=1\)) suggests that all items are close to learning target \(\mathcal{T}\), the two- and three-dimensional interaction maps (\(q\in\{2,3\}\)) indicate that some of the items deviate from the bulk of the items: e.g., item 6 is the most abstract item among the counting-related items 5, 6, 7 and the two- and three-dimensional interaction map reveal that item 6 deviates from learning target \(\mathcal{T}\) more than any other item. Student \(B\) is closest to item 6 and provided a correct response to item 6 at the pre-assessment, whereas students \(A\), \(C\), \(D\) did not. Figure 12: My Math Academy: Marginal posteriors of the rates of progress \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), \(\lambda_{D}\) of Hispanic students \(A\), \(B\), \(C\),\(D\) towards learning target \(\mathcal{T}\) (numerical understanding). Figure 13: My Math Academy: Posterior predictions of the proportions of correct responses by Hispanic students \(A\), \(B\), \(C\), \(D\) at the post-assessment. The observed proportions are indicated by red circles. \begin{table} \begin{tabular}{c c c c c} & 10\% Percentile & Median & 90\% Percentile & Probability: Progress? \\ \hline \(\mathbb{M}:=\mathbb{R}\): & & & & \\ \(\lambda_{A}\) &.196 &.817 &.970 &.863 \\ \(\lambda_{B}\) &.072 &.419 &.918 &.635 \\ \(\lambda_{C}\) &.055 &.246 &.826 &.523 \\ \(\lambda_{D}\) &.107 &.648 &.953 &.743 \\ \hline \(\mathbb{M}:=\mathbb{R}^{2}\): & & & & \\ \(\lambda_{A}\) &.352 &.837 &.970 &.900 \\ \(\lambda_{B}\) &.080 &.424 &.913 &.642 \\ \(\lambda_{C}\) &.051 &.250 &.814 &.515 \\ \(\lambda_{D}\) &.116 &.653 &.948 &.770 \\ \hline \(\mathbb{M}:=\mathbb{R}^{3}\): & & & & \\ \(\lambda_{A}\) &.322 &.814 &.970 &.885 \\ \(\lambda_{B}\) &.076 &.406 &.920 &.634 \\ \(\lambda_{C}\) &.051 &.227 &.771 &.498 \\ \(\lambda_{D}\) &.133 &.639 &.947 &.765 \\ \hline \end{tabular} \end{table} Table 6: My Math Academy: Summaries of the marginal posteriors of the rates of progress \(\lambda_{A}\), \(\lambda_{B}\), \(\lambda_{C}\), \(\lambda_{D}\) of students \(A\), \(B\), \(C\), \(D\). The last column (“Probability: Progress?”) shows the posterior probability of non-negligible progress. ## 7 Discussion We have introduced a latent process model for monitoring progress towards a hard-to-measure target of interest, with a number of possible extensions. For example, it may be of interest to monitor the progress of individuals towards two or more targets, which may or may not be related. If the targets are known to be unrelated (e.g., improving the command of English language and the understanding of geometry), the targets could be analyzed by separate latent process models, with separate latent spaces. 
By contrast, if the targets are related (e.g., improving the understanding of random variables and of stochastic processes, i.e., collections of random variables), it may be of interest to monitor the progress of individuals towards both targets. Extensions of the latent process model for tackling multiple targets constitute an interesting direction for future research. A second example is regress, the opposite of progress. Capturing regress is of interest in mental health applications, because the mental health of vulnerable individuals may deteriorate rather than improve.
2303.03719
Anisotropic weighted isoperimetric inequalities for star-shaped and $F$-mean convex hypersurface
We prove two anisotropic type weighted geometric inequalities that hold for star-shaped and $F$-mean convex hypersurfaces in $\mathbb{R}^{n+1}$. These inequalities involve the anisotropic $p$-momentum, the anisotropic perimeter and the volume of the region enclosed by the hypersurface. We show that the Wulff shape of $F$ is the unique minimizer of the corresponding functionals among all star-shaped and $F$-mean convex sets. We also consider their quantitative versions characterized by the Hausdorff distance between the hypersurface and a rescaled Wulff shape. As a corollary, we obtain the stability of the Weinstock inequality for star-shaped and strictly mean convex domains, which requires weaker convexity compared to \cite{Gavitone}.
Rong Zhou, Tailong Zhou
2023-03-07T08:05:59Z
http://arxiv.org/abs/2303.03719v3
# Anisotropic weighted isoperimetric inequalities for star-shaped and \(F\)-mean convex hypersurface ###### Abstract. We prove two anisotropic type weighted geometric inequalities that hold for star-shaped and \(F\)-mean convex hypersurfaces in \(\mathbb{R}^{n+1}\). The first one involves the weighted anisotropic perimeter, the anisotropic perimeter and the volume of the region enclosed by the hypersurface. The second one involves the anisotropic p-momentum, the anisotropic perimeter and the volume. We show that the Wulff shape of \(F\) is the unique minimizer of the corresponding functionals among all star-shaped and \(F\)-mean convex sets. Key words and phrases:Inverse anisotropic mean curvature flow, \(F\)-mean convex, Anisotropic isoperimetric inequality 2010 Mathematics Subject Classification: 53C44; 53C42 ## 1. Introduction The anisotropic version of the classic Euclidean isoperimetric problem is the Wulff Problem, which is also known as the Equilibrium Shape Problem: Consider \[\inf\left\{\int_{\partial E}F(\nu)d\mathcal{H}^{n}:\mathrm{Vol}(E)=m\right\}, \ m>0,\] where \(F:\mathbb{R}^{n+1}\to[0,\infty)\) is \(1\)-homogeneous, convex with \(F(x)\geq c|x|\) uniformly for some \(c>0\), \(E\subset\mathbb{R}^{n+1}\) is a bounded open set with \(C^{1}\)-boundary, and \(\nu\) is the unit outward normal of \(\partial E\). The German crystallographer G. Wulff first guessed in [16] that the unique minimizer of the Wulff Problem, up to translations and scalings, is the domain \(L\) bounded by \(\mathcal{W}\) the Wulff shape of \(F\): \[\partial L=\mathcal{W}=\partial\left(\cap_{y\in\mathbb{S}^{n}}\left\{x\in \mathbb{R}^{n+1}:x\cdot y<F(y)\right\}\right).\] When considering the Wulff Problem for all measurable \(E\) of finite perimeter, the proof of the minimality of the Wulff shape can be found e.g. in [12], while uniqueness was first obtained by Taylor in [14, 15]. Relevant uniqueness results, especially when \(F\) is a general integrand, can be found in [4, 6, 7, 10]. Besides the classic isoperimetric inequality, we are interested to prove the anisotropic version of weighted geometric inequalities for hypersurfaces in \(\mathbb{R}^{n+1}\). As a corollary of the divergence Theorem, it is known that \[\int_{\Sigma}rd\mu\geq(n+1)\mathrm{Vol}(\Omega) \tag{1.1}\] for closed orientable embedded hypersurface \(\Sigma=\partial\Omega\), where \(r\) is the distance to the origin. Equality of (1.1) holds if and only if \(\Sigma\) is a sphere centered at the origin. Girao-Rodrigues pointed out that (1.1) can't be improved to \(\int_{\Sigma}rd\mu\geq|\mathbb{S}^{n}|\left(\frac{|\Sigma|}{|\mathbb{S}^{n}|} \right)^{\frac{n+1}{n}}\) even for strictly convex \(\Sigma\). Instead, by using the inverse mean curvature flow to construct monotonous quantities, they can ## 1. Introduction In this paper we consider the \(p\)-momentum operator \(\mathcal{H}\) on \(\mathbb{R}^{n+1}\), where \(\mathcal{H}\) is a Hilbert space and \(\mathcal{H}\) is a Hilbert space. The operator \(\mathcal{H}\) is defined as \[\mathcal{H}=\mathcal{H}+\mathcal{H}^{\prime}, \tag{1.1}\] where \(\mathcal{H}\) is a Hilbert space and \(\mathcal{H}\) is a Hilbert space. The operator \(\mathcal{H}\) is defined as \[\mathcal{H}=\mathcal{H}+\mathcal{H}^{\prime}, \tag{1.2}\] where \(\mathcal{H}\) is a Hilbert space and \(\mathcal{H}\) is a Hilbert space. The operator \(\mathcal{H}\) is defined as \[\mathcal{H}=\mathcal{H}+\mathcal{H}^{\prime}, \tag{1.3}\] where \(\mathcal{H}\) is a Hilbert space and \(\mathcal{H}\) is a Hilbert space. 
In the present paper we prove anisotropic analogues of such weighted inequalities. Throughout, \(F\) is a Minkowski norm on \(\mathbb{R}^{n+1}\) with dual norm \(F^{0}\), \(\mathcal{W}\) is the Wulff shape of \(F\) and \(L\) is the region enclosed by \(\mathcal{W}\) (see Section 2.1 for the precise definitions). For a hypersurface \(\Sigma\) with unit outward normal \(\nu\), the anisotropic area form and the anisotropic perimeter are \(d\mu_{F}=F(\nu)d\mu\) and \(|\Sigma|_{F}=\int_{\Sigma}F(\nu)d\mu\). Our first result is the following weighted inequality.

**Theorem 1.1**.: _Let \(\Sigma=\partial\Omega\) be a smooth, closed, star-shaped and \(F\)-mean convex hypersurface in \(\mathbb{R}^{n+1}\). Then_ \[\int_{\Sigma}F^{0}(x)d\mu_{F}\geq\frac{n}{(n+1)^{1+\frac{1}{n}}}\mathrm{Vol}(L)^{-\frac{1}{n}}\left|\Sigma\right|_{F}^{1+\frac{1}{n}}+\mathrm{Vol}(\Omega), \tag{1.3}\] _and equality holds if and only if \(\Sigma\) is a rescaling of the Wulff shape \(\mathcal{W}\)._

For \(p\geq 1\), the anisotropic \(p\)-momentum of \(\Sigma\) is \(\int_{\Sigma}\left(F^{0}(x)\right)^{p}d\mu_{F}\). Combining Theorem 1.1 with the Hölder inequality yields the sharp bound \[\left(\int_{\Sigma}\left(F^{0}(x)\right)^{p}d\mu_{F}\right)^{\frac{1}{p}}\geq\mathrm{Vol}(L)^{-\frac{1}{n+1}}\left|\Sigma\right|_{F}^{\frac{1}{p}}\mathrm{Vol}(\Omega)^{\frac{1}{n+1}}. \tag{1.6}\]
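For instance, when \(F(x)=|x|\) the Wulff shape is the unit sphere \(\mathbb{S}^{n}\), \(L\) is the unit ball, \(F^{0}(x)=|x|=r\) and \(d\mu_{F}=d\mu\); using \((n+1)\mathrm{Vol}(L)=|\mathbb{S}^{n}|\), one can check that (1.3) reduces to the Euclidean weighted inequality \[\int_{\Sigma}rd\mu\geq\frac{n}{n+1}|\mathbb{S}^{n}|\left(\frac{|\Sigma|}{|\mathbb{S}^{n}|}\right)^{\frac{n+1}{n}}+\mathrm{Vol}(\Omega)\] for star-shaped and mean convex hypersurfaces, while (1.6) reduces to \(\int_{\Sigma}r^{p}d\mu\geq|\Sigma|\left(\mathrm{Vol}(\Omega)/\mathrm{Vol}(L)\right)^{\frac{p}{n+1}}\).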
**Theorem 1.2**.: _Under the same conditions as in Theorem 1.1, let \(p\geq 1\), then (1.6) holds and equality is attained if and only if \(\Sigma\) is a rescaling of the Wulff shape \(\mathcal{W}\)._ The structure of this paper is the following. In Sections 2.1 and 2.2 we fix notation and collect basic properties of anisotropic geometry and of the anisotropic curvature of hypersurfaces. In Section 2.3 we introduce the inverse anisotropic mean curvature flow and present the existence and convergence results. In Section 3 we prove our theorems. ## 2. Preliminaries ### Minkowski norm and Wulff shape We call \(F:\mathbb{R}^{n+1}\to[0,\infty)\) a Minkowski norm if it is a norm in \(\mathbb{R}^{n+1}\) (i.e., convex, \(1\)-homogeneous and positive in \(\mathbb{R}^{n+1}\backslash\{0\}\)) and is smooth in \(\mathbb{R}^{n+1}\backslash\{0\}\) with \(D^{2}\left(\frac{1}{2}F^{2}\right)\) strictly positive definite. The dual norm of \(F\) is defined by \[F^{0}(x)=\sup_{y\neq 0}\frac{x\cdot y}{F(y)}=\sup_{y\in\mathcal{K}}x\cdot y, \ \mathcal{K}=\{y\in\mathbb{R}^{n+1}:F(y)\leq 1\}.\] For all \(x,\ y\in\mathbb{R}^{n+1}\backslash\{0\}\), we have the following identities (see e.g. [17]): \[F(DF^{0}(y))=1,\ F^{0}(DF(x))=1, \tag{2.1}\] \[DF\big{|}_{DF^{0}(y)}=\frac{y}{F^{0}(y)},\ DF^{0}\big{|}_{DF(x)}=\frac{x}{F(x)}. \tag{2.2}\] The definition of \(F^{0}\) implies the following anisotropic Cauchy-Schwarz inequality: \[x\cdot y\leq F(x)F^{0}(y),\ \forall x,\ y\in\mathbb{R}^{n+1}. \tag{2.3}\] Moreover, equality of (2.3) holds if and only if \(\max_{z\neq 0}\frac{z\cdot y}{F(z)}\) is attained at \(z=x\), which means \(D_{z}\left(\frac{z\cdot y}{F(z)}\right)\big{|}_{z=x}=0\). Hence one can check that equality of (2.3) holds if and only if \(x=F(x)\cdot DF^{0}(y)\) and \(y=F^{0}(y)\cdot DF(x)\). Consider a smooth, compact and strictly convex hypersurface \(\mathcal{W}\) in \(\mathbb{R}^{n+1}\) which bounds a domain \(L\), with the origin lying inside it. Let \(F:\mathbb{S}^{n}\to\mathbb{R}^{+}\), \(F(\xi)=\sup_{x\in\mathcal{W}}x\cdot\xi\) be the support function of \(\mathcal{W}\). We can extend \(F\in C^{\infty}(\mathbb{S}^{n})\) homogeneously by \[F(x)=|x|F\left(\frac{x}{|x|}\right),\ x\neq 0\ \text{and}\ F(0)=0.\] One can check that \(F\) becomes a Minkowski norm in \(\mathbb{R}^{n+1}\). \(\mathcal{W}\) is called the Wulff shape (or indicatrix) of \(F\). The property of the support function and (2.1) imply that \[\mathcal{W}=\{F(\xi)\xi+\nabla_{\mathbb{S}^{n}}F(\xi):\xi\in\mathbb{S}^{n}\}=\{DF(\xi):\xi\in\mathbb{S}^{n}\}=\{x\in\mathbb{R}^{n+1}:F^{0}(x)=1\}.\] ### Anisotropic curvature of a hypersurface Let \((\Sigma,g)\) be a smooth hypersurface in \(\mathbb{R}^{n+1}\), \(\nu:\Sigma\to\mathbb{S}^{n}\) be its Gauss map. The anisotropic Gauss map \(\nu_{F}:\Sigma\to\mathcal{W}\) is defined as \[\nu_{F}(x)=DF(\nu(x))=F(\nu(x))\nu(x)+\nabla_{\mathbb{S}^{n}}F(\nu(x)), \tag{2.4}\] then \(\mathcal{W}\) and \(\Sigma\) share the same unit (outer) normal \(\nu(x)\) at \(\nu_{F}(x)\) and \(x\) respectively. The anisotropic principal curvatures \(\{\kappa_{i}^{F}\}_{i=1\cdots n}\) (w.r.t. \(\mathcal{W}\)) at \(x\in\Sigma\) are defined as the eigenvalues of \[d\nu_{F}:T_{x}\Sigma\to T_{\nu_{F}(x)}\mathcal{W}.\] We recall the following formulations of anisotropic curvatures in [2] by Andrews, which were reformulated by Xia in [17], see also [18, 19]. We introduce a Riemannian metric \(G\) w.r.t.
\(F^{0}\) in \(T\mathbb{R}^{n+1}\): \[G(\xi)(V,W):=\frac{\partial^{2}\left(\frac{1}{2}F^{0}(\xi)^{2}\right)}{\partial\xi^{\beta}\partial\xi^{\gamma}}V^{\beta}W^{\gamma},\ \xi\in\mathbb{R}^{n+1}\backslash\{0\},\ V,\ W\in T_{\xi}\mathbb{R}^{n+1}.\] One can check that at \(x\in\Sigma\), \(\nu_{F}=\nu_{F}(x)\) is perpendicular to \(T_{x}\Sigma\) w.r.t. \(G(\nu_{F})\) in the sense \[G(\nu_{F})(\nu_{F},\nu_{F})=1,\ G(\nu_{F})(\nu_{F},V)=0,\ \forall V\in T_{x}\Sigma.\] Then consider the Riemannian metric on \(\Sigma\) defined by \(\hat{g}(x):=G(\nu_{F}(x))\big|_{T_{x}\Sigma}\) and denote the covariant derivatives of \(\hat{g}\) and \(G\) by \(\hat{\nabla}\) and \(\hat{D}\). The first and second fundamental forms of \((\Sigma,\hat{g})\subset(\mathbb{R}^{n+1},G)\) are \[\hat{g}_{ij}=G(\nu_{F})(\partial_{i}x,\partial_{j}x),\ \hat{h}_{ij}=G(\nu_{F})(\hat{D}_{\partial_{i}}\nu_{F},\partial_{j}x).\] It has been proved that the anisotropic principal curvatures \(\{\kappa_{i}^{F}\}_{i=1\cdots n}\) are exactly the eigenvalues of \(\left(\hat{h}_{i}^{j}\right)=\left((\hat{g}^{-1})^{kj}\hat{h}_{ik}\right)\) and the anisotropic mean curvature w.r.t. \(F\) is defined as \[H_{F}=\operatorname{tr}_{\hat{g}}(\hat{h}).\] We call \(\Sigma\) strictly \(F\)-mean convex if \(H_{F}\big|_{\Sigma}>0\), and \(F\)-mean convex if \(H_{F}\big|_{\Sigma}\geq 0\). It is easy to check that when \(\Sigma=\mathcal{W}\), we have \(\nu_{F}(x)=x\), \(\hat{h}_{ij}=\hat{g}_{ij}\) and \(H_{F}=n\). ### Inverse anisotropic mean curvature flow Let \(\Sigma\subset\mathbb{R}^{n+1}\) be a smooth, closed, star-shaped and strictly \(F\)-mean convex hypersurface given by the embedding \(X_{0}:\mathbb{S}^{n}\to\mathbb{R}^{n+1}\). The inverse anisotropic mean curvature flow is the smooth family of embeddings \(X:\mathbb{S}^{n}\times[0,T)\to\mathbb{R}^{n+1}\) satisfying \[\left\{\begin{aligned} \frac{\partial}{\partial t}X(\cdot,t)&=\frac{1}{H_{F}}\nu_{F},\\ X(\cdot,0)&=X_{0}(\cdot).\end{aligned}\right. \tag{2.5}\] When \(n=1\), the flow becomes a family of smooth embedded closed curves in the plane \(\mathbb{R}^{2}\) and the \(F\)-mean convexity becomes the standard convexity for curves. If a solution of (2.5) exists, the following evolution equations were obtained in [2, 19]: **Lemma 2.1**.: _Along the anisotropic curvature flow which evolves as \(\frac{\partial}{\partial t}X=\eta\nu_{F}\), the anisotropic area form \(d\mu_{F}\) and the anisotropic perimeter of \(\Sigma_{t}\) evolve according to_ \[\partial_{t}d\mu_{F}=\eta H_{F}d\mu_{F}\left(=d\mu_{F}\text{ if }\eta=\frac{1}{H_{F}}\right),\] \[\frac{d}{dt}|\Sigma_{t}|_{F}=\int_{\Sigma}\eta H_{F}d\mu_{F}\left(=|\Sigma_{t}|_{F}\text{ if }\eta=\frac{1}{H_{F}}\right).\] The smooth existence and convergence results for the inverse anisotropic mean curvature flow starting from a star-shaped and strictly \(F\)-mean convex hypersurface have been proved by Andrews for \(n=1\) and Xia for \(n\geq 2\). **Theorem 2.2** ([1, 19]).: _Suppose \(\Sigma_{0}\) is smooth, closed, star-shaped and strictly \(F\)-mean convex. Then there exists a unique, smooth, strictly \(F\)-mean convex solution \(X(\cdot,t)\) to (2.5) for \(t\in[0,\infty)\). Moreover, the rescaled hypersurfaces \(e^{-\frac{t}{n}}X(\cdot,t)\) converge exponentially fast to \(\alpha_{0}\mathcal{W}\) in the \(C^{\infty}\) topology, where \(\alpha_{0}=\left(|\Sigma_{0}|_{F}/|\mathcal{W}|_{F}\right)^{\frac{1}{n}}\)._ ## 3.
Proof of Theorem 1.1 and 1.2 Let \(Q(\Sigma)\) be the functional for smooth, star-shaped hypersurfaces \(\Sigma=\partial\Omega\) in \(\mathbb{R}^{n+1}\), defined by \[Q(\Sigma)=\left|\Sigma\right|_{F}^{-1-\frac{1}{n}}\left(\int_{\Sigma}F^{0}(x)d \mu_{F}-\mathrm{Vol}(\Omega)\right). \tag{3.1}\] **Proposition 3.1**.: _Let \(\Sigma_{0}=\Sigma\) be a smooth, closed, star-shaped and strictly \(F\)-mean convex hypersurface, \(\Sigma_{t}\) be the solution of the inverse anisotropic mean curvature flow (2.5). Then the quantity \(Q(e^{-\frac{t}{n}}\Sigma_{t})\) is monotonically nonincreasing along the flow. Moreover, \(\frac{d}{dt}Q(e^{-\frac{t}{n}}\Sigma_{t})=0\) if and only if \(\Sigma_{t}\) is a rescaling of the Wulff shape \(\mathcal{W}\)._ Proof.: It is easy to see \(Q\) is invariant under rescaling: \(Q(\alpha\Sigma)=Q(\Sigma)\). Let \(\Sigma_{t}=\partial\Omega_{t}\). Then by Lemma 2.1, we have \[\begin{split}&\frac{d}{dt}Q(e^{-\frac{t}{n}}\Sigma_{t})=\frac{d}{ dt}Q(\Sigma_{t})\\ =&-(1+\frac{1}{n})\left|\Sigma_{t}\right|_{F}^{-1- \frac{1}{n}}\left(\int_{\Sigma_{t}}F^{0}(x)d\mu_{F}-\mathrm{Vol}(\Omega_{t}) \right)\\ &+\left|\Sigma_{t}\right|_{F}^{-1-\frac{1}{n}}\left(\int_{\Sigma_ {t}}\left(DF^{0}(x)\cdot x_{t}+F^{0}(x)\right)d\mu_{F}-\int_{\Sigma_{t}}\frac {1}{H_{F}}\nu_{F}\cdot\nu d\mu\right)\\ =&\left|\Sigma_{t}\right|_{F}^{-1-\frac{1}{n}}\left( -\frac{1}{n}\int_{\Sigma_{t}}F^{0}(x)d\mu_{F}+(1+\frac{1}{n})\mathrm{Vol}( \Omega_{t})+\int_{\Sigma_{t}}\left(DF^{0}(x)\cdot\nu_{F}-1\right)\frac{1}{H_{F }}d\mu_{F}\right).\end{split} \tag{3.2}\] By (2.1) and the anisotropic Cauchy-Schwarz inequality, we see \[DF^{0}(x)\cdot\nu_{F}-1\leq F(DF^{0}(x))F^{0}(DF(\nu))-1=0, \tag{3.3}\] \[(n+1)\mathrm{Vol}(\Omega_{t})=\int_{\Sigma_{t}}x\cdot\nu d\mu\leq\int_{\Sigma _{t}}F^{0}(x)F(\nu)d\mu=\int_{\Sigma_{t}}F^{0}(x)d\mu_{F}. \tag{3.4}\] Hence \(\frac{d}{dt}Q(e^{-\frac{t}{n}}\Sigma_{t})\leq 0\). If \(\frac{d}{dt}Q(e^{-\frac{t}{n}}\Sigma_{t})=0\), then equality is achieved for those anisotropic Cauchy-Schwartz inequalities appeared in (3.3) and (3.4), which implies \(x=F^{0}(x)DF(\nu)=F^{0}(x)\nu_{F}\). Taking the covariant derivative induced by \(\hat{g}\) on \(x=F^{0}(x)\nu_{F}\), we see \(\hat{\nabla}_{i}F^{0}(x)\nu_{F}\in T_{x}(\Sigma_{t})\), \(\forall 1\leq i\leq n\), and hence \(F^{0}(x)\) is a constant on \(\Sigma_{t}\), which means \(\Sigma_{t}\) is a rescaling of the Wulff shape \(\mathcal{W}\). Proof of Theorem 1.1.: First we consider the case when \(\Sigma\) is strictly \(F\)-mean convex. By Proposition 3.1, let \(\Sigma_{t}\) be the solution of the inverse anisotropic mean curvature flow (2.5) starting from \(\Sigma\), then \[Q(\Sigma)\geq Q(e^{-\frac{t}{n}}\Sigma_{t}),\ \forall t\geq 0.\] By the convergence result in Theorem 2.2, the rescaled flow \(e^{-\frac{t}{n}}\Sigma_{t}\) converges smoothly to a rescaling of the Wulff shape \(\mathcal{W}\). Note that on \(\mathcal{W}\) we have \(|\mathcal{W}|_{F}=\int_{\mathcal{W}}F^{0}(x)d\mu_{F}=(n+1)\mathrm{Vol}(L)\), where \(\partial L=\mathcal{W}\). Then \[Q(\Sigma)\geq\lim_{t\to\infty}Q(e^{-\frac{t}{n}}\Sigma_{t})=Q(\alpha_{0} \mathcal{W})=Q(\mathcal{W})=(n+1)^{-1-\frac{1}{n}}n\mathrm{Vol}(L)^{-\frac{1}{ n}}, \tag{3.5}\] which is the inequality we asserted. By Proposition 3.1, we also see that equality holds if and only if \(\Sigma\) is a rescaling of the Wulff shape \(\mathcal{W}\). In the case when \(\Sigma\) is only \(F\)-mean convex, the inequality follows from the approximation. 
To prove the equality condition, we use a similar argument in [9]: Suppose that \(\Sigma\) is a star-shaped and \(F\)-mean convex hypersurface which attains the equality of (1.3). Let \(\Sigma^{+}=\{x\in\Sigma\ :\ H_{F}(x)>0\}\). We can find the smallest \(\alpha\) such that \(\Omega\subset\alpha L\), then \(\alpha\mathcal{W}\) touches \(\Sigma\) at some point \(p\in\Sigma^{+}\) tangentially with \(F^{0}(p)=\alpha<\infty\) and hence \(\Sigma^{+}\) is open and non-empty. Pick any \(\eta\in C_{0}^{2}(\Sigma^{+})\) compactly supported in \(\Sigma^{+}\), let \(\Sigma_{\epsilon}\) be the smooth family of variational hypersurfaces generated by the vector field \(V=\eta\nu_{F}\), \(\Sigma_{0}=\Sigma\), then \(\Sigma_{\epsilon}\) remains to be \(F\)-mean convex when \(|\epsilon|\) is sufficiently small. We have \[Q(\Sigma_{\epsilon})\geq c,\ Q(\Sigma_{0})=c,\ \text{where}\ c=\frac{n}{(n+1)^{1+ \frac{1}{n}}}\text{Vol}(L)^{-\frac{1}{n}}.\] Thus by similar calculation in (3.2), \[0= \frac{d}{d\epsilon}\big{|}_{\epsilon=0}Q(\Sigma_{\epsilon})\] \[= |\Sigma|_{F}^{-1-\frac{1}{n}}\int_{\Sigma^{+}}\eta\cdot\left[ \frac{H_{F}}{n|\Sigma|_{F}}\left((n+1)\text{Vol}(\Omega)-\int_{\Sigma}F^{0}(x )d\mu_{F}\right)+\left(DF^{0}(x)\cdot\nu_{F}-1\right)\right.\] \[\left.+H_{F}\left(F^{0}(x)-\frac{\int_{\Sigma}F^{0}d\mu_{F}}{| \Sigma|_{F}}\right)\right]d\mu_{F},\] \[= |\Sigma|_{F}^{-1-\frac{1}{n}}\int_{\Sigma^{+}}\eta\cdot\left[H_{ F}\left(F^{0}(x)-\frac{|\Sigma|_{F}^{\frac{1}{n}}}{|\mathcal{W}|_{F}^{\frac{1}{n}}} \right)+\left(DF^{0}(x)\cdot\nu_{F}-1\right)\right]d\mu_{F},\] where we have used that equality of (1.3) is achieved on \(\Sigma\) and \((n+1)\text{Vol}(L)=|\mathcal{W}|_{F}\) in the last equality. Since \(\eta\) is arbitrary, we have \[H_{F}\left(F^{0}(\cdot)-\frac{|\Sigma|_{F}^{\frac{1}{n}}}{|\mathcal{W}|_{F}^{ \frac{1}{n}}}\right)+\left(DF^{0}(\cdot)\cdot\nu_{F}-1\right)=0\] everywhere on \(\Sigma^{+}\). Recall that \(p\in\Sigma^{+}\) is the point where \(\Sigma\) touches \(F^{0}(p)\mathcal{W}\) tangentially, so we have \(DF^{0}(x)\cdot\nu_{F}\big{|}_{x=p}=1\), then \[F^{0}(p)=\frac{|\Sigma|_{F}^{\frac{1}{n}}}{|\mathcal{W}|_{F}^{\frac{1}{n}}},\] hence \[H_{F}\left(F^{0}(x)-F^{0}(p)\right)\geq H_{F}\left(F^{0}(x)-\frac{|\Sigma|_{F }^{\frac{1}{n}}}{|\mathcal{W}|_{F}^{\frac{1}{n}}}\right)+\left(DF^{0}(x)\cdot \nu_{F}-1\right)=0,\ \forall x\in\Sigma^{+}.\] Since \(H_{F}>0\) and \(F^{0}(x)\leq F^{0}(p)\) on \(\Sigma^{+}\), we see \(F^{0}(x)\equiv F^{0}(p)=\alpha\) on \(\Sigma^{+}\). As this is a closed condition, and since on \(\Sigma\cap\alpha\mathcal{W}\) the anisotropic mean curvature is positively bounded from below by \(\frac{n}{\alpha}\), we conclude that \(\Sigma^{+}\) is closed. Therefore \(\Sigma=\Sigma^{+}=\{x\in\mathbb{R}^{n+1}:F^{0}(x)=\alpha\}=\alpha\mathcal{W}\). Proof of Theorem 1.2.: For \(p\geq 1\), we can use the Holder inequality and Theorem 1.1 to get \[\left(\int_{\Sigma}\left(F^{0}(x)\right)^{p}d\mu_{F}\right)^{\frac{1}{p}} \left(|\Sigma|_{F}\right)^{1-\frac{1}{p}}\geq\int_{\Sigma}F^{0}(x)d\mu_{F} \geq c|\Sigma|_{F}^{1+\frac{1}{n}}+\text{Vol}(\Omega), \tag{3.6}\] where \(c=\frac{n}{(n+1)^{1+\frac{1}{n}}}\text{Vol}(L)^{-\frac{1}{n}}\). 
Since \(na+b\geq(n+1)a^{\frac{n}{n+1}}b^{\frac{1}{n+1}},\ \forall\ a,b\geq 0\), we see \[\frac{\left(\int_{\Sigma}\left(F^{0}(x)\right)^{p}d\mu_{F}\right)^{\frac{1}{p} }}{|\Sigma|_{F}^{\frac{1}{p}}\text{Vol}(\Omega)^{\frac{1}{n+1}}}\geq c\frac{| \Sigma|_{F}^{\frac{1}{n}}}{\text{Vol}(\Omega)^{\frac{1}{n+1}}}+\frac{\text{ Vol}(\Omega)^{\frac{n}{n+1}}}{|\Sigma|_{F}}\geq\frac{(n+1)}{n^{\frac{n}{n+1}}}c^{ \frac{n}{n+1}}=\text{Vol}(L)^{-\frac{1}{n+1}}, \tag{3.7}\] which is the inequality for \(p\)-momentum. By the equality condition of Theorem 1.1, we see that (1.6) attains equality only for the rescalings of the Wulff shape. ### Acknowledgments The authors would like to thank Professor Yong Wei for helpful discussions. The authors were supported by National Key Research and Development Program of China 2021YFA1001800.
2302.03430
Planetary nurseries: vortices formed at smooth viscosity transition
Excitation of the Rossby wave instability and the development of a large-scale vortex at the outer dead zone edge of protoplanetary discs is one of the leading theories that explain the horseshoe-like brightness distributions in transition discs. Formation of such vortices requires a relatively sharp viscosity transition. Detailed modelling, however, indicates that the viscosity transition at the outer edge of the dead zone is relatively smooth. In this study, we present 2D global, non-isothermal, gas-dust coupled hydrodynamic simulations to investigate the possibility of vortex excitation at smooth viscosity transitions. Our models are based on a recently postulated scenario, wherein the recombination of charged particles on the surface of dust grains results in a reduced ionisation fraction and, in turn, reduced turbulence due to the magnetorotational instability. Thus, the alpha-parameter for the disc viscosity depends on the local dust-to-gas mass ratio. We found that the smooth viscosity transitions at the outer edge of the dead zone can become Rossby unstable and form vortices. A single large-scale vortex develops if the dust content of the disc is well coupled to the gas; however, multiple small-scale vortices ensue for the case of less coupled dust. As both types of vortices are trapped at the dead zone outer edge, they provide sufficient time for dust growth. The solid content collected by the vortices can exceed several hundred Earth masses, while the dust-to-gas density ratio within often exceeds unity. Thus, such vortices function as planetary nurseries within the disc, providing ideal sites for the formation of planetesimals and eventually planetary systems.
Zs. Regaly, K. Kadam, D. Tarczay-Nehez
2023-02-07T12:31:55Z
http://arxiv.org/abs/2302.03430v1
# Planetary nurseries: vortices formed at smooth viscosity transition ###### Abstract Excitation of Rossby wave instability and development of a large-scale vortex at the outer dead zone edge of protoplanetary discs is one of the leading theories that explains horseshoe-like brightness distribution in transition discs. Formation of such vortices requires a relatively sharp viscosity transition. Detailed modelling, however, indicates that viscosity transitions at the outer edge of the dead zone is relatively smooth. In this study, we present 2D global, non-isothermal, gas-dust coupled hydrodynamic simulations to investigate the possibility of vortex excitation at smooth viscosity transitions. Our models are based on a recently postulated scenario, wherein the recombination of charged particles on the surface of dust grains results in reduced ionisation fraction and in turn the turbulence due to magnetorotational instability. Thus, the \(\alpha\)-parameter for the disc viscosity depends on the local dust-to-gas mass ratio. We found that the smooth viscosity transitions at the outer edge of the dead zone can become Rossby unstable and form vortices. A single large-scale vortex develops if the dust content of the disc is well coupled to the gas, however, multiple small-scale vortices ensue for the case of less coupled dust. As both type of vortices are trapped at the dead zone outer edge, they provide sufficient time for dust growth. The solid content collected by the vortices can exceed several hundred Earth masses, while the dust-to-gas density ratio within often exceeds unity. Thus, such vortices function as planetary nurseries within the disc, providing ideal sites for formation of planetesimals and eventually planetary systems. keywords: accretion, accretion discs -- hydrodynamics -- methods: numerical -- protoplanetary discs ## 1 Introduction Brightness asymmetries of transitional discs (an evolved phase of protoplanetary discs, Strom et al., 1989; Skrutskie et al., 1990) observed in the past decades are a tell-tale sign of large-scale vortices, (see, e.g., Brown et al., 2009; Andrews et al., 2009; Hughes et al., 2009; Isella et al., 2010; Andrews et al., 2011; Mathews et al., 2012; Tang et al., 2012; Fukagawa et al., 2013; Casassus et al., 2013; van der Marel et al., 2013; Perez et al., 2014; Hashimoto et al., 2015; Casassus et al., 2015; Wright et al., 2015; Momose et al., 2015; Marino et al., 2015; Perez et al., 2018; Casassus & Perez, 2019; Francis & van der Marel, 2020; Facchini et al., 2020; Boehler et al., 2021; Kurtovic et al., 2021; Varga et al., 2021). As of today, we know several competing theories that can explain the origin of a large-scale vortex in protoplanetary discs: the baroclinic instability (Kliar & Bodenheimer, 2003; Lyra & Klahr, 2011; Raettig et al., 2013); the convective overstability (Kliar & Hubbard, 2014; Lyra, 2014); the vertical shear instability (Richard et al., 2016) or the zombie vortex instability (Marcus et al., 2015); and finally Rossby wave instability (see, a review in Lovelace & Romanova, 2014 and references therein). In this study, we focus on the phenomenon of Rossby wave instability (RWI) in protoplanetary discs. RWI can be excited at a steep pressure gradient, which can form at the edges of a gap opened by an embedded planet (Li et al., 2005) or at the regions where the disc viscosity changes considerably (Lovelace et al., 1999; Li et al., 2000). 
Gammie (1996) proposed the existence of a radially extended region in the disc midplane called the "dead zone", where the accretion is widely reduced. The temperature at a distance of a few au from the central star is not high enough to collisionally ionise the gas, while the gas is dense enough to shield the external sources of ionisation such as the Galactic cosmic rays. As a consequence of the low ionisation rate, the disc midplane is magnetically dead, resulting in low turbulence generation via magnetorotational instability (MRI). Thus, the viscous accretion takes place only in the tenuous disc atmosphere. At the inner or outer edge of such a dead zone, the mismatch in the accretion rate due to the viscosity transition can result in steep pressure bumps, which become Rossby unstable and develop anticyclonic vortices. In the past ten years, several studies have been dedicated to explaining the arc-like asymmetries observed in transitional discs (see Andrews, 2020 and references therein). According to the thorough analysis of Regaly et al. (2017) both the gap edge and dead zone edge scenarios can explain the observations. A relatively concentrated (\(\lesssim 90^{\circ}\)) strong azimuthal brightness asymmetry favours the gap edge, while an azimuthally extended brightness asymmetry (\(\gtrsim 180^{\circ}\)) dead zone edge formation scenario, respectively. Note that both formation scenarios require a relatively low disc viscosity to maintain the vortex for enough long time to be observable. The disc dead zone provides a low viscosity, however, the disc viscosity is presumably high at the gap edge due to the low gas density in the outer disc. Moreover, as recently shown by Hammer et al. (2017) the large-scale vortex formation at a planetary gap edge requires a fast planet growth. A caveat for the dead zone edge scenario is that if the width of the viscosity transition at its outer edge is wider than two times the local pressure scale height, no RWI excitation occurs (Lyra et al., 2009; Regaly et al., 2012). However, detailed modelling of disc suggests that the width of the dead zone outer edge is wider than the above criteria (Dzyurkevich et al., 2013; Kadam et al., 2019). Note, however, that a smooth change in gas resistivity does not imply a smooth transition in turbulent stress, for which case a large-scale vortex can form although the resistivity transition is smooth (Lyra et al., 2015). Thus, further investigations of the dead zone edge scenario are warranted. The Shakura-Sunyaev \(\alpha\) viscosity model postulates a simple scaling relation between the gas viscosity and the efficiency (described by the magnitude of \(\alpha\) parameter) of angular momentum transport (Shakura & Sunyaev, 1973). In summary, an accretion disc coupled dynamically to a weak magnetic field is subject to MRI turbulence and the resulting in outward angular momentum transport can be described as a diffusive process with an effective \(\alpha\)-viscosity (Balbus & Hawley, 1991). The sustenance of MRI requires a sufficient level of ionisation within the disc, however, small-sized dust grains tend to adsorb electrons and ions. 
This can strongly reduce the conductivity of the gas, which can inhibit MRI in the vicinity of dust particles (Sano et al., 2000; Ilgner & Nelson, 2006; Okuzumi, 2009; Johansen et al., 2011) Based on the implicit assumption that the disc viscosity depends on the ionisation fraction controlled by the amount of dust concentration, Dullemond & Penzin (2018) constructed a simple parametric-\(\alpha\) viscosity model. With 1D perturbation analysis, they demonstrated that such a parametric-\(\alpha\) model leads to ring formation via viscous ring-instability (VRI). Note that Wunsch et al. (2005) have also shown that the disc dead zone can be split into rings due to the positive feedback between the thickness of the dead zone and the mass accumulation rate. The positive feedback loop that leads to VRI is as follows: 1) an initial increase in the dust density reduces the local MRI viscosity, which in turn leads to the accumulation of gas in its vicinity due to the mismatch in the mass transfer rate; 2) as the dust particles drift towards the resulting pressure maxima, they amplify the initial perturbation, leading to spontaneous formation of concentric rings. Thus, VRI is suggested to be a potential mechanism behind the grand design ring structures often observed in the sub-millimetre emissions of protoplanetary discs (Andrews et al., 2018; Dullemond et al., 2018). However, 2D hydrodynamic simulations of Regaly et al. (2021) revealed that such rings formed via VRI are typically Rossby unstable and tend to fragment into a cascade of small-scale vortices. They also show that grand design rings can only be stable patterns (Rossby stable) if the minimum of the reduced disc viscosity is 90 per cent that of global viscosity. Considering the above-mentioned issues, we study RWI excitation for a smooth dead zone edge, where the width of the viscosity transition is above two times the local pressure scale height. We combined the standard dead zone edge \(\alpha\) model (see, e.g., Regaly et al., 2012), with a parametric-\(\alpha\) viscosity prescription wherein the disc viscosity depends on the local enhancement or depression of the dust content. We present 2D non-isothermal, gas-dust coupled hydrodynamic simulations of protoplanetary discs that propose a new possibility of RWI excitation at smooth viscosity transitions. The paper is structured as follows. In Section 2, the applied numerical methods for modelling RWI excitation in a dusty protoplanetary disc assuming four different parametric-\(\alpha\) prescriptions are presented. Results and discussions are presented in Section 3 and Section 4, respectively. The paper closes with our conclusion and some concluding remarks in Section 5. ## 2 Numerical Methods ### Coupled gas-dust Hydrodynamics To study the excitation of Rossby wave instability and subsequent vortex evolution, we run coupled gas-dust hydrodynamic global disc simulations with a adiabatic thermodynamics. The dust is assumed to be a pressureless fluid. We use GFARGO2 for this investigation, which is our extension to GFARGO, a GPU supported version of the FARGO code (Masset, 2000). 
The dynamics of gas and solid components are described by the following equations: \[\frac{\partial\Sigma_{\rm g}}{\partial t}+\mathbf{\nabla}\cdot(\Sigma_{\rm g}\mathbf{v})=0, \tag{1}\] \[\frac{\partial\mathbf{v}}{\partial t}+(\mathbf{v}\cdot\mathbf{\nabla})\mathbf{v}=-\frac{1}{\Sigma_{\rm g}}\mathbf{\nabla}P-\mathbf{\nabla}\Phi+\mathbf{\nabla}\cdot\mathbf{T}-\frac{1}{\Sigma_{\rm g}}\mathbf{f}_{\rm drag}, \tag{2}\] \[\frac{\partial e}{\partial t}+\mathbf{\nabla}\cdot(e\mathbf{v})=-P\mathbf{\nabla}\cdot\mathbf{v}+Q_{\nu}+Q_{\pm}, \tag{3}\] \[\frac{\partial\Sigma_{\rm d}}{\partial t}+\mathbf{\nabla}\cdot(\Sigma_{\rm d}\mathbf{u})=-\mathbf{\nabla}\cdot\mathbf{j}, \tag{4}\] \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\mathbf{\nabla})\mathbf{u}=-\mathbf{\nabla}\Phi+\frac{1}{\Sigma_{\rm d}}\mathbf{f}_{\rm drag}-\mathbf{u}\mathbf{\nabla}\cdot\mathbf{j}, \tag{5}\] where \(\Sigma_{\rm g}\), \(\Sigma_{\rm d}\), and \(\mathbf{v}\), \(\mathbf{u}\) are the surface mass densities and velocities of gas and solid (being either dust particles or pebbles), respectively. \(e\) is the thermal energy density of the gas (per surface area). \(\mathbf{T}\) is the viscous stress tensor of the gas, see details in Section 2.2. \(\mathbf{f}_{\rm drag}\) and \(\mathbf{j}\) are the drag force exerted by the gas on the dust and the diffusive flux, respectively, see details in Section 2.3. Note that the last term in Equation (5) accounts for the microscopical stochastic kicks from the turbulent gas onto the solid component (Benitez-Llambay et al., 2019; Weber et al., 2019). We emphasise that the dust back-reaction is taken into account, as Equation (2) includes a term proportional to the drag force. The adiabatic gas pressure is given as \[P=(\gamma-1)e, \tag{6}\] where \(\gamma=1.4\) is the adiabatic index of the gas, assumed to be molecular hydrogen (H\({}_{2}\)). The treatment of thermodynamics can have a significant impact on vortex evolution in a Shakura & Sunyaev (1973) \(\alpha\)-disc. As the local sound speed is connected to the internal thermal energy (\(c_{\rm s}=\sqrt{\gamma P/\Sigma}\)), the viscosity of the gas is different from that in the locally isothermal case. This affects the onset of vortex formation, so that vortices tend to form earlier, while the vortex strength and lifetime are also affected (Tarczay-Nehez et al., 2020). The total gravitational potential of the disc in Equation (2) is \[\Phi(r,\phi)=-G\frac{M_{\rm s}}{r}+\Phi_{\rm ind}(r,\phi), \tag{7}\] where the first term is the gravitational potential of the star in a given cell with radial distance \(r\). Since the equations are solved in the cylindrical coordinate system centred on the star, Equation (7) includes the so-called indirect potential, \(\Phi_{\rm ind}(r,\phi)\), arising due to the displacement of the barycentre of the system caused by any disc non-axisymmetry (see its importance in, e.g., Mittal & Chiang, 2015; Zhu & Baruteau, 2016; Regaly & Vorobyov, 2017).
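As a rough illustration of how this indirect term can be evaluated on the polar grid (the discretised expression actually used is quoted in the next paragraph), a minimal sketch is given below; the array names, the cell-mass construction and the grid layout are assumptions of this example rather than part of GFARGO2.

```python
import numpy as np

def indirect_potential(sigma, r, phi, dr, dphi, G=1.0):
    """Indirect potential Phi_ind = r_vec . a_star, with a_star the acceleration of
    the star caused by the (non-axisymmetric) disc material.

    sigma : (Nr, Nphi) surface density;  r, phi : 1D cell-centre coordinates
    dr    : 1D radial cell widths;       dphi   : scalar azimuthal cell width
    """
    R, PHI = np.meshgrid(r, phi, indexing="ij")
    dm = sigma * R * dr[:, None] * dphi                 # mass of each polar grid cell
    ax = G * np.sum(dm * np.cos(PHI) / R**2)            # x-acceleration of the star
    ay = G * np.sum(dm * np.sin(PHI) / R**2)            # y-acceleration of the star
    return R * (np.cos(PHI) * ax + np.sin(PHI) * ay)    # Phi_ind evaluated at every cell
```

The double sum of the discretised formula collapses into the two global reductions \(a_x\) and \(a_y\), so the cost of this term is linear in the number of grid cells.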
The indirect potential is calculated as \[\Phi_{\rm ind}(r,\phi)=\mathbf{r}\cdot G\int\frac{\mathrm{d}m(\mathbf{r}^{\prime})}{|\mathbf{r}^{\prime}|^{3}}\mathbf{r}^{\prime}, \tag{8}\] which in the cylindrical coordinate system can be given as \[\Phi_{\rm ind}(r_{j},\phi_{k})=r_{j}\cos(\phi_{k})\sum_{j^{\prime},k^{\prime}}G\frac{m_{j^{\prime},k^{\prime}}}{r_{j^{\prime}}^{2}}\cos(\phi_{k^{\prime}})+r_{j}\sin(\phi_{k})\sum_{j^{\prime},k^{\prime}}G\frac{m_{j^{\prime},k^{\prime}}}{r_{j^{\prime}}^{2}}\sin(\phi_{k^{\prime}}), \tag{9}\] where \(m_{j,k}\) and \(r_{j}\), \(\phi_{k}\) are the mass and the cylindrical coordinates of grid cell \(j,k\). As one can see, the disc self-gravity is neglected in this investigation; however, its effect on RWI and subsequent vortex formation might be crucial (see, e.g., Regaly & Vorobyov, 2017; Tarczay-Nehez et al., 2022 and references therein). The heating due to the viscous stresses, \(Q_{\nu}\), is taken into account according to D'Angelo et al. (2003). Heating and cooling mechanisms such as stellar and background irradiation are modelled implicitly via \(Q_{\pm}\). We use the \(\beta\)-cooling/heating prescription of Les & Lin (2015), \[Q_{\pm}=-\frac{1}{\tau_{\rm c}}\left(e-e^{(0)}\frac{\Sigma}{\Sigma^{(0)}}\right), \tag{10}\] where \(\tau_{\rm c}\) is the cooling time connected with the \(\beta\)-parameter as \[\tau_{\rm c}=\frac{\beta}{\Omega}, \tag{11}\] where \(\beta\) is a measure of the cooling time in terms of the local dynamical timescale, \(\Omega^{-1}\). To let the gas release/gain its internal energy we assume \(\beta=10\), i.e., the e-folding timescale of disc cooling/heating towards the initial temperature is ten local dynamical times (\(10\,\Omega^{-1}\)) at all distances (see its importance on vortex evolution in Tarczay-Nehez et al., 2020; Rometsch et al., 2021). Note that the heating due to dust-gas friction is not included in our model. In Equation (10), \(\Sigma^{(0)}\) and \(e^{(0)}\) correspond to the initial density and energy state of the disc. ### Viscosity prescription The viscous stress tensor of the gas, \(\mathbf{T}\), in Equation (2) is calculated as \[\mathbf{T}=\nu\left(\nabla\mathbf{v}+\nabla\mathbf{v}^{\mathbf{T}}-\frac{2}{3}\nabla\cdot\mathbf{v}\mathbf{I}\right), \tag{12}\] where \(\nu\) is the vertically integrated effective viscosity of the disc, whose components in polar coordinates are calculated according to Masset (2002), and \(\mathbf{I}\) is the identity matrix. We use the \(\alpha\)-prescription of Shakura & Sunyaev (1973) for the disc effective viscosity. In this case \(\nu=\alpha c_{\rm s}^{2}/\Omega\), where the sound speed is \[c_{\rm s}=\sqrt{\frac{\gamma(\gamma-1)e}{\Sigma_{\rm g}}}. \tag{13}\] The parameter \(\alpha\), characterising the strength of the magneto-rotational instability, depends on the local dust-to-gas mass ratio. To model the viscosity transition at the disc dead zone, the background viscosity, \(\alpha_{\rm bg}\), is defined as \[\alpha_{\rm bg}=1-\frac{1}{2}\left(1-\alpha_{\rm mod}\right)\left[1-\tanh\left(\frac{R-R_{\rm dze}}{\Delta R_{\rm dze}}\right)\right], \tag{14}\] where \(\alpha_{\rm mod}\) sets the depth of the viscosity reduction inside the dead zone. The outer edge of the disc dead zone is set to \(R_{\rm dze}=1.5\). Excitation of RWI requires a sharp viscosity transition in the \(\alpha\)-prescription, \(\Delta R_{\rm dze}\leq 2H_{\rm dze}\), where \(H_{\rm dze}=R_{\rm dze}h\) is the disc scale height at the viscosity reduction (Lyra et al., 2009; Regaly et al., 2012); this is sharper than the transition expected to form at the outer dead zone edge (Dzyurkevich et al., 2013).
In our standard models, we assume that \(\Delta R_{\rm dze}=0.2\), which corresponds to \(\Delta R_{\rm dze}=(2+2/3)H_{\rm dze}\). To extend our investigation, we run models with \(\Delta R_{\rm dze}=4H_{\rm dze}\) for certain cases. Note that the total width of the viscosity transition given by Equation (14) is about \(2\Delta R_{\rm dze}\), which corresponds to 0.4 and 0.6 in the standard and extended scenarios, respectively. Additionally, the global viscosity is modified such that a change in the dust or gas concentration alters the viscosity. According to Dullemond & Penzlin (2018) a parametric \(\alpha\)-prescription can be given as \[\alpha=\alpha_{\rm bg}\left(\frac{\Sigma_{\rm d}}{\Sigma_{\rm d}^{(0)}}\right)^{\phi_{\rm d}}\left(\frac{\Sigma_{\rm g}}{\Sigma_{\rm g}^{(0)}}\right)^{\phi_{\rm g}}, \tag{15}\] where \(\Sigma_{\rm g}^{(0)}\) and \(\Sigma_{\rm d}^{(0)}\) are the initial gas and dust densities. Values of \(\alpha\) are limited between the background viscosity, \(\alpha_{\rm bg}=10^{-2}\), and the MRI inactive viscosity, \(\alpha_{\rm dead}=10^{-4}\). The assumption of minimum effective viscosity is consistent with the vertical shear instability generated viscosity (Stoll & Kley, 2014). With regard to the values of \(\phi_{\rm d}\) and \(\phi_{\rm g}\) we investigated five cases, see Table 1.

\begin{table} \begin{tabular}{l c c} \hline \hline model & \(\phi_{\rm d}\) & \(\phi_{\rm g}\) \\ \hline case A & 0 & -1 \\ case B & -1 & 0 \\ case C & -1 & 1 \\ case D & -2 & 1 \\ case E & -1 & -1 \\ \hline \hline \end{tabular} \end{table} Table 1: Investigated parametric–\(\alpha\) models.

### Dust handling The turbulent diffusion of solids is modelled by the so-called gradient diffusion approximation (Morfill & Voelk, 1984; Dubrulle et al., 1995; Takeuchi & Lin, 2002), in which the diffusive flux, \(\mathbf{j}\), is given as \[\mathbf{j}=-D\left(\Sigma_{\rm g}+\Sigma_{\rm d}\right)\nabla\frac{\Sigma_{\rm d}}{\Sigma_{\rm g}+\Sigma_{\rm d}}, \tag{16}\] where the diffusion coefficient of solids is defined according to Youdin & Lithwick (2007) as \[D=\frac{\nu}{1+\mathrm{St}^{2}}. \tag{17}\] Equations (2) and (5) are solved by a two-step method. First, the source terms, i.e. the right-hand sides, are calculated, which is followed by the conventional advection step. For the source term, we use a fully implicit scheme (see details in Stoyanovskaya et al., 2018). With this scheme, the effect of aerodynamic drag can be modelled for dust species that have a stopping time much smaller than the time-step (\(\tau_{\rm S}\ll\Delta t\)). For pebbles that have a large stopping time (\(\tau_{\rm S}\gg\Delta t\)), the method described is applicable as long as crossing orbits are not important for the dynamics. The drag force exerted by the gas on the dust is calculated as \[\mathbf{f}_{\rm drag}=\frac{\mathbf{v}-\mathbf{u}}{\tau_{\rm S}}, \tag{18}\] where \(\tau_{\rm S}={\rm St}/\Omega\) is the stopping time and \({\rm St}\) is the Stokes number of the given solid species. For simplicity, we assume that the solid has a fixed Stokes number. We modelled five different solid species, whose Stokes numbers are in the range \(10^{-5}\leq{\rm St}\leq 10^{-1}\). The dust feedback can destroy a vortex when the local dust-to-gas mass ratio approaches 0.3-0.5, independent of the dust size (Crnkovic-Rubsamen et al., 2015). A vortex may be destroyed when this ratio is as low as 0.1 (Johansen et al., 2004).
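Before moving on to the analysis methods, the viscosity prescription of Equations (14)-(15) can be summarised in a short sketch. This is a minimal illustration, not GFARGO2 code: it assumes that the profile of Equation (14) acts as a dimensionless reduction factor on the maximum value \(\alpha=10^{-2}\) with \(\alpha_{\rm mod}=\alpha_{\rm dead}/\alpha_{\rm bg}\), and the function and parameter names are illustrative.

```python
import numpy as np

ALPHA_BG, ALPHA_DEAD = 1.0e-2, 1.0e-4   # limiting values quoted in the text
R_DZE, DR_DZE = 1.5, 0.2                 # dead zone outer edge and transition width (standard model)

def dead_zone_profile(R, alpha_mod=ALPHA_DEAD / ALPHA_BG):
    """Radial reduction factor of Eq. (14): ~alpha_mod inside the dead zone, ~1 outside."""
    return 1.0 - 0.5 * (1.0 - alpha_mod) * (1.0 - np.tanh((R - R_DZE) / DR_DZE))

def alpha_effective(R, sigma_d, sigma_g, sigma_d0, sigma_g0, phi_d=-1.0, phi_g=1.0):
    """Dust- and gas-dependent alpha of Eq. (15) (phi_d, phi_g as in Table 1, here case C),
    clipped between the dead and background values."""
    alpha = ALPHA_BG * dead_zone_profile(R) \
        * (sigma_d / sigma_d0) ** phi_d * (sigma_g / sigma_g0) ** phi_g
    return np.clip(alpha, ALPHA_DEAD, ALPHA_BG)
```

In this form, a local dust enhancement (\(\Sigma_{\rm d}>\Sigma_{\rm d}^{(0)}\)) lowers \(\alpha\) for the cases with \(\phi_{\rm d}<0\), which is the feedback that drives the ring and vortex formation discussed above.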
Moreover, dust grains concentrate differentially inside the vortex and affect the gas dynamics in different ways, including the vortex morphology (Miranda et al., 2016). Because of these findings and the possibility that the dust-to-gas density ratio can reach unity near the vortex eye, taking into account the dust feedback via the drag term in Eq. (2) is essential. ### Initial and boundary conditions The numerical domain extension is such that \(0.5\leq R\leq 5\) with \(N_{\rm R}=512\) logarithmically distributed radial and \(N_{\phi}=1024\) equidistant azimuthal cells. At both the inner and outer edges, the velocity components and the density of the gas are damped to the initial values according to the method described in de Val-Borro et al. (2006). The initial distribution of gas is set to a power-law function of \(R\) as \[\Sigma_{\rm g}^{(0)}=\Sigma_{0}R^{-1}. \tag{19}\] The disc self-gravity is neglected, therefore \(\Sigma_{0}\) is a free parameter in our model. The initial dust-to-gas mass ratio is set uniformly to \(\Sigma_{\rm d}^{(0)}/\Sigma_{\rm g}^{(0)}=10^{-2}\). Since the dust back-reaction is taken into account, the radial and azimuthal velocity components of the dust at the initial state are taken from Garate et al. (2020). Assuming that the disc initial temperature is proportional to \(R^{-1}\), the initial energy density is given as \[e^{(0)}=\frac{1}{\gamma-1}\Sigma_{\rm g}^{(0)}\,h_{\rm g}^{2}R^{-1}, \tag{20}\] where \(h_{\rm g}\) is the aspect ratio of the gaseous disc, which results in a local pressure scale height of \(H_{\rm g}=h_{\rm g}R\). ### Methods of analysis In this section, we describe the methods for the analysis and characterisation of vortices, which are used to gain insight into their formation and evolution within the disc. The first method is identical to the one applied in Regaly and Vorobyov (2017). This method is mainly suitable for the scenario of formation of large-scale vortices and is summarised as follows. First, the gas surface density on a 2D polar grid is normalised by the initial distribution (\(\Sigma_{\rm g}^{(0)}\)). Then the normalised surface density is averaged radially, taking into account rings within a radial distance of \(\pm 3H\) (local pressure scale heights) centred on the density maximum. Finally, the radially averaged azimuthal profiles, \(\delta\Sigma\), generated from each frame throughout the simulation are displayed such that the magnitude of the profile is colour-coded. As a result, the time evolution of the vortex can be inferred. The same process was applied for the dust component as well, which is normalised with respect to \(\Sigma_{\rm d}^{(0)}\). Results of the above-described analysis are shown in Fig. 2, where the time evolution (time is measured in units of the Keplerian orbit at the distance of \(R=1\), which coincides with the vortex centre) of the \(\delta\Sigma\)-profiles of gas (blue) and dust (red) is displayed. Although this method was applied to all models, it primarily gives valuable information in the scenario of a single large-scale vortex. Details on the large-scale vortex evolution can be seen in Section 3.1. In order to identify multiple small-scale vortices in the disc and quantify their properties, we use another method, which is identical to that described in Regaly et al. (2021). Since a vortex forms a local maximum in gas pressure, it rapidly accumulates dust from its surroundings. This creates an azimuthal asymmetry in the surface distribution of the dust-to-gas mass ratio (\(m_{\rm d}/m_{\rm g}\)).
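A schematic version of the first analysis method (the radially averaged, normalised azimuthal profile \(\delta\Sigma\)) is sketched below; the array names and the handling of the \(\pm 3H\) window are illustrative assumptions, not the exact implementation used for Fig. 2.

```python
import numpy as np

def delta_sigma_profile(sigma, sigma0, r, H):
    """Radially averaged azimuthal over-density profile delta-Sigma.

    sigma, sigma0 : (Nr, Nphi) current and initial surface density (gas or dust)
    r             : 1D array of cell-centre radii
    H             : local pressure scale height at the ring of interest
    """
    norm = sigma / sigma0                                    # normalise by the initial distribution
    j0, _ = np.unravel_index(np.argmax(norm), norm.shape)    # radial index of the density maximum
    ring = np.abs(r - r[j0]) <= 3.0 * H                      # rings within +/- 3H of the maximum
    return norm[ring, :].mean(axis=0)                        # radial average -> azimuthal profile
```

Stacking such profiles frame by frame and colour-coding their amplitude reproduces the space-time maps of the kind shown in Fig. 2.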
We thus used the local maxima in the field of \((m_{\rm d}/m_{\rm g})/\langle m_{\rm d}/m_{\rm g}\rangle\), i.e. the dust-to-gas mass ratio normalised by its azimuthal average, for detecting the location of all vortices in the disc at a given time. As the denominator can get unintentionally large in the presence of a vortex, it was azimuthally averaged at the given radius only over the lowest half of the values. A region extending 10 grid cells in both radial and azimuthal directions was assumed to have a local maximum in this field if the difference between the maximum and the minimum value in this region exceeded a certain threshold. The optimal value of this threshold depends on the properties of the vortices formed in the model and needed to be set manually. If any two maxima occurred in close proximity, they were considered to be multiple detections of a single vortex. The associated vortex was then assumed to be located at the maximum with the larger \(\Sigma_{\rm d}\). The closely spaced maxima were eliminated if the distance between them was less than \(0.2\times R\) au and the radial separation \(\Delta R<0.05\) au. In order to calculate aggregate properties of the vortices, it is necessary to find the area occupied by the vortices. The shape or area of a vortex was considered to be an ellipse in the cylindrical coordinate system, centred at its location (see more in e.g. Kida, 1981; Chavanis, 2000). The semi-major and semi-minor axes of such an ellipse were empirically assumed to be functions of the radial position, such that \(a_{\theta}=0.24(R/5)^{0.5}\) and \(a_{R}=0.03(5/R)^{0.8}\), where \(R\) is in units of au. The shape thus defined was congruent with the Rossby vortices formed in the disc and the area was conservative, so that all of the dust accumulated inside a typical vortex was counted. The vortices formed close to the inner disc radius, i.e., at less than \(R=0.59\), were rejected because of the boundary effects. Note that the vortex shape as described here was only used for characterising certain properties, e.g., the dust mass contained within a vortex. These properties are not overly sensitive to the parameters chosen for vortex detection or characterisation, and we find this method to be sufficiently accurate for our purpose. We measured the dust and gas mass (\(m_{\rm g}\) and \(m_{\rm d}\)) within the area of the vortex. Two possible disc masses were investigated for each model by re-scaling the gas surface density at 1 au (\(\Sigma_{\rm g,0}\)) to 2000 and 4500 g cm\({}^{-2}\), which covers the range of the canonical estimates for the MMSN (Adams, 2010). For multiple vortex scenarios, the largest, as well as an average value of \(m_{\rm g}\) and \(m_{\rm d}\), were derived. The vortex centre was assumed to be the location of the highest normalised dust-to-gas ratio. The dust-to-gas volumetric density ratio is calculated by assuming vertical equilibrium, \[\frac{\rho_{\rm d}}{\rho_{\rm g}}=\left(\frac{\Sigma_{\rm d}}{\sqrt{2\pi}H_{\rm d}}\right)\Big/\left(\frac{\Sigma_{\rm g}}{\sqrt{2\pi}H_{\rm g}}\right)=\frac{\Sigma_{\rm d}}{\Sigma_{\rm g}}\frac{H_{\rm g}}{H_{\rm d}}, \tag{21}\] where the gas pressure scale height is \[H_{\rm g}=\frac{c_{\rm s}}{\Omega_{\rm K}}. \tag{22}\] To calculate the dust scale height, we assume size-dependent vertical sedimentation for the dust, in which case \[H_{\rm d}=H_{\rm g}\sqrt{\frac{\alpha}{{\rm St}+\alpha}}. \tag{23}\] The fragmentation size, \(a_{\rm frag}\), is the maximum size of the solid constituents that can be reached by the dust growth process before the grown particles are destroyed via mutual collisions. We calculate the fragmentation size according to Birnstiel et al.
(2012) as \[a_{\rm frag}=\frac{2\Sigma_{\rm g}v_{\rm frag}^{2}}{3\pi\rho_{\rm s}\alpha c_{\rm s }^{2}}, \tag{24}\] with the typical assumptions of fragmentation velocity, \(v_{\rm frag}=10\) m s\({}^{-1}\), and the internal density of the dust aggregate, \(\rho_{\rm s}=1.6\) g cm\({}^{-3}\). The value of \(\alpha\) is self-consistently calculated from simulations according to Eq. (15) and \(\Sigma_{\rm g}\) is calculated using the higher bound for MMSN. In the plots, maximum values of the quantities were global maximum throughout the computational grid. The central values were averaged across the centre of all the vortices. The average value was calculated over the area of all vortices, which gives a lower bound. The simulations conducted in this study correspond to a representative disc, which extends from 0.5 au to 5 au, with the outer boundary of the dead zone set at a radial distance of \(R_{\rm dze}=R_{0}=1\) au. In a typical protoplanetary disc, the dimensions are about an order of magnitude larger, with the outer edge of the dead zone lying at about 15 au. However, since the self-gravity is not considered in our simulations, the disc can be spatially rescaled. In this process, we keep the gas surface density values at 1 au unchanged, i.e., it corresponds to the aforementioned MMSN models. With rescaling of the disc size, physical quantities presented in Figs. 3-6 scale with \(R_{0}\) for mass and the fragmentation size scales with \(\sim 1/\sqrt{R_{0}}\). An appropriate scaling for a typical protoplanetary disc is \(R_{0}\simeq 10\). Note that the total mass of the disc will also change with such a rescaling of the disc size. The strength of a vortex is characterised by its aspect ratio, \(\chi\)(see details in e.g. Kida, 1981; Goodman et al., 1987; Surville & Barge, 2015), such that the vortex reaches its maximum strength at \(\chi_{\rm min}\) and minimum at \(\chi_{\rm max}\) value, respectively. For a large-scale vortex, \(\chi\) is obtained by assuming that the density distribution inside the vortex is elliptical and taking the ratio of semi-major to semi-minor axes (Kida, 1981; Chavanis, 2000). The small-scale vortices are not always elliptical and hence the method of obtaining their aspect ratio is as follows. The maximum in dust-to-gas ratio is found on the grid, which typically corresponds to the strongest vortex in the disc. This is assumed to be the centre of the vortex and centred at this location, contours are drawn at 0.8 of maximum for gas and 0.1 of the maximum for dust. The ratio of the azimuthal to radial extent of these contours is termed as the aspect ratio. In Table 2, the minimum value of the aspect ratio is specified, which represents the strongest phase of a vortex. The last column in this table lists the ratio of total midplane volume density to the Roche density at that radius. The Roche density is the density required for an incompressible fluid in equilibrium to resist tidal disruption while in a synchronous orbit around a star (e.g., Chandrasekhar, 1987). This density may be considered as a threshold criterion for gravitational collapse and growth of dust into gravitationally bound clumps. Note that due to uncertainty in the multiplicative factor, the Roche density is calculated simply as \(M_{\rm s}/R^{3}\). ## 3 Results In summary, although a smooth viscosity transition was assumed at the outer boundary of the dead zone, RWI is excited within 2000 orbits (measured at the outer edge of dead zone) in almost all models, as listed in Table 2. 
Exceptions were models A1, C4, and C5. As inferred earlier, the excitation of RWI can be broadly classified as resulting in one of two outcomes: forming either a single large-scale vortex or multiple small-scale vortices. Fig. 1 depicts these two outcomes with the help of the gas and dust surface density distributions as well as the disc vortensity distributions for models C3 and C2. The normalised vortensity is calculated as \((\nabla\times\mathbf{v}/\Sigma_{\rm g})/(\nabla\times\mathbf{v}^{(0)}/\Sigma_{\rm g}^{(0)})\) for the gas and \((\nabla\times\mathbf{u}/\Sigma_{\rm d})/(\nabla\times\mathbf{u}^{(0)}/\Sigma_{\rm d}^{(0)})\) for the dust component, respectively. The vortensity field shows minima associated with the vortices for both models, which confirms the origin of the vortices in RWI. Note that all models that exhibit RWI excitation show similar extrema in the vortensity field. We will come back to this figure after discussing the outcome of the models listed in Table 2 in the next two sections.

\begin{table} \begin{tabular}{l c c c c c c} \hline & & Small & Large & \(t_{\rm RWI}\) & \(\chi_{\rm min}\) & \\ Mod. & St & vort. & vort. & \((N_{\rm orb})\) & (dust/gas) & \(\rho_{\rm s}/\rho_{\rm Roche}\) \\ \hline **A1** & \(10^{-1}\) & \(\times\) & \(\times\) & - & - & - \\ [MISSING_PAGE_POST] & - & - \\ \hline **D1** & \(10^{-1}\) & \(\times\) & \(\times\) & 50 & 3/10 & 4.4 \\ **D2** & \(10^{-2}\) & \(\times\) & \(\times\) & 100 & 9/9 & \(1.2\times 10^{-1}\) \\ **D3** & \(10^{-3}\) & \(\times\) & \(\times\) & 200 & 8/7 & \(1.3\times 10^{-2}\) \\ **D4** & \(10^{-4}\) & \(\times\) & \(\times\) & 300 & 13/13 & \(9.3\times 10^{-3}\) \\ **D5** & \(10^{-5}\) & \(\times\) & \(\times\) & 350 & 15/15 & \(8.7\times 10^{-3}\) \\ **D5*** & \(10^{-5}\) & \(\times\) & \(\times\) & 1400 & 32/32 & \(6.8\times 10^{-3}\) \\ \hline **E1** & \(10^{-1}\) & \(\times\) & \(\times\) & 50 & 5/9 & 4.4 \\ **E2** & \(10^{-2}\) & \(\times\) & \(\times\) & 100 & 6/7 & \(8.8\times 10^{-2}\) \\ **E3** & \(10^{-3}\) & \(\times\) & \(\times\) & 120 & 8/6 & \(1.1\times 10^{-2}\) \\ **E4** & \(10^{-4}\) & \(\times\) & \(\times\) & 150 & 10/10 & \(1.0\times 10^{-2}\) \\ **E5*** & \(10^{-5}\) & \(\times\) & \(\times\) & 200 & 10/10 & \(9.8\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 2: Summary of the vortex excitation in the models listed in Table 1. The mark “\(\times\)” corresponds to the absence of the given vortex type (multiple small-scale or single large-scale). \(\chi_{\rm min}\) values show the minimum vortex aspect ratio in dust and gas.
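A minimal sketch of the normalised vortensity map used above is given below (gas component shown; the dust component is obtained analogously with \(\mathbf{u}\) and \(\Sigma_{\rm d}\)). The curl is evaluated with simple centred differences and the array names are illustrative.

```python
import numpy as np

def normalised_vortensity(vr, vphi, sigma, vr0, vphi0, sigma0, r, dphi):
    """(curl v / Sigma) normalised by its initial value on an (Nr, Nphi) polar grid.

    The z-component of the curl in polar coordinates is
    (1/r) * d(r*vphi)/dr - (1/r) * d(vr)/dphi.
    """
    def curl_z(vr_, vphi_):
        d_rvphi_dr = np.gradient(r[:, None] * vphi_, r, axis=0)
        d_vr_dphi = np.gradient(vr_, dphi, axis=1)
        return (d_rvphi_dr - d_vr_dphi) / r[:, None]

    vortensity = curl_z(vr, vphi) / sigma
    vortensity0 = curl_z(vr0, vphi0) / sigma0
    return vortensity / vortensity0   # anticyclonic vortices appear as local minima (< 1)
```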
As vortices are formed at slightly different distances and they attract dust deferentially, they approach each other and merge into a single large-scale vortex. A different evolutionary path is taken when the viscosity transition is smoother with \(\Delta R_{\mathrm{J}he}=4H_{\mathrm{J}he}\) and for a smaller Stokes number of \(10^{-5}\) (models A5\({}^{*}\), B5\({}^{*}\), C3\({}^{*}\), and D5\({}^{*}\)). This late formation of single vortex can be considered as RWI excitation with an initial mode number \(m=1\), which occurs in the dust species that are well-coupled with the gas, in combination with a smoother transition. Note that model E is an exception, in which RWI excitation occurs via fracture of a VRI ring and at an earlier time at \(N_{\mathrm{orb}}\approx 200\). The larger initial mode of RWI can be observed in Fig. 2 as a scatter in surface densities, before the merger of individual vortices. A Rossby vortex is efficient in collecting dust due to the formation of a pressure maximum within its eye. As the dust is forced to drift towards the local pressure maximum, the azimuthal contrast in the dust is stronger than in the gas. This is observed in the 2D surface density distribution in Fig. 1, as well as in Fig. 2, where the dust patterns are usually more compact as compared to the gas in the evolution profiles. As seen in Fig. 1, the dust overdensity field also shows greater minima as compared to that for the gas. This effect weakens if the dust is well-coupled with the gas, resulting in nearly identical dust and gas distributions for \(\mathrm{St}\leq 10^{-4}\). After the development of a single large-scale vortex, its azimuthal extension marginally decreases, see, e.g., panels for model A2 in Fig. 2. After reaching a minimum, the vortex starts to grow azimuthally and broaden beyond \(\phi>\pi/2\). The gradual stretching of a vortex occurs due to Keplerian shear, which causes widening of the peak in Fig. 2. This effect will manifest as an extended horseshoe-like pattern in the evolution of both gas and dust profiles (Regaly & Vorobyov, 2017). The gas density distribution of an isolated vortex has been shown to be elliptical in shape (Kida, 1981; Chavanis, 2000). The aspect ratio is a measure of its strength such that a lower value indicates higher strength in terms of vortensity (Kida, 1981; Goodman et al., 1987; Surville & Barge, 2015). A lower value of \(\chi\) represents the strongest phase of the vortex, which is listed for both dust and gas for each simulation in Table 2. In general, \(\chi_{\mathrm{min}}\) in gas and dust are the same for \(\mathrm{St}\leq 10^{-4}\). However, the vortex becomes stronger, i.e., \(\chi_{\mathrm{min}}\) is smaller in dust for \(\mathrm{St}\geq 10^{-3}\) than in gas. Except for model set A, wherein the gaseous component shows small number of \(\chi_{\mathrm{min}}\). Note that for such a low aspect ratio, vortices might be Figure 1: An example of the gas and dust surface density distribution, along with respective vortensity fields for the two cases–a single large-scale vortex (model C3 in the upper panels) and multiple small-scale vortices (model C2 in the lower panels). The gas and dust components as well as vortensity distributions are normalised with respect to their initial values. The white contours mark the vortex aspect ratio in both gas and dust. strongly unstable in 3D, see, e.g., Lesur & Papaloizou (2009). 
Note that in models A5*, B5*, C3* (where \(\Delta R_{\rm{Jae}}=4H_{\rm{dae}}\) is assumed), the vortex can not reach its strongest phase by the end of simulation because of the late excitation of RWI. Considering the last column in Table 2, the central density of a large-scale vortex is much less than Roche density. This suggests that these vortices can not exhibit gravitational collapse, as the self-gravity is insufficient to overcome the tidal disruption by the star. We now discuss the evolution of large-scale vortices in terms of their temporal evolution as shown in Figs. 3 and 4. For each subfigure, the uppermost panel shows the distance of vortex centre from the central star and the second panel shows the mass of dust and gas accumulated in the vortex. Considering the first panel for all models, a vortex drifts only marginally towards the star and stays close to the location of the viscosity transition near \(1R_{0}\). Note, however, that the long-term evolution of viscosity transition itself was not modelled here. Is seen that vortices only slightly drift toward the star and by reaching their maximum strength the direction of the drift changes due to vortex dynamics and erosion. Analysis of the evolution of the mass collected by the vortices shows that a Figure 2: The time evolution of the \(\delta\Sigma\) profiles in all models listed on Table 1. Blue and red panels correspond to the gas and dust azimuthal profiles, respectively. Time is measured by the number of orbits at the dead zone edge. Three patterns cam be recognised both in gas and dust profile evolution: 1) absent of any variation in the profile, because no RWI is excited; 2) chromophore-like pattern, in which case a single large-scale vortex develops whose azimuthal size increases with time; 3) scattered patterns, in which case several small scale vortices develop. In the latter case the scattering occurs because the algorithm used to generate the plots are inadequate for multiple vortices, see more details in the text. Figure 3: Analysis on vortex evolution for models A1, A2, A3, A4, A5, A5\({}^{\circ}\), B3, B4, B5, B5\({}^{\circ}\), C3, where C3*, where only a single large-scale vortex develops. Four panels are shown for each models. Panels are from top to bottom: radial distance of the vortex centre, dust (red) and gas (blue) mass inside the vortex, dust-to-gas mass ratio (green) at the vortex centre, fragmentation size, i.e., the maximum size of solid constituent that can be reached due to dust growth process inside the vortex. The shaded regions in gas-dust mass panels correspond to the maximum/minimum values calculated using the upper/lower bounds of gas density for the MMSN model. The dotted lines in panels e and f show the excitation criterion for triggering streaming instability and the metre size, respectively. Two distinct groups can be identified based on the excitation time of RWI: excitation occurs before and after \(N_{\rm orb}=500\) in the early and late onset group, respectively. large-scale vortex can accumulate several hundred \(R_{0}\times M_{\oplus}\) gas, irrespective of the Stokes number. The gas mass accumulated in a vortex is similar for all models. This value corresponds to about twice the mass of Jupiter in gas, assuming \(R_{0}\simeq 10\) scaling. The dust mass accumulated inside the vortex is inversely proportional to the Stokes number, e.g., as observed in models A2, A3, and A4. The amount of dust collected is between 1 and \(10\,R_{0}\times M_{\odot}\). 
With the \(R_{0}\simeq 10\) scaling, these values correspond to 10 to 100 Earth masses. For model A2, with \(\mathrm{St}=10^{-2}\) the collected dust mass can reach \(10\,R_{0}\times M_{\odot}\), while for other models less dust is accumulated. By applying \(R_{0}\simeq 10\) scaling, model A2 accumulates about a hundred or ten Earth masses. The last two panels in Figs. 3 and 4 show the ratio of central dust-to-gas volume density and the fragmentation size of the dust particles, respectively. We found that the larger the Stokes number, the larger the dust-to-gas density ratio (e.g., models A2, A3, and A4). In general, the dust-to-gas density ratio remains low between \(10^{-2}\) and \(10^{-1}\). A small spread between the maximum and minimum bounds of this ratio indicates a weak vortex, which is unable to concentrate the dust efficiently or a strong coupling between the dust and gas components. However, in models A1 and A2 the dust-to-gas density ratio can exceed unity and thus, excitation of the streaming instability is possible. Note that the highest dust density is almost (except in model A1) always at the vortex centre as the max and centre values overlap. With regards to the growth of solid species, we found that \(a_{\mathrm{frag}}\) lies in the range of \(20-40\times(1/\sqrt{R_{0}})\) cm by about 1000 orbits, independent of the model details. With the \(R_{0}\simeq 10\) scaling of a disc, it corresponds to \(6-12\) cm. The maximum value of \(a_{\mathrm{frag}}\) is measured at the vortex centre, which implies that the dust growth is most efficient at this location. A notable observation for the case E of model parameters (models E3-E5) is that \(a_{\mathrm{frag}}\) initially increases and then gradually decreases as the vortices evolve. The gradual decrease in \(a_{\mathrm{frag}}\) in general, is caused by a marginal decrease in gas density as the simulations proceed. This effect is magnified for the model set E due to its strong dependence on the gas surface density. Finally, we emphasise that all other physical quantities presented in Figs. 3 and 4, do not change significantly beyond 1000 orbits. There are three exceptions-models B5*, C3*, D5*-wherein the vortex is not fully-fledged by the end of simulation due to the late excitation of RWI. At this point we mention the anomalous behaviour of model A1. As this simulation progresses, the dust distribution near the dead zone edge resembles azimuthally elongated streaks and not well-formed vortices. In Fig. 3, the quantities are plotted at the location of the highest dust concentration. The rapid fluctuation of Figure 4: Same as Fig. 4, but for models D3, D4, D5, D5*, E3, E4, E5, E5*, where also a single large-scale vortex develops. this location as well as properties associated with it reflects the dynamic nature of the dust-gas system. We hypothesise that such evolution is caused by strongly concentrated dust. Since we do not model formation of planetesimals, the concentrated dust remains in the disc and manifests as elongated streaks. Such streaks are transiently observed in other models with a large Stokes number, although to a much lesser extent. ### Multiple vortex formation The nature and evolution of the small-scale vortices are remarkably different and much more complex as compared to the large-scale vortices. Small-scale vortices are formed when the dusty rings developed via the viscous ring instability become Rossby unstable and typically breaks up with a large azimuthal mode number (\(m\simeq 10-20\)). 
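One standard way to read off the azimuthal mode number \(m\) at which a ring fragments is to Fourier transform the azimuthal density profile at the ring radius and pick the strongest non-axisymmetric component. The short diagnostic below illustrates this; it is a generic sketch and not taken from the analysis pipeline of this work.

```python
import numpy as np

def dominant_mode(sigma_ring, m_max=32):
    """Dominant azimuthal mode of a 1D profile Sigma(phi) on a uniform phi grid."""
    amp = np.abs(np.fft.rfft(sigma_ring - sigma_ring.mean()))
    m = np.argmax(amp[1:m_max + 1]) + 1   # skip m = 0 (the axisymmetric part)
    return m
```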
Although the resulting vortices show some merger, prompt formation of a single large-scale vortex does not occur. Long-term survival of multiple vortices in the disc opens up the possibility of complex vortex-vortex interactions. An individual vortex typically gathers dust, produces large-scale spiral waves in the dust and migrates inward. The constructive interference between such spiral waves in certain cases may give rise to a cascading effect such that a new generation of vortices is induced at a different radius in the disc. It is also possible that multiple rings first form via viscous ring instability, which in turn become Rossby unstable. Thus, depending on the model parameters, the process of self-sustaining vortices can repeat multiple times and cascade throughout the disc, see more details of these phenomena in Regaly et al. (2021). The \(\delta\Sigma\) profiles show non-uniformity and scattered features in Fig. 2, e.g., in models A1, A2, B1, B2, C1, C2, D1, D2, E1, and E2. These features reflect the presence of multiple vortices in the disc as well as possible shift of the maximum density on the grid from one vortex to another. Thus, we need additional methods of analysis when the disc shows formation of multiple vortices. Figs. 5 and 6 show the evolution of several quantities for the models that develop multiple vortices. These indicators are useful for gaining insight into the bulk behaviour of the vortices formed within the disc and are obtained using methods described in Section 2.5. The first panel of each subfigure in Figs. 5 and 6 shows the number of vortices identified in the disc (\(N_{\rm{vort}}\)), while the second panel shows the radial position of each vortex at a given time. Vortex-vortex mergers result in both the initial steep decline in the number of vortices and eventual gradual decline. Due to a limited vortex merging, \(N_{\rm{vort}}\) stays between 10 and 1 by the end of simulations. With regards to the radial position of small-scale vortices, a tendency to drift inwards is observed. However, as the vortices reach the global pressure maximum at the viscosity transition near 1R\({}_{0}\) and the inward drift stops almost completely. Thus, the small-scale vortices tend to get trapped at the viscosity transition at the outer boundary of the dead zone. For two of the models, C1 and D1 (Fig. 5) vortices can form well outside the dead zone. This phenomenon is similar to vortex cascade described in Regaly et al. (2021), wherein formation of multiple generations of vortices occurs due to con Figure 5: Analysis on vortex evolution for models B1, C1, D1, and E1, where multiple vortices develop. Six panel are shown for each models. Panels are from top to bottom: number of vortices in the disc, radial distances of vortex centres, dust (red) and gas (blue) mass inside the largest vortex, vortex dust (red) and gas (blue) mass content averaged through all vortices, dust-to-gas mass ratio (green) at the vortex centre, fragmentation size, i.e. the maximum size of solid constituent that can be reached due to dust growth process inside the vortex. The shaded regions in gas–dust mass panels correspond to the maximum/minimum values calculated using the upper/lower bounds of gas density for the MMSN model. In models C1 and D1, vortex generation are not concentrated to the viscosity transition (\(R=1\)), rather shows a cascade like feature. structure interference of spiral waves created by vortices that are present at a given radius. Consider the next two panels of Figs. 5 and 6. 
The third panel shows the cumulative mass of both dust and gas accumulated in all the vortices present in the disc at a given time (\(m_{\rm d}\) and \(m_{\rm g}\)), while the fourth panel shows the average mass accumulated by an individual vortex. Note that the upper and lower bounds are obtained from the upper and lower estimates for an MMSN disc (see Section 2.5). For the models that show formation of multiple vortices, the upper bound of material collected by an individual vortex can exceed \(100\times R_{0}\,M_{\oplus}\) for the gas very quickly, which corresponds to about twice the mass of Jupiter for \(R_{0}\simeq 10\) scaling. In all multiple vortex models, the amount of material collected by the largest (strongest) vortex can reach \(100\times R_{0}\,M_{\oplus}\) for the gas very quickly, which corresponds to a couple of Jupiter mass for \(R_{0}\simeq 10\) scaling. The dust content of a vortex typically increases rapidly in the beginning and flattens out with time. In models B1, C1, D1, and E1, a vortex can collect over \(10\times R_{0}\,M_{\oplus}\) of dust, corresponding to a value of \(100\,M_{\oplus}\) for \(R_{0}\simeq 10\) scaling by the end of the simulation. This means that the dust-to-gas mass ratio inside an individual vortex is enhanced by over ten times in general compared to the initial value in the disc. For models B2, C2, D2, and E2, the dust collected by a vortex is below \(10\times R_{0}\,M_{\oplus}\), implying a marginal increase over the initial dust-to-gas mass ratio. With regards the second to last panel in these figures showing the dust-to-gas density ratio, the models can be grouped in the same two classes. In models shown in Fig. 5), the dust-to-gas density ratio increases continuously and can stay well above unity, while the maximum can approach 1000. In models B2, C2, and D2 (shown in Fig. 6) the oscillation of dust-to-gas density reflect destruction of vortices and collection of dust the new ones. In models B2, C2, and D2 (shown in Fig. 6), the dust-to-gas density ratio shows cyclic oscillations due to the occurrence of vortex cascade. Within these oscillations, the minima reflect destruction of vortices as they evolve and the maxima correspond to formation of the next generation of vortices. Another observation of interest is that the maximum dust-to-gas density ratio is not coincident with the average value at the vortex centres. This reflects a differential gathering of dust, i.e., one vortex gathering more dust as compared to the rest. A secondary cause of this difference is the local vortex dynamics, which result in large, asymmetric vortices which show off-centre accumulation of the dust. Similar to the dust-to-gas density ratio, the maximum size that the solid can reach increases monotonically with time. Similar to the behaviour of the dust-to-gas ratio, models B2, C2, and D2, show a trend of varying \(a_{\rm frag}\) because of vortex cascade. The dust can typically grow up to \(a_{\rm frag}\approx 20\times(1/\sqrt{R_{0}})\,{\rm cm}\), which is about half the size that is achieved in the case of large-scale vortices. For \(R_{0}\simeq 10\) scaling this corresponds to about 6 cm. As seen in Table 2, the central density in small-scale vortices can exceed Roche density, but only for the case of large Stokes number. Density exceeding Roche value implies that the collected dust may undergo gravitational collapse, forming more massive, gravitationally bound structures. 
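The comparison with the Roche density reported in Table 2 can be written compactly. Below is a minimal sketch assuming the commonly used estimate \(\rho_{\rm Roche}=9\Omega^{2}/(4\pi G)\) and a Gaussian vertical structure for the midplane density; the exact definitions and constants used in this work may differ, so treat them as assumptions.

```python
import numpy as np

G = 6.674e-8      # cgs
M_SUN = 1.989e33  # g
AU = 1.496e13     # cm

def roche_density(r_au, m_star=M_SUN):
    """Tidal (Roche) density 9*Omega^2 / (4*pi*G) at radius r_au (assumed form)."""
    omega2 = G * m_star / (r_au * AU) ** 3
    return 9.0 * omega2 / (4.0 * np.pi * G)

def midplane_density(sigma, h):
    """Midplane volume density for a Gaussian vertical profile: Sigma / (sqrt(2*pi)*H)."""
    return sigma / (np.sqrt(2.0 * np.pi) * h)

# Gravitational collapse is plausible where midplane_density(...) >= roche_density(...),
# which in these runs happens only for the largest Stokes numbers (Table 2).
```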
## 4 Discussion It is well-established that RWI excitation is inhibited if the half-width of the viscosity transition exceeds the local pressure scale height by twofold (Lyra et al., 2009; Regaly et al., 2012). This is because of the mismatch in mass transfer rate due to varying viscosity is insufficient to sustain a strong pressure gradient. However, as we showed in the previous sections, with a dust-dependent prescription Figure 6: Same as Fig. 6, but for models B2, C2, D2, and E2, where also multiple vortices develop. for viscosity, RWI excitation occurs even when a smooth transition is assumed. We explain the formation of vortices in our simulation with the help of Fig. 7. The figure shows azimuthally averaged profiles of gas and dust surface densities as well as the effective \(\alpha\)-parameter in models C3, C5, and C3*. An enhancement in gas surface density forms in all models due to the mismatch of the accretion rate near the viscosity transition. Since such a profile creates a maximum in gas pressure, it collects dust and the local dust-to-gas mass ratio increases. According to the parametric-\(\alpha\) prescription, see Equation 15, this increasing concentration of dust results in a decrease in the viscosity. The decreased viscosity leads to further enhancement of the gas surface density and a positive feedback loop ensues. As a result, RWI is soon excited despite the initially smooth viscosity transition, with \(\Delta R_{\rm dze}=(2+2/3)H_{\rm dze}\) (model C3) or even \(R_{\rm dze}=4H_{\rm dze}\) (model C3*). The above-described mechanism works effectively in almost all cases for \(\rm St<10^{-2}\) models. However, in C4 and C5 models, the local enhancement in the dust density is weak, namely the density peak in dust is too smooth to effectively sharpen the viscosity transition, as seen in the middle panel of Fig. 7. This is because of that the dust is well coupled to the gas as \(\rm St=10^{-4}\) and \(10^{-5}\), and in model C, the parametric-\(\alpha\) prescription assumes \(\Phi_{\rm g}=1\), for which case the density enhancement in gas is against the local viscosity depression. Note that in these models RWI is not excited. In case of \(\rm St\geq 10^{-2}\) RWI excitation and subsequent vortex cascade occur in the same fashion as was identified in Regaly et al. (2012). Although the positive feedback due to dust-dependent \(\alpha\)-parameter described above acts in all models, the exact qualitative outcome, e.g., if small-scale or large-scale vortices form, depends on the model parameters. These parameters are the exponents \(\phi_{\rm g}\) and \(\phi_{\rm d}\), which determine the attenuation of the disc viscosity as well as the Stokes number, which specifies the coupling properties of the dust species. The process of dust diffusion tends to smear out sharp gradients in dust surface density and weaken a vortex. However, the process of dust drift causes its migration towards a local maximum in gas pressure and enhances its concentration. Ultimately the formation of vortices in a particular model is determined by the balance of these two processes, which work in the opposite direction. The dust particles with a relatively large Stokes number, e.g., models with \(\rm St>10^{-2}\), move rapidly towards the local pressure maxima, while they also suffer less diffusion (see Eq. 17). Since sustenance of small-scale vortices will require preservation of a strong gradients of dust surface density, these conditions are satisfied only at a relatively large Stokes number. 
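The drift-versus-diffusion argument can be made quantitative with standard textbook scalings; the forms below are generic estimates and are not necessarily identical to Eq. (17) of this paper. The radial drift speed of a grain grows with the Stokes number, while its turbulent diffusivity is reduced relative to the gas value, so large-St grains pile up at pressure maxima faster than turbulence can spread them out.

```python
def drift_speed(st, eta_vk):
    """Radial drift speed ~ 2*St/(1 + St^2) * eta*v_K (textbook estimate, assumption)."""
    return 2.0 * st / (1.0 + st ** 2) * eta_vk

def dust_diffusivity(st, nu_gas):
    """Dust diffusivity ~ nu_gas / (1 + St^2) (Youdin & Lithwick-type scaling, assumption)."""
    return nu_gas / (1.0 + st ** 2)

# For St ~ 1e-1 the drift term dominates and sharp dust gradients survive, favouring
# multiple small-scale vortices; for St <= 1e-3 drift is slow and diffusion smears the
# dust, so only a single large-scale vortex can be sustained.
```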
On the other hand, small-sized dust is well coupled to the gas motion and drifts slowly within the disc. Such dust particles cannot maintain gradients required to sustain small-scale vortices and thus, only a single large-scale vortex may form. This trend for Stokes number is indeed observed in our simulations in Table 2. Thus, only a bimodal outcome is possible for the fate of evolving vortices, wherein either a cascade of multiple small-scale vortices occurs or a single large-scale vortex is formed. The phenomenon of differential drift also results in increased Roche densities for large Stokes number as listed in Table 2. If diffusion process dominates the dust dynamics, e.g., model C5 in Fig. 1, the dust is unable to concentrate efficiently at the pressure maximum and formation of a vortex is suppressed. If the dust drift is particularly strong, this may result in evolution similar to model A1. In such a case, a relatively large extent of the disc is prone to streaming instability. Since we do not resolve streaming instability or model planetesimal formation, the dust remains in the disc and this manifests as elongated streaks. Note that diffusion coefficient also depends on the local viscosity, which in turn depends on the exponents \(\phi_{\rm g}\) and \(\phi_{\rm d}\). The exact outcome of a particular combination can only be determined via conducting self-consistent simulations. Both small-scale and large-scale vortices typically originate near the smooth dead zone edge and although some inward migration is observed, they do not migrate into the dead zone. The vortex migration is most notable in models C1 and D1. Considering these two cases with \(R_{0}=10\) scaling, the radial migration speed of a vortex when it is farther away beyond 20 au is approximately 3.3-4.2 \(\rm m\,s^{-1}\). Closer to the dead zone edge at 10 au, the speed is slower at about 1.4-2.1 \(\rm m\,s^{-1}\). For comparison, the drift velocity of dust with \(\rm St=0.1\) is 3.3\(\rm m\,s^{-1}\) at 20 au and 4.7 \(\rm m\,s^{-1}\) at 10 au for an unperturbed \(\alpha\)-disc. Thus, we can conclude that away from the dead one edge, the vortex drift is almost entirely due to the dust drift. The simulations show that in the vicinity of the outer edge of the dead zone, the pressure bump formed is sufficient to slow down and eventually completely halt the inward migration of vortices in all cases. This loitering of the vortices at the dead zone edge potentially has significant consequences for planet formation. Note that since the simulations do not consider disc self-gravity, the role of vortex-disc interactions is minimal and the migration is unlike the type I migration of planets. Another observation of interest is that the dust collected in Figure 7: Radial profiles of the azimuthally averaged surface mass density of the gas (\(\Sigma_{\rm g}\)) and dust (\(\Sigma_{\rm d}\)) and the value of \(\alpha\) parameter in three models C3, C5, and C3*. The profiles are calculated from snapshots taken by 800, 1400, and 1400 orbits at \(R=1\), respectively. Dot-dashed curves show the initial profiles. \(\alpha\) profile has a steep region in models where the local dust-to-gas density ratio grows beyond the initial value of 0.01. RWI is eventually RWI excited in models C3 and C3*. vortices is typically off-centre and this is most notable in the case of small-scale vortices. Similar phenomenon of off-centre dust concentration has been reported by Hammer et al. (2019). 
In their case, presumably by repeated perturbations from the spiral arms of a planet that is present in the disc. For the small-scale vortices in our simulations, the perturbations from other vortices may cause similar asymmetrical dust concentrations. Although detailed behaviour of both the small and large scale vortices in our simulations warrants further exploration, which will be carried out in the future. Merger of vortices is typically observed for the small-scale vortices shortly after their formation, although in most cases, not all vortices in the disc merge to form a single vortex. Once a vortex is formed, it quickly starts attracting dust from its surrounding. A small difference in accumulated dust as compared to its peers may nudge a vortex inwards in the disc. This differential dust gathering, in addition to the fact that small-scale vortices have strong local dust concentrations as well as a small size, may explain how several vortices often co-evolve in the disc. A large-scale vortex may form in the disc in two ways. Several smaller vortices typically with \(m=4\) are formed, which shortly merge into a single large-scale vortex (e.g., D3, D4, D5) or a large-scale vortex gradually emerges at the pressure maximum (e.g., B5*, D5*). We hypothesise that the exact outcome depends on the difference between the relative strength of the dust diffusion as compared to its drift, with the former favouring gradual formation of a large-scale vortex. Figs. 3-5, some general trends can be observed. The small-scale vortices are always much stronger than the large-scale vortices with respect to the midplane ratio of volume density, \(\rho_{d}/\rho_{g}\). The value of \(\rho_{d}/\rho_{g}\) remains below unity for all large-scale vortices, except A2, where it remains constant at unity. On the other hand, for small-scale vortices, this ratio exceeds unity, and in some cases the maximum values approach 1000. The origin of this disparity can be traced back to the balance between the dust drift and its diffusion. Small-scale vortices can be sustained only in the cases where the dust drift dominates, and this results in strong concentrations of dust and an enhanced dust-to-gas mass ratio. An opposite trend in the dust fragmentation size is observed, where \(a_{\rm frag}\) is approximately twice as large inside large-scale vortices as compared to the small-scale vortices. The fragmentation size scales as \(a_{\rm frag}\sim\Sigma_{\rm g}/c_{\rm s}^{2}\) according to Eq. (24). We found no significant difference in the temperature inside the two types of vortices, which implies that the local sound speed is not responsible for observed disparity. However, the gas density in the large-scale vortices is typically about twice as large as compared to the small-scale vortices, which explains the discrepancy in \(a_{\rm frag}\) for the two scenarios. It is known that accretion discs might be a subject of axially symmetric pulsational instability similar to stellar oscillations (Kato, 1978). Since the thermal energy is supplied in part by the viscous dissipation of shear motion and the viscosity depends on the local gas compression via Equation (15), any oscillations in the disc can get amplified. This phenomenon is known as viscous overstability and Latter & Ogilvie (2006) give an analytical criterion for its occurrence in accretion discs. 
Since the disc viscosity in our models is confined between \(10^{-4}\leq\alpha\leq 10^{-2}\) and the disc vertical thickness is around \(H/r\approx 0.05\), viscous overstability does not occur on the long-wavelength limit. However, for short wave-length limit we have to analyse Equation (14) of Latter & Ogilvie (2006) with the assumption of \[\frac{{\rm d}(\nu\Sigma_{\rm g})}{{\rm d}\Sigma_{\rm g}}=\nu(1+\phi_{\rm g}) \left(\frac{\Sigma_{\rm g}}{\Sigma_{\rm g}^{(0)}}\right)^{\phi_{\rm g}}. \tag{25}\] In the limit of zero bulk viscosity for the gas1, the viscous overstability occurs on the wavelength Footnote 1: Being negligible small the gas bulk viscosity, we neglect it. Moreover, at the applied grid resolution, the magnitude of the numerical viscosity of the FARGO algorithm is also negligible compared to the assumed \(\nu=\alpha c_{\rm s}^{2}/\Omega\) effective viscosity. \[\frac{1}{k}\gtrsim\frac{c_{\rm s}}{\Omega}\left(\frac{4}{9(1+\phi_{\rm g})( \Sigma_{\rm g}/\Sigma_{\rm g}^{(0)})^{\phi_{\rm g}}-7}\right)^{1/2}, \tag{26}\] where we assume that \(\nu\ll c_{\rm s}^{2}/\Omega\). For an unperturbed disc (\(\Sigma_{\rm g}/\Sigma_{\rm g}^{(0)}=1\)), the above criterion gives \(\sqrt{2}H\) for \(\phi_{\rm g}=0\), i.e., viscous overstability would be active over the disc vertical scale height. For \(\phi_{\rm g}=1\), we get \(\sqrt{4/11}H\), i.e., our results may be affected by the viscous overstability. For the case of \(\phi_{\rm g}=1\), there is no valid solution for \(1/k\), meaning no viscous overstability occurs. Finally, we mention some constraints on the detectability of vortices formed in our dust-dependent \(\alpha\) model. Large-scale vortices in protoplanetary discs have been already detected in the millimetre wave-length observation by SMA or ALMA radio interferometers (e.g., Regaly et al., 2012, 2017, and references therein). On the other hand, a small-scale vortex in our model typically extends about 3 au assuming \(R_{0}=10\) au scaling. At 100 pc, this subtends 0.03 arcseconds. Since ALMA in C-10 configuration can resolve up to 0.018-0.012 arcseconds2 in band 6 and 7, respectively, these vortices are theoretically detectable. However, if such vortices indeed form in an early protoplanetary disc, the dust within them may rapidly grow into planetesimals/protoplanets on a very short timescale, making direct observations difficult. Footnote 2: See Cycle 9 Proposer’s Guide at [https://almascience.naro.edu](https://almascience.naro.edu) for details. ## 5 Conclusions In this paper, we present the results of numerical experiments that investigate the excitation of Rossby vortices in protoplanetary discs at the outer edge of the dead zone. Canonically, an abrupt transition in viscosity is required for the excitation of Rossby wave instability, with a half-width no more than twice the local pressure scale height (i.e., \(\Delta R_{\rm dze}<2H_{\rm dze}\)). However, the viscosity transition at the outer edge of the dead zone has been shown to be smooth and gradually varying. We conducted hydrodynamic simulations using a parametric-\(\alpha\) model, wherein the MRI efficiency depends on the local concentration of dust as well as gas due to adsorption of charged particles on the grain surface. With such a dust-dependent \(\alpha\) formulation, with the viscosity being a function of the local concentration of dust as well as gas, RWI can be excited in most cases despite a smooth viscosity transition. 
The dust-gas coupled simulations were conducted in the thin-disc limit with adiabatic disc thermodynamics. Four cases of dust-dependent parametric-\(\alpha\) models were investigated (see Table 1 for details). The dust component was assumed to have a fixed Stokes numbers in the range of \(10^{-5}\leq\rm{St}\leq 10^{-1}\). The considered viscosity transitions were smooth, specifically, with a width of \(\Delta R_{\rm dze}=(2+2/3)H_{\rm dze}\) and \(\Delta R_{\rm dze}=4H_{\rm dze}\). We found that RWI is excited in almost all models, resulting in formation of anticyclonic vortices. The excitation of RWI, despite smooth viscosity transition, can be explained by the local steepening of \(\alpha\) due to dust enhancement and a positive feedback cycle between dust accumulation and a reduction in \(\alpha\)-parameter. The non-excitation of RWI in two of the cases can be explained by a weak dust enhancement due to strong coupling of the dust species (St \(\leq 10^{-4}\)) and a positive dependence of \(\alpha\) on the gas density (\(\phi_{\rm g}=1\)). Two distinct outcomes of RWI excitation were identified - formation of a single large-scale vortex or generation of multiple small-scale vortices. Table 2 lists the excitation outcome for the considered combinations of parametric-\(\alpha\) model and Stokes number. In summary, the exact outcome of a simulation is determined by the balance between dust diffusion and its drift towards the local gas pressure maximum. A large-scale vortex develops when the dust is well coupled to the gas (St \(\leq 10^{-3}\)) or when \(\alpha\) is independent of dust concentration (model set A). In such a case, the dust particles cannot maintain gradients required to sustain small-scale vortices. The small-scale multiple vortex scenario occurs only for less coupled dust species, with St \(\geq 10^{-2}\). During their evolution, small-scale vortices may merge, however, formation of a single large-scale vortex is avoided. In a canonical MMSN disc, the gas mass accumulated in a typical large-scale vortex is approximately twice the mass of Jupiter. For St = \(10^{-1}\) and St = \(10^{-2}\) a large-scale vortex can collect about 100 and 10 Earth mass of solid material, respectively. The dust-to-gas density ratio generally remains below unity and typically increases with an increases with the Stokes number. For model A2 with St = \(10^{-2}\), the dust-to-gas density ratio exceeded unity, in which case the streaming instability can be excited. The total midplane density exceeds Roche density only for the case of largest Stokes number, St = \(10^{-1}\), indicating that a direct gravitational collapse is also feasible. With typical assumptions of dust properties, the dust can grow within a large-scale vortex to a fragmentation size of about 6-12 cm. In the case of formation of multiple small-scale vortices in the disc, an individual vortex can collect about 1 to 10 Earth masses of solid material. The dust-to-gas density ratio grows well above unity in general, and can even reach a hundred in certain cases. As a result, the criterion for streaming instability is always satisfied in small-scale vortices. As compared to a large-scale vortex, the gas density is not enhanced by a large margin by a small-scale vortex. As a result, the maximum size that the dust can reach is about 6 cm, half of that found in large-scale vortices. It is shown in Regaly et al. 
(2021) that small-scale vortices developed with a dust-dependent parametric-\(\alpha\) model are subject to relatively fast inward migration. However, in this study we found that the migration of small-scale as well as large-scale vortices halts at the dead zone edge. The trapping of small-scale vortices at the outer dead zone edge is particularly noticeable in models C1 and D1 (see Fig. 5). The formation of vortices via Rossby instability at the smooth outer dead zone edge in protoplanetary discs occurs for a wide range of parameters in the dust-dependent \(\alpha\) models as well as Stokes numbers. The resulting large- as well as small-scale vortices get trapped at the dead zone edge and remain stable over hundreds or thousands of orbits. Thus, the meter size barrier can be overcome within the vortices to form planetesimals and planetary embryos. The tens of Earth masses in cumulative solid material collected by the vortices is sufficient to form planetary systems similar to our own solar system (Weidenschilling, 1977). Thus, we conclude that such vortices formed at the outer dead zone edge of protoplanetary discs act as planetary nurseries, providing ideal environment for dust growth into planetesimals and beyond. ## Acknowledgements We thank the anonymous referee of improving the quality of the manuscript. The project was supported by the Hungarian OTKA Grant No. 119993. RZs acknowledges helpful discussions with V. Frohlich.. K.K. acknowledges support from NSERC of Canada. D. T.-N. acknowledges the financial support of the Lendulet Program of the Hungarian Academy of Sciences, and project No. LP2018-7/2021 and the KKP-137523 'SeismoLab' Elvonal grant of the Hungarian Research, Development and Innovation Office (NKFIH). ## Data Availability The data underlying this article obtained with GFARGO2 code can be shared at a reasonable request to the corresponding author.
2303.14265
Safe and Sample-efficient Reinforcement Learning for Clustered Dynamic Environments
This study proposes a safe and sample-efficient reinforcement learning (RL) framework to address two major challenges in developing applicable RL algorithms: satisfying safety constraints and learning efficiently with limited samples. To guarantee safety in real-world complex environments, we use the safe set algorithm (SSA) to monitor and modify the nominal controls, and evaluate SSA+RL in a clustered dynamic environment which is challenging for existing RL algorithms to solve. However, the SSA+RL framework is usually not sample-efficient, especially in reward-sparse environments, which has not been addressed in previous safe RL works. To improve the learning efficiency, we propose three techniques: (1) avoiding overly conservative behavior by adapting the SSA; (2) encouraging safe exploration using random network distillation with safety constraints; (3) improving policy convergence by treating SSA outputs as expert demonstrations and learning directly from them. The experimental results show that our framework achieves better safety performance compared to other safe RL methods during training and solves the task with substantially fewer episodes. Project website: https://hychen-naza.github.io/projects/Safe_RL/.
Hongyi Chen, Changliu Liu
2023-03-24T20:29:17Z
http://arxiv.org/abs/2303.14265v1
# Safe and Sample-efficient Reinforcement Learning for Clustered Dynamic Environments ###### Abstract This study proposes a safe and sample-efficient reinforcement learning (RL) framework to address two major challenges in developing applicable RL algorithms: satisfying safety constraints and efficiently learning with limited samples. To guarantee safety in real-world complex environments, we use the safe set algorithm (SSA) to monitor and modify the nominal controls, and evaluate SSA+RL in a clustered dynamic environment which is challenging to be solved by existing RL algorithms. However, the SSA+RL framework is usually not sample-efficient especially in reward-sparse environments, which has not been addressed in previous safe RL works. To improve the learning efficiency, we propose three techniques: (1) avoiding behaving overly conservative by adapting the SSA; (2) encouraging safe exploration using random network distillation with safety constraints; (3) improving policy convergence by treating SSA as expert demonstrations and directly learn from that. The experimental results show that our framework can achieve better safety performance compare to other safe RL methods during training and solve the task with substantially fewer episodes. Project website: [https://hychennaza.github.io/projects/Safe_RL/](https://hychennaza.github.io/projects/Safe_RL/). ## I Introduction Recently, reinforcement learning (RL) shows promising results in a series of artificial domains; but it's still challenging to develop applicable RL algorithms for a real system due to nine challenges discussed in [1]. Our paper focuses on the two challenges: satisfying safety constraints and learning from limited samples. Ideally, 0-safety violation should be guaranteed during both training and execution as failures are expensive and dangerous. A comprehensive survey on optimization-based and exploration-based safe RL methods can be found in [2]. Within the optimization-based methods, they redesign the reward function, change the learning objective and add soft constraints to balance the reward and the risk [3, 4, 5, 6, 7]; however, no safety guarantee can be derived from these methods. Within the exploration-based methods, they modify the exploration process to avoid risky situations by incorporating external knowledge [8, 9] and using a risk-directed exploration [10]. The probability-based methods construct shields to avoid violating the safety constraints, but they only meets the safety requirement with some probability and are hard to be generalized to continuous systems [11, 12]. Control barrier function (CBF) method is used to provide hard constraints for RL and achieve 0-safety violation in simple environments like car following and lane changing [13][14]. To handle the intersection scenario in autonomous driving, generalized control barrier function (GCBF) is used to reduce the constraints violation [15]. People evaluate the sample efficiency by measuring the amount of data necessary to achieve a certain performance threshold [1]. But collecting sample data in real world is time-consuming and expensive, and RL agents may converge to local optima when the reward is sparse and never reach the performance threshold. In a word, the sample efficiency challenge makes it hard to deploy RL algorithms quickly in real world systems. 
Exploration methods, like adding parameter space noise (PSN) [16] and using random network distillation (RND) [17] can help to solve sparse-reward problem but it is risky to explore freely in clustered dynamic environment as the systems would fail or break before learning the optimal controller. Leveraging expert demonstration data can accelerate the agent's learning [18], however people usually get suboptimal demonstration as the expert demonstration is hard to access. Recent model-based deep RL approaches show a lot of promise for improving sample efficiency, however, an imperfect dynamics model can degrade the performance of the learning algorithm and lead to suboptimality [19]. In this work, we aim to design a reinforcement learning framework that can learn safely and efficiently even in _clustered dynamic environments_. We use the safe set algorithm (SSA) [20] to ensure safety. SSA has similar structures as CBF, both of which belong to energy function-based safe control [21]. These methods can safe guard any reinforcement learning algorithm, but they can't help to find optimal policies directly, and sometimes make the system behave overly conservative. Thus, we first adapt the projection direction of SSA to generate more efficient control when possible. Also by combining SSA with normal exploration strategies, we can transform these originally unsafe explorations into safe explorations. Moreover, since getting safe expert demonstration is difficult in the real world, we decide to learn from the safe control generated from SSA online to speedup training. The key contributions of this paper are summarized below: * We propose the SSA based safe RL training framework and prove this framework can guarantee safety with high probability even in clustered dynamic environments, except the cases that no safe control exists. * We propose three techniques to improve the learning efficiency: adapting the SSA, exploring under safety constraints and learning from SSA demonstration. The numerical results show that we can solve the task with substantially fewer episodes and interactions. ## II Problem Formulation **Environment Dynamics** The 2D environment contains multiple dynamic obstacles, every obstacle evolves as \(\dot{x}_{E}=f_{E}(x_{E},u_{E})\), where the function \(f_{E}\) represents double integrator dynamics, state \(x_{E}\) involves the position and velocity of the obstacle and control \(u_{E}\) represents its acceleration, which is uniformly distributed on the predefined interval. **Robot Dynamics** Let \(x_{R}\in X\subset R^{n_{x}}\) be the robot state containing positions and velocities in x, y axis; \(u\in U\subset R^{n_{u}}\) be the control input, which is the accelerations in x, y axis. The robot dynamics are defined as: \[\dot{x}_{R}=f(x_{R})+g(x_{R})u=:h(x_{R},u) \tag{1}\] where \(f(x_{R})=\left[0,\mathbb{I}_{2};0,0\right]x_{R}\), \(g(x_{R})=\left[0;\mathbb{I}_{2}\right]\). We assume we have the ground truth form of \(f\) and \(g\). Given the dynamical system above, we can formulate an MDP \((X,U,\delta,r,p)\). The transition function \(p:X\times U\times X\rightarrow[0,1]\) is defined as \(p(x_{t+1}|x_{t},u_{t})=1\) when \(x_{t+1}=x_{t}+h(x_{t},u_{t})\) and \(0\) otherwise. The reward function \(r\) provides positive reward if reaching the goal state \(X^{*}\), negative penalty if collide, and zero otherwise. The discounting factor is set to \(\delta=1\). 
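For concreteness, the double-integrator dynamics of Eq. (1) and the deterministic MDP transition above can be written in a few lines. This is a minimal sketch; the state ordering, the time step `dt`, and the variable names are assumptions (the transition as stated, \(x_{t+1}=x_{t}+h(x_{t},u_{t})\), corresponds to `dt = 1`).

```python
import numpy as np

# State x_R = [px, py, vx, vy], control u = [ax, ay].
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.zeros((2, 2)), np.zeros((2, 2))]])   # f(x) = A x
B = np.vstack([np.zeros((2, 2)), np.eye(2)])           # g(x) = B

def h(x, u):
    return A @ x + B @ u          # Eq. (1)

def step(x, u, dt=1.0):
    """Deterministic transition x_{t+1} = x_t + h(x_t, u_t) * dt."""
    return x + h(x, u) * dt
```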
**Safety Specification** The safety specification requires the robot to stay in a closed subset of state space, called the safe set \(X_{S}\). The safe set can be presented by a zero-sublevel set of a continuous and piecewise smooth function \(\phi_{0}:R^{n_{x}}\rightarrow R\), i.e. \(X_{S}=\{x|\phi_{0}(x)\leq 0\}\). In our problem, \(\phi_{0}\) is defined as \(d_{min}^{2}-d^{2}\), where \(d_{min}\) is the user defined safety distance and \(d\) is the distance from robot to the closest obstacle. For safety metric, we evaluate the percentage of safety violations. **Sample Efficiency Specification** To evaluate the data efficiency of a particular model, we measure the amount of data necessary to achieve a certain performance threshold: \[J^{eff}=\min|D_{i}|\ s.t.R(\texttt{Train}(D_{i}))\geq R_{min}. \tag{2}\] where \(D_{i}\) is the data used for training the RL policy and \(R_{min}\) is the desired performance threshold [1]. **Problem** The core problem of this paper is to achieve safe and sample-efficient RL learning in clustered dynamic environment. The learned RL policy will map the state \((x_{R},x_{E}^{c})\) to control \(u\), where \(x_{E}^{c}\) means the closest obstacle to the robot. For safety, we need to monitor and modify the nominal control \(u\) to keep the system inside the safe set \(X_{S}\) and achieve least safety-violations. For sample efficiency, we need to ensure RL agent wouldn't converge prematurely to a local optimum and learn the optimal controller with fewer training data. ## III Review of the Safe Set Algorithm The SSA works as a safety monitor [20], which is suitable to safe guard the RL training. The key of SSA is to define a valid safety index \(\phi\) such that 1) there always exists a feasible control input in \(U\) that satisfies \(\dot{\phi}\leq-\eta\phi\) when \(\phi\geq 0\) and 2) any control sequences that satisfy \(\dot{\phi}\leq-\eta\phi\) when \(\phi\geq 0\) ensures forward invariance of the safe set \(X_{S}\) and finite time convergence to this set. The parameter \(\eta\) is a positive constant that adjusts the convergence rate. Following the safety index design rule [22] for collision avoidance with single obstacle, we define the safety index \(\phi\) as follows: \[\phi=d_{min}^{2}-d^{2}-k\cdot\dot{d}. \tag{3}\] where \(\dot{d}\) is the relative velocity of robot to obstacle and \(k\) is a constant factor. We add higher order term of \(\phi_{0}\) to the base \(\phi_{0}\) to ensure that relative degree one from \(\phi\) to the control input. As proved in [20][22], this safety index \(\phi\) will ensure forward invariance of the set \(\phi\leq 0\cap\phi_{0}\leq 0\) as well as global attractiveness to that set. With the valid safety index \(\phi\), we project the reference control \(u^{r}\) to the set of safe controls that satisfy \(\dot{\phi}\leq-\eta\,\phi\) when \(\phi\geq 0\), and \(\dot{\phi}\) is expressed as \[\dot{\phi}=\frac{\partial\phi}{\partial x}\,f+\frac{\partial\phi}{\partial x} \,g\ u=L_{f}\phi+L_{g}\phi\ u. \tag{4}\] We compute \(\phi_{i}\) for every obstacle and add safety constraint whenever \(\phi_{i}\) is positive. Also we have velocity and acceleration limits. 
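For the double-integrator robot, the safety index of Eq. (3) and the two terms of Eq. (4) can be evaluated in closed form. The sketch below treats the (unknown) obstacle acceleration as zero when forming \(L_{f}\phi\), which is an assumption consistent with the problem setup but not stated explicitly above; variable names are illustrative.

```python
import numpy as np

def safety_terms(p_r, v_r, p_o, v_o, d_min, k):
    """Return phi, L_f phi, and L_g phi for one obstacle.

    phi = d_min^2 - d^2 - k * d_dot, with d the robot-obstacle distance.
    The obstacle acceleration is unknown and treated as zero (assumption).
    """
    dp = p_r - p_o                      # relative position
    dv = v_r - v_o                      # relative velocity
    d = np.linalg.norm(dp)
    d_dot = dp @ dv / d
    phi = d_min ** 2 - d ** 2 - k * d_dot
    # With zero accelerations, d_ddot = (|dv|^2 - d_dot^2) / d.
    Lf_phi = -2.0 * d * d_dot - k * (dv @ dv - d_dot ** 2) / d
    Lg_phi = -k * dp / d                # row vector multiplying the control u
    return phi, Lf_phi, Lg_phi

# The SSA constraint for this obstacle (added only when phi >= 0) reads
#   Lf_phi + Lg_phi @ u <= -eta * phi.
```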
With all constraints, SSA will solve the following optimization problem through quadratic programming (QP): \[\begin{split}\min_{u\in U}||u-u^{r}||^{2}=\min_{u\in U}u^{\text{T} }\begin{bmatrix}1&0\\ 0&1\end{bmatrix}u-2u^{\text{T}}\begin{bmatrix}1&0\\ 0&1\end{bmatrix}u^{r}\\ s.t.L_{f}\phi_{i}+L_{g}\phi_{i}\ u\leq-\eta\,\phi_{i},i=1,2...m.\end{split} \tag{5}\] However, in clustered dynamic environment, there are situations that don't even exist safe control to guarantee safety as we will discuss later (note \(\phi\) in (3) only guarantees safety with single dynamic obstacle not multiple dynamic obstacles). Besides, low sample efficiency is a problem in vanilla SSA: the agent may require long training period or even fail to learn optimal controller when the task is complex or the environment is reward-sparse. To make it work, we need to improve the sample efficiency with the following three strategies. ## IV Methodology ### _Adapting the Safe Set Algorithm_ Vanilla SSA would output safe control that drives the system to the currently safest direction, which may not be an efficient direction in the long run, see fig. 0(a). Besides, in multi-obstacle environment, it's not safe to directly add constraints for every dangerous obstacle whose \(\phi\) is positive. In detail, since vanilla SSA only considers these dangerous obstacles, it may push the robot to the direction that is safe now but risky in the next step if there are unconsidered obstacles (\(\phi\) values are negative) in that direction, see fig. 0(b). Thus we decide to consider the current positions, estimate future positions of all approaching obstacles, and modify the direction of safe control by tuning parameters in the QP problem (6). After adapting the projection direction, we expect to generate control signal that is safe to all approaching obstacles, even these \(\phi\) values are negative, and efficient for the longer time horizon. \[\min_{u\in U}||u-u^{r}||_{Q}=\min_{u\in U}u^{\text{T}}\begin{bmatrix}\alpha& \sigma\\ \sigma&\beta\end{bmatrix}u-2u^{\text{T}}\begin{bmatrix}\alpha&\sigma\\ \sigma&\beta\end{bmatrix}u^{r}. \tag{6}\] In detail, we define the approaching obstacles as those whose distances to the robot are smaller than a threshold value. With the current position of the robot \((x_{0},y_{0})\) and the current positions of approaching obstacles \((x_{i},y_{i}),i=1,2,...,n\), we first predict the next \(k\) steps positions of each obstacle \((x_{i}^{j},y_{i}^{j}),j=1,2,...,k\) using constant velocity model. Then we solve the line \(l_{\theta}:-\sin(\theta)x+\cos(\theta)y=0\) that maximizes the distance to all approaching obstacles \[\max_{\theta}J(\theta),\ \ \ J(\theta):=\sum_{j=1}^{k}\sum_{i=1}^{n}d_{i}^{j}. \tag{7}\] where \(d_{i}^{j}\) is the distance of shifted obstacle \((x_{i}^{j}-x_{0},y_{i}^{j}-y_{0})\) to the line \(l_{\theta}\). In this setting, we regard the robot system \((x_{0},y_{0})\) as the origin and calculate \(\theta_{\max}\) that has largest overall distance. With \(\theta_{\max}\), we can get eigenvector \(\mathbf{x}_{1}=(-\sin(\theta_{\max}),\cos(\theta_{\max}))\), which is the safest direction for all approaching obstacles, and \(J(\theta_{\max})\) is its corresponding eigenvalue \(\lambda_{1}\). The larger \(\lambda_{1}\) is, the more we want to project the safe control to \(\mathbf{x}_{1}\) direction. The second eigenvector \(\mathbf{x}_{2}\) is perpendicular to \(\mathbf{x}_{1}\) and has smallest overall distance \(\lambda_{2}\). 
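A minimal sketch of this direction search is given below, assuming a constant-velocity obstacle prediction, a uniform grid over \(\theta\), and illustrative variable names; \(\lambda_{2}\) is taken here as the summed distance for the perpendicular line, matching the "smallest overall distance" described above. The returned eigenpairs are the ingredients of the QP weight matrix assembled next.

```python
import numpy as np

def safest_direction(p_robot, obstacles, k_steps=5, dt=1.0, n_theta=180):
    """Grid-search theta_max of Eq. (7) and return the eigenpairs (x1, lam1), (x2, lam2).

    obstacles: list of (position, velocity) pairs of the approaching obstacles.
    k_steps, dt, n_theta: prediction horizon and grid resolution (assumptions).
    """
    # Predicted obstacle positions in the robot-centred frame (constant-velocity model).
    pts = np.array([(p + (j + 1) * dt * v) - p_robot
                    for p, v in obstacles for j in range(k_steps)])
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # J(theta): summed distances of all predicted points to the line l_theta.
    J = np.array([np.abs(-np.sin(t) * pts[:, 0] + np.cos(t) * pts[:, 1]).sum()
                  for t in thetas])
    t_max = thetas[np.argmax(J)]
    x1 = np.array([-np.sin(t_max), np.cos(t_max)])   # safest direction, eigenvalue lam1
    x2 = np.array([np.cos(t_max), np.sin(t_max)])    # perpendicular direction
    lam1 = J.max()
    # Summed distance for the perpendicular line (the "smallest overall distance").
    lam2 = np.abs(np.cos(t_max) * pts[:, 0] + np.sin(t_max) * pts[:, 1]).sum()
    return x1, lam1, x2, lam2   # used to assemble Q in Eq. (8)
```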
Then, we can build the QP parameter matrix, which is represented as the ellipse in fig. 0(a) and fig. 0(b), as follows: \[[\mathbf{x}_{1},\mathbf{x}_{2}]\begin{bmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{bmatrix}[\mathbf{x}_{1},\mathbf{x}_{2}]^{-1}=\begin{bmatrix}\alpha& \sigma\\ \sigma&\beta\end{bmatrix}=:Q. \tag{8}\] ### _Exploration under Safety Constraints_ In real world reward-sparse and clustered dynamic environments, it's challenging to find a sequence of actions that can lead to positive reward and generalize to related situations, thus RL agents need long training time. Traditional exploration techniques used to address this problem are not suitable for safety-critical tasks, as they may explore unsafe controls. In our framework, with the help of SSA, we add safety constraints to the following two exploration strategies to improve the suboptimal policy safely. **Parameter Space Noise (PSN)**[16] At the start of each episode, we create a copy of RL policy and add noise directly to the policy's parameters, which can lead to consistent exploration and a richer set of behaviors. Suppose we parameterize the policy \(\pi_{\theta}\) as a neural network with weights \(\theta\). Then the exploration policy is \(\pi_{\widetilde{\theta}}\), where \[\widetilde{\theta}=\theta+N(0,\sigma^{2}I) \tag{9}\] **Random Network Distillation (RND)**[17] This exploration strategy will modify the reward function to encourage the agent to visit novel states. In detail, we create two neural networks that take the state \(s=(x_{R},x_{E}^{c})\) as input and train one of the networks to predict the output of the other. The prediction error of two neural networks is defined as novelty and will be added to reward: \[\widetilde{r}(s,a)=r(s,a)+||f_{\theta_{1}}(s)-f_{\theta_{2}}(s)||_{2}^{2} \tag{10}\] ### _Learning from SSA Safe Demonstration_ Another technique people use to improve sample efficiency is learning from demonstration (LfD) instead of learning from scratch. Different from the traditional LfD or safe controller guided training in [13], we don't need to prepare demonstration data and pre-train the RL agent or approximate all prior safe controllers. Instead, SSA would generate safe controls during the interactions with the world and these safe controls are regarded as expert demonstration. To be more specific, in default SSA+RL framework fig. 1(a), SSA is part of the environment which means the RL agent's control signal will be modified when \(\phi\geq 0\) and the agent wouldn't realize that. While in safe learning, we separate the SSA from the environment and make it an independent module, see fig. 1(b). In this way, the agent could know the world is taking \(a_{t}\) or \(a_{t}^{ssa}\), then store the self-generated data and demonstration data into two buffers. When updating the policy, we simply use a fixed ratio between self-generated data and SSA demonstrations to mix the training samples. ## V Experiments **Environment and Evaluation Method** We evaluate our proposed framework in a clustered dynamic environment Fig. 1: Comparison between vanilla SSA and adapted SSA. The darker color presents the current positions of the robot and the obstacles. The lighter color presents the future positions of the robot and the obstacles. Fig. 2: Block diagrams of the default learning and proposed safe learning. In default learning, the environment (the red box) contains the world and SSA module, while in safe learning, SSA is separated from the environment. with sparse reward. 
Our goal is to move the vehicle, starting from the bottom, to the green area on the top while avoiding 50 moving obstacles in between, which is challenging to be solved by the state-of-the art RL algorithms alone. A success case is shown in fig. 2(a). We assume the vehicle can sense the correct positions and velocities of obstacles, but doesn't know their accelerations. The obstacles will be randomly initialized at each episode. This environment tries to simulate the real world scenarios like parking lots and busy streets that have multiple dynamic objects moving around. We adopt the Twin Delayed Deep Deterministic Policy Gradients (TD3) [23] as our baseline RL model. For experiments that evaluate safety, we train the models for 50 episodes as we notice it's long enough for the SSA-based models to converge. For experiments that evaluate sample efficiency, we train the models until their performances reach a threshold reward \(R_{min}\). The required \(R_{min}\) is to achieve at least 1900 on average for the past 20 episodes. For models that fail to meet \(R_{min}\) within 1000 episodes, we set its result as 1000 episodes. We repeat each training with different seeds for 10 times and calculate the average performance. The code is open sourced at [https://github.com/hychenanza/SSA-RL](https://github.com/hychenanza/SSA-RL). **Hypothesis** we evaluate the proposed framework by verifying the following four hypotheses: * H1: The SSA+RL framework can greatly reduce safety violation comparing to the TD3 baseline model in clustered dynamic environment. * H2: The adapted SSA can achieve better efficiency and higher task success rate than vanilla SSA. * H3: Traditional exploration strategies are not safe and hence not data efficient, but exploration under safety constraints could improve the sample efficiency. * H4: Direct learning from the safe controls demonstrated by SSA can best speedup training and maintain safety compared to pure reward-driven approaches. * H5: Compared to other safe RL methods, our SSA+RL framework can achieve best safety performance. ### _Results_ **H1:** Through our experiments, TD3 model gets 31.7% collision rate and 66.3% failure rate on average, see table I. During training, the policy gradually converges to a local optimum after repeated collisions, which is staying at the bottom where obstacles can't get to. On the other hand, SSA helps to reduce the collision rate from \(31.7\%\) to \(0.8\%\). We find SSA can't achieve 0-safety violation discussed in other papers using static testing cases like avoiding fixed obstacles or static hazard areas; in which cases, these always exist a safe control to meet safety constraints [13][22]. But in our environment, the obstacles are dynamic and the vehicle can be surrounded by multiple obstacles. In fig. 2(c), there were four obstacles driving from three different directions toward the vehicle, and SSA cannot find a safe control meet all safety constraints and collision happens (as shown in the control space plot in fig. 2(b)). Besides, the success rate increased from \(2\%\) to \(50.2\%\) as SSA prevents the RL agent from collisions and increases the possibility of reaching the goal. But SSA+RL may still stuck in local optimums as its failure rate is 49%. In fig. 2(b), the vehicle tries to navigate towards the goal, but it is pushed back by obstacles and stays at the bottom to avoid collision. **H2:** The improvement of SSA adaptation is less significant compared to other techniques we used due to two main reasons. 
Firstly, SSA adaption only works when SSA is triggered, which wouldn't happen at every step. Secondly, in many cases, vanilla SSA and adapted SSA will generate very close safe controls as both of them need to satisfy the same hard safety constraint. As shown in table II, the averaged interaction number of adapted SSA+RL model is \(2.3\times 10^{5}\), which is very close to the interaction number \(2.4\times 10^{5}\) of SSA+RL model, because the adapted SSA+RL model may also get stuck in the local optimum. Nevertheless, we find the failure rate drops from \(49\%\) to \(30\%\) and the success rate increases from \(50.2\%\) to \(68.6\%\) in table I. Because adapted SSA generates fewer detours when meeting dangerous obstacles, and the projection guidance leads the vehicle to a more efficient direction. The effects of adapted safe controls will accumulate and improve the overall success rate and reward. In fig. 2(d), adapted SSA and vanilla SSA are tested in the same environment. At the beginning, their paths are identical, but diverge gradually. When meeting dangerous obstacle at the black star point, which is the most critical divergence point, the adapted SSA outputs control in upwards direction while the vanilla SSA outputs control in downwards direction. At last, the adapted SSA helps the vehicle reach the goal while the vanilla SSA pushes the vehicle back to the bottom. **H3:** This hypothesis can be proved via results in table I. The exploration strategies PSN and RND only improve the success rate slightly from 2% to 3.6% and 5.2% respectively. But the failure rate of PSN-enabled model increases \begin{table} \begin{tabular}{l c c c} \hline \hline & Model & Episodes & Interaction \\ \hline Baseline & RL & 1000 & \(10^{6}\) \\ Models & PSN+RL & 1000 & \(10^{6}\) \\ & RND+RL & 1000 & \(10^{6}\) \\ \hline \multirow{4}{*}{Proposed & SSA+RL & 254 & 2.4\(\times 10^{5}\) \\ & Adapted SSA+RL & 245 & 2.3\(\times 10^{5}\) \\ & PSN+SSA+RL & 251 & 2.4\(\times 10^{5}\) \\ Models & RND+SSA+RL & **23** & **6183** \\ & LfD+SSA+RL & 26 & 8464 \\ & Penalty+SSA+RL & 331 & 3.2\(\times 10^{5}\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Efficiency comparison between proposed models. \begin{table} \begin{tabular}{l c c c c} \hline \hline & Model & Success & Failure & Collision & Reward \\ \hline Baseline & RL & 2\% & 66.3\% & 31.7\% & -118.6 \\ Models & PSN+RL & 3.6\% & 76.8\% & 19.6\% & -26 \\ & RND+RL & 5.2\% & 49.5\% & 47.8\% & -133.8 \\ \hline \multirow{4}{*}{Proposed & SSA+RL & 50.2\% & 49\% & **0.8\%** & 1000 \\ & Adapted SSA+RL & 68.6\% & 30\% & 1.4\% & 1365 \\ \cline{1-1} & PSN+SSA+RL & 40.4\% & 58.3\% & 1.3\% & 800 \\ \cline{1-1} & RND+SSA+RL & 71.4\% & 27.2\% & 1.4\% & 1421 \\ \cline{1-1} & LfD+SSA+RL & **89.8\%** & **9.4\%** & **0.8\%** & **1792** \\ \cline{1-1} & Penalty+SSA+RL & 43.8\% & 55.2\% & 1.0\% & 871 \\ \hline \hline \end{tabular} \end{table} TABLE I: Safety comparison between proposed models. to 76.8% and the collision rate of RND-enabled model increases 47.8%. That is mainly because these exploration strategies have large portion of harmful exploration, i.e., unsafe controls that don't meet the safety constraint \(\dot{\phi}\leq-\eta\,\phi\). The RND+SSA+RL model can reduce the failure rate to 27.2% and increase the success rate to 71.4%, which mitigates the problem of being overly conservative in clustered environment. Moreover, in table II, this model uses substantially fewer episodes and 38\(\times\) fewer interactions to meet \(R_{min}\) compared to the SSA+RL model. 
By visiting new states, the agent has a better chance of avoiding the local optimum and converges to the optimal policy faster. However, PSN+SSA+RL performs worse than SSA+RL, which may be because PSN is unable to explore sufficiently in our challenging environment. **H4:** LfD+SSA+RL takes only 26 episodes and 8464 steps to meet \(R_{min}\), using 28\(\times\) fewer steps than the SSA+RL model in table II. From the training plot in fig. 3(d), this model learns the optimal controller in all experiments, achieving faster convergence and greater training stability than the other models. Moreover, LfD+SSA+RL achieves the highest success rate, 89.8%, and the lowest collision rate, 0.8%, in table I, which shows that learning from SSA demonstrations best maintains safety. This is due to two reasons. Firstly, in the default SSA+RL framework there is a mismatch between the control that the RL agent generates and the real control that the environment executes, because of the control modification; this introduces errors into the training data and lowers the training efficiency. Secondly, with SSA demonstrations the RL policy can learn how to take safe controls directly, instead of learning from scratch or from a reward penalty. To validate the second point, we give the agent a negative reward penalty \(-||a_{t}^{ssa}-a_{t}||_{2}^{2}\) to negatively reinforce unsafe controls, following the idea in [24]. The results show that the Penalty+SSA+RL model has a lower success rate, 43.8%, a slightly higher collision rate, 1.0%, and takes more interaction steps than the SSA+RL model, as shown in table I and table II. This is because learning safe control from the penalty alone is too hard for the agent, which reduces the learning efficiency. **H5:** We compare our results with recent safe RL methods from three categories: constrained policy optimization (CPO) [6], probabilistic shields+RL [12], and CBF+RL [13]. CPO converges to constraint-satisfying policies in the end, but it is not consistently constraint-satisfying throughout training. Probabilistic shields are applied to discrete MDPs, where the safety violation probability can be calculated for all possible actions and states within a finite horizon; checking all possible actions is hard in continuous systems, so the method will inevitably be suboptimal. The CBF used in [13] is only tested in simple environments with a linear barrier function \(h(x)=p^{T}x+q\). Moreover, the CBF approach in general has more limitations than the SSA approach: 1) it enforces the control constraint everywhere, which is not necessary; 2) our SSA method uses a design rule [20][22] to nonlinearly synthesize \(\phi\) from \(\phi_{0}\), which ensures that there always exists a feasible control input satisfying the safety constraint, while the counterpart in CBF is the exponential CBF, which is still a linear function of \(\phi_{0}\). Nonetheless, the design rule can also be applied to CBF, and we therefore add a comparison of CBF+RL using the same \(\phi\). In table III, CPO gets a high collision rate since it still relies on trial and error to enforce constraints. Probabilistic Shields+RL can only guarantee safety with some probability and is too restrictive on the agent. The barrier function in CBF+RL is too simple to correctly evaluate safety in higher-dimensional spaces. The collision rate of CBF using the safety index \(\phi\) is 1.0%, higher than that of SSA+RL, because CBF adds constraints for all nearby obstacles, even safe ones, which restricts the control space and may fail to find a safe control for dangerous obstacles.
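To make the comparison under H4 concrete, the following sketch contrasts the two ways of using the executed safe control \(a_{t}^{ssa}\): storing it in the replay buffer (LfD+SSA+RL) versus keeping the agent's own action and adding the penalty \(-||a_{t}^{ssa}-a_{t}||_{2}^{2}\) (Penalty+SSA+RL). The names `env`, `agent`, `buffer`, and `ssa_filter` are stand-ins for a gym-style environment, a TD3 policy, a replay buffer, and the safety monitor; none of them come from the released code.

```python
# A minimal sketch (ours, not the released code) of one data-collection step,
# contrasting learning-from-demonstration with reward-penalty shaping.
import numpy as np

def collect_step(env, agent, buffer, ssa_filter, obs, mode="lfd", penalty_weight=1.0):
    a_rl = agent.act(obs)                        # nominal TD3 action
    a_ssa, triggered = ssa_filter(obs, a_rl)     # safe control actually executed
    next_obs, reward, done, _ = env.step(a_ssa)

    if mode == "lfd" and triggered:
        # LfD+SSA+RL: store the safe action, so actor and critic are trained on
        # the control the environment really took (no action mismatch).
        buffer.add(obs, a_ssa, reward, next_obs, done)
    else:
        if mode == "penalty" and triggered:
            # Penalty+SSA+RL: keep the agent's action and negatively reinforce
            # the unsafe proposal instead.
            reward -= penalty_weight * float(np.sum((a_ssa - a_rl) ** 2))
        buffer.add(obs, a_rl, reward, next_obs, done)
    return next_obs, done
```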
## VI Conclusion In this work, we propose to use SSA to improve safety during RL policy training, and we introduce three strategies, namely SSA adaptation, exploration under safety constraints, and learning from SSA demonstrations, to improve learning efficiency. We validate the proposed framework in a clustered dynamic environment. The results show that SSA greatly reduces safety violations, except in situations where no safe control exists, and achieves statistically better safety performance than other safe RL methods. Combined with the three strategies, the agent can solve the task with substantially fewer episodes and interactions.
2301.01777
Intrinsically-multilayer moiré heterostructures
We introduce trilayer and multilayer moir\'e heterostructures that cannot be viewed from the ``moir\'e-of-moir\'e" perspective of helically-twisted trilayer graphene. These ``intrinsically trilayer" moir\'e systems feature periodic modulation of a local quasicrystalline structure. They open the door to realizing moir\'e heterostructures with vastly more material constituents because they do not constrain the lattice constants of the layers. In this manuscript, we define intrinsically multilayer patterns, provide a recipe for their construction, derive their local configuration space, and connect the visual patterns to physical observables in material systems.
Aaron Dunbrack, Jennifer Cano
2023-01-04T19:00:00Z
http://arxiv.org/abs/2301.01777v2
# Intrinsically-multilayer moire heterostructures ###### Abstract We introduce trilayer and multilayer moire heterostructures that cannot be viewed from the "moire-of-moire" perspective of helically-twisted trilayer graphene. These "intrinsically trilayer" moire systems feature periodic modulation of a local quasicrystalline structure. They open the door to realizing moire heterostructures with vastly more material constituents because they do not constrain the lattice constants of the layers. In this manuscript, we define intrinsically multilayer patterns, provide a recipe for their construction, derive their local configuration space, and connect the visual patterns to physical observables in material systems. ## I Introduction The observation of superconductivity and correlated insulators in twisted bilayer graphene [1; 2] launched the study of "moire materials," where two-dimensional materials with the same [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] or similar [36; 37; 38; 39; 40; 41; 42; 43; 44; 45] lattice constants are stacked at a small relative twist angle. This paradigm is naturally extended to trilayer stacking and beyond, both with some layers aligned [46; 47; 48; 49; 50; 51; 52] and with multiple twist angles [53; 54; 55; 56; 57; 58; 59]. Recently it has also been extended to stacking at angles nearby a large commensurate twist angle [60; 61]. In all cases, the moire pattern is obtained from layers with either the same or similar lattice constant (or a commensurate supercell). In this paper, we lift that restriction. We introduce moire patterns made from stacking more than two layers in which no two layers separately display a moire pattern. We call these patterns "intrinsically trilayer moire" (or more generally, "intrinsically \(N\)-layer moire") because, unlike twisted trilayer graphene, the moire pattern disappears if any one layer is removed. As we will explain, intrinsically trilayer moire patterns cannot be viewed from the "moire of moire" perspective often used to describe twisted trilayer graphene [53]. Intrinsically \(N\)-layer moire patterns have an important advantage over bilayer moire patterns because they do not impose a constraint on lattice constants. This vastly increases the space of possible material combinations. Specifically, moire patterns in bilayer systems require the constituent materials to have nearly the same lattice constant or to be nearly commensurate. In contrast, intrinsically \(N\)-layer moire patterns can be constructed from virtually _arbitrary_ combinations of materials. In the present work, we focus on the crystal structure of intrinsically \(N\)-layer moire heterostructures, postponing a study of electronic structure to future work. We begin by reviewing the origin of moire patterns. In Sec. II, we provide an intuitive picture of how moire patterns arise in real space. We explain the construction for bilayers and then offer a naive generalization to multilayers. In Sec. III, we argue that reciprocal space provides a more natural and concise characterization, from which we derive both bilayer and \(N\)-layer moire patterns. We then focus on multilayer heterostructures. In Sec. 
IV, we return to real space to resolve an apparent contradiction: the momentum-space perspective implies that periodic moire patterns of more than two layers exist, but the naive generalization of bilayer configuration space [62; 63] fails to indicate these patterns, in part because the local structure is generally quasicrystalline rather than crystalline. Consequently, we develop a more nuanced notion of configuration space, in which some apparent degrees of freedom disappear on moire wavelengths. We discuss physical properties that are a function of this configuration space; lattice relaxation is one example. Finally, in Sec. V, we discuss experimental probes and propose physical realizations of intrinisically \(N\)-layer moire patterns. Throughout, we assume a three-, four-, or six-fold rotation symmetry shared between all layers of the moire \begin{table} \begin{tabular}{|c|c|c|} \hline Types of & Small twist & Large twist \\ moire & & \\ patterns & & \\ \hline Two layers & & \\ & Twisted bilayer & \\ & graphene & Near-commensurate \\ \hline Three or & & \\ & & \\ & Twisted trilayer & \\ & graphene & Imtiinsically trilayer \\ \hline \end{tabular} \end{table} Table 1: Summary of moiré heterostructures: the “intrinsically trilayer” moiré patterns we introduce occur at large twist angle and with three or more layers. heterostructure. In the absence of this symmetry, the generic moire pattern will be stripes rather than a 2D pattern. ## II Configuration space for bilayers: moire patterns in real space Moire patterns are intuitively understood in real space as a slow modulation of the local lattice structure. The set of all possible local environments is known as configuration space [62; 63]. The configuration space approach extends beyond linear transformations of perfectly rigid crystals to include lattice relaxation effects. However, the approach becomes subtle for heterostructures of multiple layers or different lattice constants. In this section, we review configuration space in the simplest case of bilayers with near-identical lattices. We then extend the formalism to bilayer systems perturbed from a commensurate stacking. Finally, we offer a "naive configuration space" for trilayer systems, and briefly discuss how it leads to the complex patterns observed in twisted trilayer graphene. (Later, in Sec. IV, we will provide a more complete accounting of configuration space in systems with more than two layers and explain the breakdown of the naive configuration space.) ### Two square lattices Consider two stacked periodic layers. There are two cases to consider: when the two layers share a common (larger) period, and when they do not. If they do share a common period, we call the structures commensurate. If they do not, we call them incommensurate. In Fig. 1, we illustrate a small commensurate pattern formed by two square lattices at a relative twist angle of approximately \(6.7^{\circ}\) about a square corner. This aligns the square corners of the unit cell (8,9) of one layer with (9,8) of the other, forming the commensurate superlattice outlined in red. However, in the center of each red supercell is a location that looks very similar to the corners, where the unit cells are also aligned at the center of the square cells rather than at a vertex. This smaller grid of locations where the square-centers are aligned defines the moire lattice, outlined in blue. 
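As a quick check of the commensurate angle quoted above (our own illustration, not part of the manuscript), the twist that places the (8,9) lattice point of one square layer on the (9,8) point of the other is simply the angle between those two integer vectors:

```python
# Angle between the (8,9) and (9,8) lattice vectors of a square lattice,
# reproducing the ~6.7329 degree commensurate twist of Fig. 1.
import numpy as np

v1 = np.array([8.0, 9.0])
v2 = np.array([9.0, 8.0])
theta = np.degrees(np.arctan2(v1[1], v1[0]) - np.arctan2(v2[1], v2[0]))
print(f"commensurate twist angle = {theta:.4f} degrees")   # ~6.7329
```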
Thus, the visual moire cell, which enjoys an approximate translation symmetry, is smaller than the commensurate unit cell, which exhibits an exact translation symmetry. In general, the visual pattern will either be the same size or smaller than the commensurate cell (although for two identical square lattices, the moire cell is always smaller by at least a factor of \(\sqrt{2}\), regardless of twist angle. The commensurate cell size is highly sensitive to angle and exists only on a dense subset of angles. Computing the size of a commensurate cell as a function of twist angle is analogous to determining the size of the minimal denominator of a fraction as a function of the value of that fraction, as explained in Supplement 1. The moire cell, however, varies smoothly with twist angle for small twist angles. At sufficiently large twist angles, the moire cell becomes smaller than a unit cell, which indicates that the moire pattern ceases to exist and no visual pattern arises. This example shows how a moire pattern arises from the two layers being stacked at different "local relative translations" at different positions, i.e., in the brighter regions, the lattices are stacked atom-on-atom, while in the darker regions, the lattices are stacked atom-on-void. The moire lattice is defined by the collection of points where the two layers align in either configuration. ### Local configuration space: two identical layers The space of relative translations of the aligned layers defines the local configuration space. For instance, TBLG exhibits regions of AA and AB stacking, as well as intermediate regions, as illustrated in Fig. 2. For two identical layers, the local configuration space is defined with respect to relative translations of the two untwisted layers, as we will now describe. Although the idea is intuitive in this case, developing the mathematical infrastructure carefully here will elucidate the more complicated situations we consider later. #### ii.2.1 Configuration space as differences of relative coordinates In the simplest setup where the two untwisted layers have identical lattice vectors, we define the local configuration \(C(x)\) in terms of the relative coordinates \(x_{i}\) of each layer. The relative coordinate \(x_{i}(x)\) is a two-component vector that specifies where the position \(x\) resides in the Figure 1: A moiré lattice of two square layers twisted at \(6.7329^{\circ}\). Commensurate lattice in red, moiré lattice in blue. unit cell of layer \(i\). Thus, \(x_{i}\) is determined by the matrix \(A_{i}\), whose columns are the (twisted) lattice vectors of layer \(i\), as \[x_{i}(x)=A_{i}^{-1}x\mod\mathbb{I} \tag{1}\] where "mod \(\mathbb{I}\)" means "modulo the columns of \(\mathbb{I}\)" (i.e., \(\mod\left\{(1,0),(0,1)\right\}\)). The local configuration is then defined as the difference between the two relative coordinates \[C(x) =x_{2}(x)-x_{1}(x)\mod\mathbb{I} \tag{2}\] \[=(A_{2}^{-1}-A_{1}^{-1})x\mod\mathbb{I} \tag{3}\] While the functions \(x_{i}\) vary on the scale of the original lattice, for a small twist or lattice mismatch, \(C(x)\) varies much more slowly, and the period of \(C(x)\) defines the moire lattice. Therefore, the moire lattice vectors are given by the columns of the matrix \[A_{M}=(A_{2}^{-1}-A_{1}^{-1})^{-1} \tag{4}\] in the case where the inverse exists. If the inverse does not exist, then there is not a 2D moire pattern. 
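A short numerical check (ours) of Eq. (4) may be helpful: for two identical square lattices with lattice constant \(a\) twisted by \(\pm\theta/2\), the moire lattice vectors computed from \(A_M=(A_2^{-1}-A_1^{-1})^{-1}\) should be rotated by \(90^{\circ}\) and scaled by \(1/(2\sin(\theta/2))\), the closed form derived just below in Eq. (5).

```python
# Verify Eq. (4) against the closed form of Eq. (5) for a small-angle square bilayer.
import numpy as np

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

a, theta = 1.0, 6.7329                       # lattice constant and twist angle (degrees)
A = a * np.eye(2)                            # columns = untwisted lattice vectors
A1, A2 = rot(-theta / 2) @ A, rot(theta / 2) @ A
A_M = np.linalg.inv(np.linalg.inv(A2) - np.linalg.inv(A1))

print(np.linalg.norm(A_M[:, 0]))                    # moire lattice constant from Eq. (4)
print(a / (2 * np.sin(np.radians(theta) / 2)))      # Eq. (5): same value, ~8.51
```

The value ~8.51 is also consistent with Fig. 1: it is the commensurate cell side \(\sqrt{145}\approx 12.04\) divided by \(\sqrt{2}\), matching the statement that the moire cell of two identical square lattices is smaller than the commensurate cell by at least that factor.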
In the case where the two layers are identical and twisted by a relative angle \(\theta\), one can simplify further by writing \(A_{1,2}=R(\pm\theta/2)A\), where \(R(\theta)\) is the rotation matrix. The moire lattice vectors then simplify to \[A_{M}=\left[R(\theta/2)-R(-\theta/2)\right]^{-1}A=\frac{1}{2\sin(\theta/2)}R \left(\frac{\pi}{2}\right)A. \tag{5}\] In other words, the moire lattice vectors are rotated by \(\pi/2\) compared to the original lattice vectors \(A\) and scaled up by a factor of \(1/(2\sin(\theta/2))\). The same formalism applies to aligned layers with a small difference in their lattice constants. For example, if \(A_{2}=(1+\delta)A_{1}\), then Eq. (4) can be simplified without any matrix algebra to \(A_{M}=\frac{1+\delta}{\delta}A_{1}\) (neglecting the overall sign). Generalizing to the case of two layers with a small lattice mismatch arranged with a slight twist angle yields Eq. (1) in Ref. [64]. Eq. (4) in this paper also allows for anisotropic lattice mismatch, as might be induced by a strain. #### ii.1.2 Configuration space as a quotient of translation groups More abstractly, configuration space is equivalently defined as the space of nontrivial translations of the lattices before twisting, as we now explain. A combination of translations is "trivial" if it differs from zero translation of each layer by the simultaneous translation of all layers by the same amount. In other words: consider the two identical lattices before twisting. Denote the group of translations of each layer modulo lattice translations by \(T_{i}\). (Note \(T_{i}\) will be isomorphic to the torus \(T^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}\).) Similarly denote the group of translations of the two lattices simultaneously (modulo translations that preserve the shared pre-twist lattice) as \(T_{12}\). The space of configurations is the space of translations of each layer, modulo simultaneous translations of the two layers: \[T_{\rm config}=T_{1}\times T_{2}/T_{12}. \tag{6}\] This space of configurations is itself a torus. We now relate this space to the moire pattern. Suppose we transform each layer by a linear transformation \(M_{i}\), e.g., for twist, \(M_{i}=R(\theta_{i})\). In terms of the matrices of lattice vectors before and after twisting, \[M_{i}=A_{i}A^{-1}. \tag{7}\] We now interpret this transformation as a position-dependent translation, which will give the \(T_{i}\)-coordinate in Eq. (6). To find the translation of one layer associated with a point \(x_{0}\) in real space, consider the map which first transforms physical space, then transforms back but centered at \(x_{0}\). (E.g., for a twist by \(\theta\), first twist about the origin by \(\theta\), then twist back around \(x_{0}\) by \(-\theta\).) Conceptually, the first transformation sets up the twisted system, and the latter re-aligns the layers without further translating \(x_{0}\). Algebraically, understanding that "transform around \(x_{0}\)" can be written as "translate \(x_{0}\) to the origin, transform, then translate back," the translation is given by \[x\to M_{i}^{-1}(M_{i}x-x_{0})+x_{0}=x-(M_{i}^{-1}-\mathbb{I})x_{0}, \tag{8}\] which is a translation because it takes the form \(x\to x-a\). This translation is then taken modulo the pre-twist lattice vectors to get the element of \(T_{1}\). Figure 2: A moiré lattice of two hexagonal layers with unit-length interatomic distance stacked with a relative twist angle of \(5^{\circ}\). 
Red and blue circles indicate an “AA-stacked” region where hexagons align and an “AB-stacked” region where they are offset, respectively. Doing this for each layer yields the translation operators that determine a point in configuration space defined by Eq. (6). Modding out by simultaneous translations in Eq. (6) yields the relative translation difference between the two layers, \[\tilde{C}(x)=(M_{2}^{-1}-M_{1}^{-1})x\mod A \tag{9}\] where \(A\) is the shared lattice before twisting. This is in one-to-one correspondence with the characterization of configuration space in Eq. (3). The moire unit cell is given by \[A_{M}=(M_{2}^{-1}-M_{1}^{-1})^{-1}A, \tag{10}\] which is exactly Eq. (4). Written in this way, the moire lattice is "factored" into one term, \(M_{2}^{-1}-M_{1}^{-1}\), that depends on the transformations but not the original lattice, and another term, \(A\), that depends on the lattice but not the transformations. The second term can be interpreted as the size of configuration space and the first as the rate at which the moire pattern explores that space. ### Generalization to near-commensurate twisting Now instead of two identical layers, consider two layers that form a small (i.e., not moire) commensurate supercell. Applying a small twist or lattice mismatch produces a moire pattern. For instance, two square lattices whose side lengths differ by a factor of \(\sqrt{2}\) form a commensurate supercell when arranged at a \(45^{\circ}\) relative orientation; when twisted by an angle near \(45^{\circ}\), they form a moire pattern as illustrated in Fig. 3. A second example is two identical honeycomb lattices twisted near a commensurate angle that is not a multiple of \(60^{\circ}\), as discussed in Ref. [61]; near-\(21.8^{\circ}\) TBLG is shown in Fig. 4. The abstract description of configuration space described in Eq. (6) extends to this case with only one minor modification: instead of considering the translations as acting on the lattices at zero twist, consider them at the relevant commensurate stacking. Hence, the \(T_{i}\) are now defined modulo the individual lattices at the commensurate stacking, whereas \(T_{12}\) is defined modulo the lattice vectors of the commensurate structure. An argument for the size of the moire pattern comes from Eq. (9) and the subsequent discussion. A linear transformation (e.g. twist) performed on a near-commensurate structure explores the configuration space at the same rate as the structure formed by performing the same transformation on a zero-degree stacked structure. However, the configuration space of the former is (perhaps counterintuitively) smaller, for reasons we now explain heuristically. The size of configuration space in the case of two layers stacked to form a supercell can be sensibly guessed from Eq. (6). Let \(\mathcal{A}_{i}\), \(\mathcal{A}_{C}\), \(\mathcal{A}_{\text{cs}}\) and \(\mathcal{A}_{M}\) denote the areas of the unit cell of layer \(i\), the commensurate supercell, configuration space, and the moire unit cell, respectively. Replacing each translation group in Eq. (6) by the area of the corresponding torus yields \[\mathcal{A}_{\text{cs}}=\frac{\mathcal{A}_{1}\mathcal{A}_{2}}{\mathcal{A}_{C }}. \tag{11}\] Exploiting the fact that \(\mathcal{A}_{i}=|\det(A_{i})|\) and guided by the intuition that \(A\) in Eq. 
(10) should be generalized to some "configuration space lattice," the area of the moire cell is \[\mathcal{A}_{M}=\frac{1}{|\det\bigl{(}M_{2}^{-1}-M_{1}^{-1}\bigr{)}|}\frac{ \mathcal{A}_{1}\mathcal{A}_{2}}{\mathcal{A}_{C}}. \tag{12}\] The intuition that we should use the configuration space lattice follows from factoring Eq. (10) as described in the text following that equation. (We give a rigorous description of how to find the "configuration space lattice vectors" \(A_{\text{cs}}\) in Appendix B and prove that they are indeed the analogue of \(A\) in Eq. (10).) As a concrete example, consider two identical lattices twisted at an angle \(\theta\) away from a commensurate stacking where the commensurate cell is a factor of \(N\) larger in area than the original unit cell (for instance, in near-\(21.8^{\circ}\) TBLG, the commensurate cell is \(7\) times larger in area than the original graphene cell). The size of \(T_{i}\) does not depend on how the layers are stacked, but \(T_{12}\) will be a factor of \(N\) larger in area when they are twisted \(\theta\) away from the commensurate stacking compared to when the layers are stacked at an overall twist angle of \(\theta\). Therefore, according to Eq. (11), the configuration space, which is defined modulo \(T_{12}\), would be a factor of \(N\) smaller. Since the matrices \(M_{1,2}\) in the denominator of Eq. (12) depend only on \(\theta\) and not on the supercell or original lattice, it follows that, contrary to the most obvious intuition, for two specified 2D layers, the larger the commensurate cell, the _smaller_ the moire pattern. Figure 3: Moiré pattern from two square lattices with side lengths \(1\) and \(\sqrt{2}\) arranged with a relative twist angle of \(42^{\circ}\). In Appendix B, in addition to formally deriving Eq. (11), the relative coordinates of heterostructures nearby a supercell configuration are derived, generalizing Eq. (9). ### A naive approach to configuration space with more than two layers We now try to apply the idea of configuration space as the translation of each layer modulo overall translations to heterostructures with more than two layers. We call this notion "naive configuration space" (in contrast to a more nuanced notion to be given in Sec. IV). For instance, in the case of three identical layers near zero stacking, as in twisted trilayer graphene, the local configuration space is a four-dimensional torus: \[T_{\rm config}=T_{1}\times T_{2}\times T_{3}/T_{123} \tag{13}\] In general, the local configuration space of \(N\) arbitrarily-twisted layers (with respect to a reference configuration) is a \((2N-2)\)-dimensional torus: \[T_{\rm config}=\left(\prod_{i}T_{i}\right)/T_{\rm all} \tag{14}\] Because this configuration space has dimension greater than two, we do not generally expect that it is fully explored. The consequence is a complex structure of overlapping moire patterns (illustrated for twisted trilayer graphene in Fig. 1b of Ref. [65]), and the four-dimensional space will generally be the correct parameter space for many layers twisted near a single commensurate structure of all layers (as can be seen in, e.g., Ref. [54]). As the next section will show, however, there are moire patterns that arise when multilayer structures are twisted near special incommensurate configurations. In these cases, more care is required to define which configurations are distinct in a way that will manifest on moire lengthscales: \(T_{\rm config}\) as written in Eq. 
(14) is not correct because \(T_{\rm all}\) is not the correct space by which to mod out. Figure 4: A moiré pattern formed by two unit triangular lattices arranged with a relative twist of \(22.4^{\circ}\) (\(21.8^{\circ}+0.6^{\circ}\)). The resulting triangular moiré lattice has a unit cell of side length 36.1, shown in green. The moire pattern is subtle, alternating between regions with individual sixfold-symmetric “centers” (red) and regions with triplets of “centers” connected in a triangle (blue). A larger picture of the moiré pattern is shown in Fig. S2-1. ## III Moire in frequency space An alternative to defining a moire pattern in real space is to define it by the appearance of low-frequency modes in momentum space. This approach is discussed at length in Ref. [66]; here we summarize by focusing on the modes of a black-and-white image. However, the content is much more general; see Appendix A for details. Consider a layered material as a set of transparencies placed over a light source. The atomic structure defines a local transmission coefficient \(T_{i}(x)\) that specifies how much light layer \(i\) lets through at point \(x\). For a black-and-white image, \(T_{i}(x)=1\) wherever the layer's image is white and \(T_{i}(x)=0\) where it is black; this paradigm extends to grayscale images using opacities between zero and one. By the definition of the transmission function, given \(T_{i}(x)\) in each layer \(i\), the resulting transmission function of the layered structure is given by: \[T(x)=\prod_{i}T_{i}(x), \tag{15}\] which defines how the resulting multilayer pattern is formed from the patterns of the individual layers. The moire-scale physics emerges by extracting the low-frequency modes. In each periodic layer \(i\), the Fourier transform is defined by: \[T_{i}(x)=\sum_{\mathbf{n}}c_{i,\mathbf{n}}\exp{(ik_{i,\mathbf{n}}\cdot x)} \tag{16}\] where the sum is over the reciprocal lattice vectors \(k_{i,\mathbf{n}}\). Fourier transforming Eq. (15) yields: \[\hat{T}(k)=[\hat{T}_{1}*\hat{T}_{2}*\ldots*\hat{T}_{N}](k), \tag{17}\] where \(*\) denotes the discretized convolution: \[[f*g](k)=\sum_{\mathbf{n},\mathbf{m}}c_{\mathbf{n}}d_{\mathbf{m}}\delta(k-k_{ \mathbf{n}}-k_{\mathbf{m}}^{\prime}), \tag{18}\] so that \[[T_{1}*\ldots*T_{N}](k)=\sum_{\mathbf{n}_{1},\ldots,\mathbf{n}_{N}}\left[\left( \prod_{i}c_{i,\mathbf{n}_{i}}\right)\delta(k-\sum_{i}k_{i,\mathbf{n}_{i}})\right] \tag{19}\] Therefore, a low-frequency (small-\(k\)) mode requires there exist a collection of modes \(\mathbf{n}_{i}\) so that \(\sum_{i}k_{i,\mathbf{n}_{i}}\approx 0\). This sum is the moire wavevector, \[k_{M}=\sum_{i}k_{i,\mathbf{n}_{i}}, \tag{20}\] which in turn yields the moire wavelength and orientation. Such a collection of modes arise naturally by considering a small deformation (twist, stretch, etc.) away from a reference configuration where \(\sum_{i}k_{i,\mathbf{n}_{i}}=0\) exactly. For a bilayer system, \(k_{\mathbf{1},\mathbf{n}}+k_{\mathbf{2},\mathbf{m}}=0\) is precisely a commensurability condition. The case \(\mathbf{n}=\mathbf{m}\) corresponds to the familiar near-zero-degree moire pattern for nearly-identical lattices. On the other hand, the case \(\mathbf{n}\neq\mathbf{m}\) corresponds to a near-commensurate moire, which can result when the two lattices differ in size (illustrated in Fig. 3) or are arranged near a commensurate angle (illustrated in Figs. 4 and 5). 
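Equation (20) can be explored by brute force. The sketch below (our own illustration) enumerates small integer combinations of reciprocal lattice vectors of two unit square lattices twisted by \(36.87^{\circ}+0.6^{\circ}\) and keeps the combination with the smallest nonzero total wavevector; it recovers the \((1,2;-2,-1)\) mode pair (up to an overall sign) and a moire wavelength of about 42.7, anticipating the near-commensurate example of the next subsection and Fig. 5.

```python
# Brute-force search for near-zero sums of reciprocal lattice vectors (Eq. (20)).
import itertools
import numpy as np

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

B1 = 2 * np.pi * np.eye(2)                                   # reciprocal basis, layer 1
commensurate = np.degrees(np.arctan2(3, 4))                  # 36.87 degrees
B2 = rot(commensurate + 0.6) @ B1                            # layer 2, twisted slightly away

best = None
for n in itertools.product(range(-3, 4), repeat=4):
    if n == (0, 0, 0, 0):
        continue
    k = B1 @ np.array(n[:2]) + B2 @ np.array(n[2:])
    if best is None or np.linalg.norm(k) < best[0]:
        best = (np.linalg.norm(k), n)

k_M, indices = best
print(indices)                 # the (1,2;-2,-1) combination, up to an overall sign
print(2 * np.pi / k_M)         # moire wavelength ~ 42.7, matching Fig. 5
```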
### Near-commensurate example As a concrete example, consider two square lattices arranged with a twist angle near the \(36.9^{\circ}\) commensurate angle, as illustrated in Fig. 5. The lowest Fourier modes before twisting are illustrated in Fig. 6; note the (1,2) mode of one layer coincides with the (2,1) mode of the other. The magnitude of the wave vector of these modes is \(|k_{36.9}|=\sqrt{5}k_{0}\), where \(k_{0}\) is the magnitude of the wave vector of the lowest mode of a single layer. In general, if two modes with a wave vector of magnitude \(|k|\) are initially aligned before twisting, then after a relative twist by an angle \(\theta\), the difference between the two wave vectors has magnitude \[|k_{M}|=2\sin(\theta/2)|k|, \tag{21}\] as is seen geometrically in Fig. 7 and can be derived mathematically by taking \(k_{1}=-R(\theta)k_{2}\) in Eq. (20). Accordingly, the moire pattern at \(36.9^{\circ}+\theta\) is a factor of \(\sqrt{5}\) smaller in real space than the moire pattern at \(0^{\circ}+\theta\) because \[|k_{M}^{36.9+\theta}| =2\sin(\theta/2)|k_{36.9}| \tag{22}\] \[=2\sin(\theta/2)\sqrt{5}|k_{0}|\] \[=\sqrt{5}|k_{M}^{0+\theta}|.\] The same result was obtained in Sec. II.3 through more complicated arguments in real space. The moire patterns obtained from twisting near a commensurate angle, as illustrated in Figs. 4 and 5, are fainter than those for the corresponding structures near zero degrees in Figs. 2 and 1, respectively. The faint pattern occurs because the higher-frequency modes have smaller amplitudes than the lowest mode, and therefore the coefficients \(c_{\mathbf{n}}d_{\mathbf{m}}\) in Eq. (18) are smaller. (The range of visibility of different near-commensurate moire patterns is also illustrated in Fig. 3.2 of Ref. [66].) ### Intrinsically multilayer moire The moire formalism in reciprocal space, i.e. Eq. (20), also provides a requirement for a moire pattern to exist in a multilayer heterostructure: there must exist a linear combination of reciprocal lattice vectors in the different layers that adds up to a vector much smaller than the reciprocal lattice vectors of the original layers. In the following, we provide a recipe for meeting this condition that is analogous to twisting near commensurate structures. First, find a stacking arrangement of the layers such that a reciprocal lattice vector can be chosen in each layer so that the sum over the chosen reciprocal lattice vectors in all layers is zero, i.e., \(\sum_{i}k_{i,\mathbf{n}_{i}}=0\), where \(k_{i,\mathbf{n}_{i}}\) is the chosen reciprocal lattice vector in layer \(i\). We call such a configuration _singular_ (following the terminology from Ref. [66]), which is a generalization of a commensurate configuration. Note this notation differs from Ref. [62], where incommensurate is defined as non-singular in our terminology. Once a singular configuration is identified, a small twist or stretch of each layer away from the singular configuration results in the same sum of reciprocal lattice vectors being nonzero but small. This small sum of the lattice vectors is precisely a reciprocal lattice vector of the moire lattice, as defined in Eq. (20). We call a moire pattern "intrinsically \(n\)-layer" if it originates from a singular configuration where no two layers are singular. In other words, an intrinsically \(n\)-layer moire material is one whose singular configuration is a sum of reciprocal lattice vectors from all layers that add to zero, but no two vectors from that sum add to zero by themselves. 
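The recipe above can be checked numerically for the simplest intrinsically trilayer case, three unit square lattices near \(120^{\circ}\) relative twists (the example analyzed below and in Fig. 8). The following sketch is our own illustration; the only input is the \((1,0)\) reciprocal lattice vector of each layer, and the stacking convention (successive layers rotated by the same angle) is an assumption consistent with the figure.

```python
# Residual sum of the three (1,0) reciprocal lattice vectors at and near the
# singular 120-degree stacking; the small residual at 119.3 degrees is the moire
# reciprocal lattice vector of Eq. (20).
import numpy as np

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

b = 2 * np.pi * np.array([1.0, 0.0])        # (1,0) reciprocal vector of a unit square lattice
for twist in (120.0, 119.3):
    k_M = b + rot(twist) @ b + rot(2 * twist) @ b
    wavelength = 2 * np.pi / max(np.linalg.norm(k_M), 1e-12)
    print(twist, np.linalg.norm(k_M), wavelength)
# 120.0 -> |k_M| ~ 0 (singular stacking); 119.3 -> wavelength ~ 47, consistent with Fig. 8.
```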
Notice this is distinct from, e.g., helically-twisted trilayer graphene [53; 54; 55; 56]; there the singular pattern is at zero twist angle, where _any_ two layers have reciprocal lattice vectors which add to zero. (Patterns where some layers are aligned, such as alternating-twisted trilayer[46; 47] and twisted double bilayer graphene[48; 49; 50; 51]), often have patterns that arise from only two misaligned sets of layers, rather than more than two; moreover, such patterns are always singular in themselves.) An example of an intrinsically trilayer moire pattern is three square lattices twisted near \(120^{\circ}\), illustrated in Fig. 8. The sum of the \(\mathbf{n}=(1,0)\) lattice vectors from each layer vanishes, so at \(120^{\circ}\) there is a singular structure. Notice that this singular structure is not commensurate; in fact, it is a twelvefold-symmetric quasicrystal. In general, the singular structures will be quasicrystalline, but not necessarily with higher rotational symmetries. Figure 5: A moiré lattice formed by two unit square lattices arranged at a relative twist of \(37.5^{\circ}\) (\(36.9^{\circ}+0.6^{\circ}\)), with a \(42.7\) side length moiré cell (green square). There is a resulting pattern of “holey regions” (red square) and “knitted regions” (blue square). A larger unannotated picture of the moiré pattern is presented in Fig. S2-2. #### iii.2.1 What is a singular structure? Since the notion of a "singular structure" is not a standard notion of the physics literature (although it has appeared in the mathematical literature on moire patterns; see Ref. [66]), it is worth spending a moment highlighting both how it is different from a commensurate structure and how it is different from a general twist angle. First, a multilayer system is _commensurate_ if the combined system has exact translation symmetries. In other words, there must exist lattice vectors \(a_{1,2}\) for the multilayer system such that, for each layer \(i\) with lattice vectors \(a_{1,2}^{(i)}\), the vectors \(a_{1,2}\) are integer linear combinations of \(a_{1,2}^{(i)}\). As shown in Appendix D, this definition of commensurate is equivalent to every layer being individually commensurate with the first layer. Therefore, in an \(N\) layer system with threefold or fourfold rotational symmetry, commensurability imposes \(2N-2\) scalar constraints (from \(N-1\) vector constraints) on the size and orientation of the lattice vectors. By contrast, consider the singularity condition \(\sum_{i}k_{i,\mathbf{n}_{i}}=0\), where \(k_{i,\mathbf{n}_{i}}\) are each reciprocal lattice vectors of layer \(i\). This imposes only two scalar constraints (one vector constraint) on the orientations of layers, regardless of the number of layers. For a bilayer system, the singularity condition is equivalent to commensurability, but with more than two layers, commensurability is a strictly stronger condition. Now contrast that situation with generic twist angles. Singular structures have a property unusual among twisted systems: the average, long-distance properties of the system are sensitive to relative translations of the layers, as we now explain. Given a system with a local property \(f(x)\), the average value of that property over an area \(A\) is given by \(\frac{1}{|A|}\int_{A}f(x)d^{2}x\). If that area becomes very large, under appropriate convergence conditions on \(f\), the average value converges to the Fourier transform of \(f\) at the origin, \(\hat{f}(0)\). 
Suppose now that \(f(x)\) can be written as a product of functions of each layer; e.g., for a trilayer system, \(f(x)=f_{1}(x)f_{2}(x)f_{3}(x)\), where \(f_{i}(x)\) is periodic with the periodicity of layer \(i\). Notice the transmission function defined in Eq. (15) has this property. The zeroth Fourier mode of \(f\) is determined by Fourier modes \(\hat{f}_{i}(k_{i})\) of each layer such that \(\sum_{i}k_{i}=0\), as shown in Eq. (19). If the layers are not stacked in a singular structure, the only solution to \(\sum_{i}k_{i}=0\) is when \(k_{i}=0\) in each layer. Therefore, the average value of \(f\) in the multilayer is a product of the average values of \(f\) in each individual layer; relative translations of the layers have no impact on this zeroth Fourier mode. By contrast, for a singular structure, there exists a nontrivial combination of Fourier modes in each layer that contribute to the average value of \(f\). For instance, consider a trilayer system with reciprocal lattice vectors \(k_{i}\) in each layer such that \(\sum_{i}k_{i}=0\). Further suppose Figure 6: Reciprocal space of two square lattices stacked at a commensurate 36.9\({}^{\circ}\) twist angle. Red(blue) open circles indicate the reciprocal lattice vectors of the top(bottom) layer; black filled circles indicate shared reciprocal lattice vectors. Thick lines shows that the (1,2) mode of the blue layer coincides with the (2,1) mode of the red layer. Light gray indicates the reciprocal commensurate lattice. Figure 7: Lowest frequency modes of two square lattices at a small relative twist. Red and blue circles indicate reciprocal lattice vectors of each layer. The small difference between the lowest modes \(k_{1}-k_{2}\) gives the moiré wavevector \(k_{M}\), from which Eq. (21) follows. \(f_{i}=c_{0,i}+2c_{1,i}\cos(k_{i}\cdot x)\), for some coefficients \(c_{0,i}\), \(c_{1,i}\). From Eq. (19), the zeroth Fourier mode of \(f\) is \[\hat{f}(0)=c_{0,1}c_{0,2}c_{0,3}+2c_{1,1}c_{1,2}c_{1,3} \tag{23}\] where the factor of 2 derives from the positive and negative contributions of the cosine. (If \(f_{i}\) had a rotation symmetry instead of being a 1D cosine, the factor of 2 would turn into a 4 or 6.) Now translating each layer \(i\) by \(a_{i}\) transforms the zeroth Fourier mode into \[\hat{f}(0)=c_{0,1}c_{0,2}c_{0,3}+2c_{1,1}c_{1,2}c_{1,3}\cos\Bigl{(}\sum k_{i} \cdot a_{i}\Bigr{)}, \tag{24}\] which is different for generic choices of \(a_{i}\). Thus, the physical consequence of a singular structure is that local properties of the multilayer are sensitive to relative translations. This is also true for commensurate structures, but is not true for a general non-singular or non-commensurate stacking. However, notice that for a fixed set of \(k_{i}\), Eq. (24) is invariant under the special set of translations \(a_{i}\) which satisfy \(\sum k_{i}a_{i}=0\). These special translations will be important in developing our notion of configuration space for multilayer systems in Sec. IV. As discussed in Sec. III, the condition that the physical quantity of interest is a product of properties in each layer, i.e., \(f=f_{1}f_{2}f_{3}\) for a trilayer system, simplifies the discussion, but can also be relaxed significantly. The more general description is given in Appendix A. #### iii.2.2 Labelling singular structures We now provide a convenient labelling schema for singular structures. 
Since a singular structure is specified by a combination of reciprocal lattice vectors that adds up to zero, it can be conveniently labelled by the integer indices of the reciprocal lattice vectors. Let \(b_{i,1}\) and \(b_{i,2}\) be the basis of reciprocal lattice vectors Figure 8: A moiré lattice of three unit square lattices at a relative twist of \(119.3^{\circ}\), resulting in a moiré unit cell of side length 47 (drawn in green). Local structures are shown at right. Top right illustrates the reciprocal lattice vectors at exactly \(120^{\circ}\) (left) and after the \(0.7^{\circ}\) deviation from the singular structure (right, deviation exaggerated for illustration purposes), resulting in the moiré reciprocal lattice vector \(G_{M}\) shown in green. A larger unannotated picture of the moiré pattern is presented in Fig. S2-3. in layer \(i\). Then a singular structure will be specified by a set of \(n_{i,j}\) that satisfy the singularity condition \[\sum_{i,j}n_{i,j}b_{i,j}=0. \tag{25}\] For a trilayer system, the singular structure given by \(n_{i,j}\) is labelled as \((n_{1,1},n_{1,2};n_{2,1},n_{2,2};n_{3,1},n_{3,2})\). This description can be generalized to any number of layers, including bilayers. Note that the labelling depends on the choice of reciprocal lattice vectors; thus, a set of \(n_{i,j}\) combined with knowledge of the reciprocal lattice vectors in each layer determines the singular structure. The \(n_{i,j}\) for an \(N\)-layer system naturally live in \(\mathbb{Z}^{2N}\). The singularity condition in Eq. (25) defines a 1D sublattice in this space. Assuming rotational symmetry, one choice of \(n_{i,j}\) yields another linearly-independent \(n_{i,j}\) after rotation. Thus, combined there is a 2D sublattice in \(\mathbb{Z}^{2N}\) satisfying the singularity condition. It is also possible for the sublattice to have a higher even dimension, as we will show for trilayer graphene in Sec. III.3. Regardless of dimension, we call the \(n_{i,j}\) that satisfy the singularity condition the _zero mode lattice_, because they correspond to combinations of Fourier modes in each layer that contribute to the \(k=0\) Fourier mode of the singular structure. Under the assumption that the sublattice is 2D and that the degree of rotational symmetry is known, each singular structure can be labelled by a single set of \(n_{i,j}\) that defines one of the basis vectors of the zero mode lattice; the other basis vector follows from rotational symmetry. As a few concrete examples: the standard near-zero moire pattern of two layers is the \((1,0;-1,0)\) moire pattern because \(b_{1,1}-b_{2,1}=0\). The near-\(21.8^{\circ}\) structure shown in Fig. 4 and the near-\(36.9^{\circ}\) structure in Fig. 5 are both \((1,2;-2,-1)\) moire patterns because \(b_{1,1}+2b_{1,2}-2b_{2,1}-b_{2,2}=0\) in both cases, despite their different rotational symmetry. Finally, the intrinsically trilayer pattern illustrated in Fig. 8 would be the \((1,0;1,0;1,0)\) moire, assuming the first basis vector of the three layers are chosen 120 degrees apart. #### iv.2.3 Degeneracy of singular structures We now consider how singular structures arise in the manifold of possible twists and lattice mismatches between the layers, which we call _deformation space_. (More generally, we could also include strains that break rotational symmetries in our deformations; we call this generalization _anisotropic deformation space_. 
However, since such deformations can result in 1D instead of 2D moire patterns, we neglect such transformations here and simplify our discussion by referring to our space of isotropic deformations by the shorter term.) Commensurate structures of bilayer systems are special among singular structures because they are zero-dimensional manifolds in deformation space: no small deformation of a bilayer singular structure yields the same singular structure. For instance, in the simple case of aligned layers (corresponding to the \((1,0;-1,0)\) commensurate structure), no combination of small relative mismatch or twist of the two layers will yield another \((1,0;-1,0)\) commensurate structure. This is not, however, the case for singular structures with more than two layers. With \(N\) layers there are \(2N-2\) possible isotropic deformations (twists and isotropic strains) of the layers relative to each other: each layer beyond the first adds two additional parameters (namely, strain and mismatch with respect to the first layer). The singular structure then adds two constraints (Eq. (25) and its rotated counterpart) on this deformation space, meaning that it forms a \((2N-4)\)-dimensional manifold in this space of deformations. Intuitively, this is because there is a continuum of ways to change the sides of the triangle that keep it a triangle. For example, given a triangle formed by reciprocal lattice vectors, one can deform two of the lattices by a combination of twists and (isotropic) strains while leaving the third fixed and still have a triangle, as illustrated in Fig. 9. In contrast, the only way to deform the layers and preserve a singular digon formed by the reciprocal lattice vectors of a bilayer is to perform an overall twist or isotropic stretch of both layers simultaneously. These singularity-preserving deformations are at the crux of understanding what the naive configuration space description in Sec. II.4 fails to see about intrinsically trilayer moire patterns, namely, why the effective parameter space seems to be periodically spanned by the two dimensional moire pattern even though the naive parameter space is four-dimensional. The connection between these pictures will be explained in Sec. IV.2. ### The doubly-singular structure of twisted trilayer graphene We now examine twisted trilayer graphene from the perspective of singular structures. Twisted trilayer graphene arises at the intersection of _two_ singular structures: the \((1,0;-1,0;0,0)\) singular structure and the \((0,0;1,0;-1,0)\) singular structure. In this sense, it is "doubly-singular"; therefore, with four singularity constraints instead of the two considered in the previous Figure 9: Starting from a particular singular structure, a small twist away combined with a corresponding strain results in another singular structure. These transformations yield a manifold of singular structures rather than an isolated point, as occurs for bilayers. section, the combination of singular structures is zero-dimensional, not 2D like the intrinsically trilayer pattern (the dimension is \(2N-6\) instead of \(2N-4\), where \(N=3\) for three layers). Twisting relative to the singular structure in this case be understood as generating multiple moire patterns simultaneously. Without a fine-tuned combination of twist and mismatch, the overlapping structure of the multiple moire patterns complicated quasiperiodic patterns, as illustrated in Refs. [59] and [65]. 
In the special case where the twist angles of the first and third layers are equal and opposite, however, something special happens: at \(\frac{1}{\theta^{2}}\) length scales, a single regular moire pattern is observed. This pattern is referred to as a "moire of moire," since it arises from a moire pattern induced by the two competing \(\frac{1}{\theta}\)-scale moire patterns. This \(\frac{1}{\theta^{2}}\)-order pattern can be understood as the pattern arising from the \((1,0;-2,0;1,0)\) singular structure. Specifically, defining \(k_{0}\) to be a smallest reciprocal lattice vector of graphene, the trilayer structure where the first and third layers are twisted a small amount in opposite directions away from the middle layer can be described by \(k_{1}=R(\theta)k_{0}\), \(k_{2}=-2k_{0}\), and \(k_{3}=R(-\theta)k_{0}\). Per Eq. (20), the moire wave vector is given by \[k_{M}=[R(\theta)+R(-\theta)-2\mathbb{I}]k_{0}=2(\cos(\theta)-1)\mathbb{I}k_{0}, \tag{26}\] which is of order \(\theta^{2}\) for small \(\theta\). Hence, the moire wavelength is of order \(\frac{1}{\theta^{2}}\). Moreover, since the order-\(\theta^{2}\) deviation is only from this particular singular structure, and not from the "doubly-singular" structure, it exhibits a single 2D moire pattern rather than complex overlapping structures. The relevant singular structures are illustrated in Fig. 10. ## IV Configuration space of intrinsically trilayer moire patterns There is an apparent contradiction between the naive configuration space described in Sec. II.4, which indicates that trilayers have complex moire patterns that cannot possibly fit on a lattice, and the intrinsically trilayer moire patterns presented in Sec. III, which very clearly do so. We seek to resolve this contradiction by a more nuanced description of the configuration space. The missing ingredient from the naive configuration space given in Eq. (14) is a collection of "nontrivial trivial transformations," which are nontrivial in that they do not correspond to overall translations, but trivial in that they do not change the local moire structure. The correct configuration space of the moire pattern is the set of translations of each layer modulo overall translations (i.e., simultaneous translations of all layers by the same amount) _and_ these new transformations. We now describe how to find these additional transformations. We do so in a way that naturally derives not only the dimensionality of the true configuration space, but also explains why it is toroidal. The intuition of the argument derives from the characterization of singular structures provided in Sec. III.2.1: singular structures are those structures for which certain relative translations of the layers change the average value of local quantities by providing phases between different contributions to the zeroth Fourier mode of the quantity of interest, as in Eq. (24). A moire heterostructure can be viewed as resulting from these different possible phases: different regions in the moire heterostructure correspond to different relative translations of the singular structure. The nontrivial trivial transformations we seek to find derive from the converse of that identification: any relative translation which does _not_ result in a phase will make no impact on average properties. Such relative translations that do not result in phases, therefore, are precisely the nontrivial trivial transformations. We find the nontrivial trivial transformations formally using in the frequency picture described in Sec. 
III. For simplicity, we take as a concrete example the \((1,0;1,0;1,0)\)-moire on the square lattice (illustrated in Fig. 8). The Fourier modes are indexed by \(\mathbb{Z}^{6}\), but the moire modes arise from the zero mode lattice described in Sec. III.2.2. In this specific case, the zero mode lattice is spanned by the vectors \((1,0,1,0,1,0)\) and \((0,1,0,1,0,1)\), which we call \(n^{(1)}\) and \(n^{(2)}\) (each of which also have indices, \(n^{(1,2)}_{i,j}\)). A translation of layer \(i\) by \(\mathbf{a}_{i}\) (not necessarily a lattice vector) will multiply the Fourier mode with indices \(n_{i,j}\) by a phase \(\exp\Bigl{(}\sum_{i,j}n_{i,j}\mathbf{b}_{i,j}\cdot\mathbf{a}_{i}\Bigr{)}\), which follows from the discrete Fourier transform in Eq. (19). For the relative translations which preserve the moire lattice, this phase vanishes when evaluated on the zero mode lattice. Clearly, translating each layer by the same amount, Figure 10: Several singular structures of TTLG along two specific slices of the four-dimensional parameter space \((\theta_{12},\theta_{23},\delta_{12},\delta_{32})\) indicated by solid colored lines. The dashed green line represents the constraint of helically-twisted trilayer graphene. The bilayer singular structures shown in the left figure deviate from helically-twisted trilayer graphene to order \(\theta\), but the trilayer singular structure shown in the right figure only deviates to order \(\theta^{2}\). Hence, the green singular structure produces a moire pattern at \(1/\theta^{2}\)-scale, whereas the bilayer singular structures plotted in blue/red/purple produce (competing) moiré pattern at \(1/\theta\) scale. \(\mathbf{a}_{i}=\mathbf{a}\), results in this phase vanishing on the zero-mode lattice, where \(\sum n_{i,j}^{(k)}\mathbf{b}_{i,j}=0\) for both \(k\). This imposes two constraints on the six-dimensional space. The additional constraints are found by setting \(\mathbf{a}_{1}=0\), at which point the constraint is \(\mathbf{b}_{2,i}\cdot\mathbf{a}_{2}=-\mathbf{b}_{3,i}\cdot\mathbf{a}_{3}\); the simplest two basis solutions are \(\{\mathbf{a}_{2}=\mathbf{b}_{3,1},\mathbf{a}_{3}=-\mathbf{b}_{2,1}\}\) and \(\{\mathbf{a}_{2}=\mathbf{b}_{3,2},\mathbf{a}_{3}=-\mathbf{b}_{2,2}\}\). These extra translations are most of the "nontrivial trivial transformations" we were searching for, and suffice to reduce the dimensionality of the configuration space from four to two. Note that this two-dimensional space is periodic, i.e., a torus rather than a plane, because the sum \(\sum_{i,j}n_{i,j}\mathbf{b}_{i,j}\cdot\mathbf{a}_{i}\) need not vanish identically for the phase to vanish; instead, it can be a multiple of \(2\pi\). This periodicity ensures that the final phase space is indeed a torus. Therefore, our final and most general characterization of the phase space is as the collection of relative translations of the layers modulo those which act trivially on the zero mode lattice (i.e., on the combinations of modes that contribute to the zero mode in the singular structure). Note that in multiply-singular structures, such as TTLG, the moire-generating lattice is greater than two-dimensional. Therefore, there is at least a four-dimensional manifold defining the configuration space. Consequently, one cannot regard this configuration space as being periodically fully explored in real space. This explains the difference between the complex patterns in TTLG and the periodic moire in intrinsically trilayer systems. 
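The collapse from the naive four-dimensional translation space to a two-torus can also be seen numerically. In the sketch below (our own illustration of the argument above), we fix \(a_1=0\) and build the \(2\times 4\) linear map from \((a_2,a_3)\) to the phases picked up by the two zero-mode generators; its null space is the two-dimensional family of "nontrivial trivial" translations.

```python
# Rank of the phase map for the (1,0;1,0;1,0) square example: a rank-2 map on a
# 4D translation space leaves a 2D null space of moire-invariant translations.
import numpy as np

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Reciprocal bases of the three layers at 0, 120, 240 degrees; columns are b_{i,1}, b_{i,2}.
B = [rot(120 * i) @ (2 * np.pi * np.eye(2)) for i in range(3)]

# Rows: zero modes n^(1) = (1,0;1,0;1,0) and n^(2) = (0,1;0,1;0,1); columns: (a_2, a_3).
phase_map = np.array([
    np.concatenate([B[1][:, 0], B[2][:, 0]]),   # phase of n^(1): b_{2,1}.a_2 + b_{3,1}.a_3
    np.concatenate([B[1][:, 1], B[2][:, 1]]),   # phase of n^(2): b_{2,2}.a_2 + b_{3,2}.a_3
])
rank = np.linalg.matrix_rank(phase_map)
print(rank, 4 - rank)    # rank 2 -> a 2D family of translations acts trivially on moire scales
```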
### Configuration space and lattice relaxation To illustrate the usefulness of configuration space, we consider lattice relaxation in intrinsically trilayer moire systems. Lattice relaxation is usually computed by taking an average energy density of any particular stacking configuration, then enlarging regions of low-energy stacking while shrinking regions of high-energy stackings [67]. We claim that _the average energy density of a singular structure on long wavelengths does not change under a nontrivial trivial transformation_. That is to say, structures in the naive configuration space (Sec. II.4) that differ by a nontrivial trivial transformation have the same energy density. We now justify this claim. Consider the energy density of a singular structure, \(\rho(x)\), and consider the energy density over some large region of radius \(R\), \(\frac{1}{\pi R^{2}}\int_{|x|<R}\rho(x)d^{2}x\). As \(R\to\infty\), this is precisely the zero-frequency mode of the Fourier transform of energy density, \(\hat{\rho}(k=0)\). Since the nontrivial trivial transformations preserve the zero mode lattice, they preserve any observable that is only dependent on that mode. In particular, they do not change \(\hat{\rho}(k=0)\). Therefore, the average density is only dependent on the reduced configuration space, not the higher-dimensional naive configuration space. In a moire heterostructure, this implies that long-wavelength lattice relaxations arise on the moire scale. A particular point \(x\) on the moire pattern specifies a specific stacking of the singular structure; let \(\rho_{x}\) denote the local average energy density for that singular structure. On length scales much longer than atomic lengthscales but much shorter than the moire lengthscale, we can approximate the local energy density by \(\hat{\rho}_{x}(k=0)\), i.e., the average energy density of the singular structure formed at the point \(x\). This is a moire-periodic function of \(x\), and since long-wavelength lattice relaxation can be extracted from the energy density, long-wavelength relaxations are periodic on the moire lengthscale. ### Relation to singular structure degeneracy We now connect the set of nontrivial trivial transformations to the singular manifold in deformation space (discussed in Sec. III.2.3). In short, while we have so far been considering a structure as a deformation from a particular singular _point_, it is more accurate to consider a structure as a deviation from the _manifold_ of singular structures generated by including stretch as well as twist. The nontrivial trivial transformations correspond to moving along this manifold. To understand the relationship, we begin by extending the discussion surrounding Eq. (8). Take a singular structure and transform each layer \(i\) by a matrix \(M_{i}\). Then consider the matrix (written here for a trilayer in terms of \(2\times 2\) blocks) \[M=\begin{bmatrix}M_{1}^{-1}-\mathbb{I}\\ M_{2}^{-1}-\mathbb{I}\\ M_{3}^{-1}-\mathbb{I}\end{bmatrix}. \tag{27}\] Similar to how in a small-angle twisted bilayer, each point in real space can be viewed as a specific untwisted stacking of the two layers, each point in a near-singular tri-layer heterostructure can be viewed as a specific singular stacking of the three layers. The matrix \(M\) maps a point in real space to the relative translation of layers required to transform the stacking at the origin to the stacking at that particular point. 
The specific matrices \(M^{\prime}\) which map \(\mathbb{R}^{2}\) to nontrivial trivial transformations (or overall translations) are precisely those which correspond to deformations \(M_{i}\) that _preserve the singular structure_ of our setup (i.e., those that move along the degenerate manifold of singular structures, rather than perturbing off of it). This makes sense because the low-frequency moire modes arise from deviations from the singular structure; therefore, moving along the singular structure manifold does not yield moire. This identification has important implications for twisted multilayer moire systems beyond trilayers. In a trilayer system, given a reciprocal lattice vector from each layer, there is a unique way to stack the layers (i.e., a unique set of twist angles) that results in a singular structure, provided such a structure is possible. This results from the fact that given three sides of a triangle, the interior angles of the triangle are determined. However, for four or more sides, the side lengths do not uniquely specify the interior angles, as shown in Fig. 11. Consequently, in a heterostructure with four or more layers, there are multiple twist angles that result in a singular structure. The same moire lattice can be formed by twisting away from either of these configurations. Since equivalent points on the two moire patterns differ only by a non-trivial trivial transformation, their moire-scale physics is identical. This is elaborated in Appendix A. A similar phenomenon occurs in a trilayer system if slight strain is included, i.e., for a given three layers, multiple singular configurations are possible if the layers can be isotropically strained in addition to being twisted. It may be possible to make use of the choice in reference configuration for theoretical insight or computational advantage, as discussed briefly in Appendix E. ## V Detection of intrinsically trilayer moire We now describe how to measure intrinsically multilayer patterns experimentally. A measurement that sees the moire pattern must probe each layer: in an intrinsically multilayer moire structure, no subset of layers alone will exhibit a moire pattern, unlike a trilayer moire of moire structure. ### Structural probes One standard way to detect moire patterns in bilayer systems is to use STM. However, a surface probe like STM primarily probes the top layer of a heterostructure. This effectively probes the moire pattern in a bilayer system because the top layer reconstructs on the moire scale. However, such a reconstruction may be weak in a multilayer system due to the large twist angles and multiple layers. Thus, we expect STM to be less effective at probing intrinsically trilayer moire patterns than at probing bilayer patterns. In contrast, we expect TEM - which has already been used to detect moire patterns in bilayer systems [68] - to be an ideal probe because it passes through all the layers. TEM does not require lattice relaxation to see the moire effect: the pattern that results from diffraction from each layer sequentially reproduces the sums of reciprocal lattice vectors discussed in Sec. III (see Eq. (20)). Therefore, intrinsically-trilayer structures will produce satellite peaks. ### Transport: engineering flat bands using intrinsically trilayer moire Transport probes of intrinsically multi-layer moire patterns depend strongly on the electronic structure of the underlying materials. 
A full study of engineering electronic structures from intrinsically trilayer moire is beyond the scope of this work. Instead, we propose a few promising platforms. #### v.2.1 Large-angle TBLG with a potential As a first setup, consider twisted bilayer graphene at a large angle. Unlike small-angle TBLG, if the \(K\) points of the two layers are significantly separated in momentum space after twisting, then interlayer hopping will couple only to high-energy states. This obstacle is overcome by sandwiching the large-angle TBLG between two copies of a third insulating layer chosen so that its reciprocal lattice vectors (almost) perfectly compensate the momentum difference between the \(K\) points of the two graphene layers. This setup is shown in Fig. 12.

Figure 11: Different quadrilaterals can be formed with the same side lengths. Consequently, a stacked four-layer system can have multiple singular configurations with different large twist angles, i.e., there are different twist angles such that \(\sum_{i}G_{i}=0\), where \(G_{i}\) is a reciprocal lattice vector in each layer. Moire lattices formed by twisting slightly away from these configurations exhibit the same physics on the moiré length scale, provided the small twists are chosen to give the same moiré lattice vectors.

Figure 12: Two layers of graphene (black) arranged with a large twist angle generate a moiré pattern with the additional outer (blue) layers on the exterior, which are aligned with each other. Left: Physical configuration of the layers. Right: The large black hexagons and small blue hexagons indicate the BZ of graphene and the outer layers, respectively. The reciprocal lattice vector of the outer layers couples the \(K\) points of the graphene layers, effectively compensating for the large twist.

If an electron feels a potential from the insulating layer as it hops from one graphene layer to the other, then it can hop from \(K\) in one layer to (nearly) \(K\) of the other, mimicking the process in TBLG. However, the resulting system is slightly different from magic angle TBLG because the Dirac cones are rotated with respect to each other. (The relative rotation of the Dirac cones in magic angle TBLG is small enough to be ignored.) A different large angle moire bilayer graphene structure was studied in Ref. [61]. There it was found that with one tuning parameter, a "hyper-magic manifold" with many flat bands and a kagome-like band structure emerges. In that paper, however, the authors were limited by needing to be near a commensurate structure. Our proposal described above avoids that limitation, at the cost of requiring a suitable third material. #### v.2.2 Two potentials imposed on a single layer Consider a layer of graphene sandwiched between two identical insulators. If the insulating layers are arranged at a small relative angle, then they will impose a superlattice potential on graphene, whose size is determined by the moire scale of the two layers. The effect of a superlattice potential on graphene has been extensively studied [69; 70; 71; 72; 73; 74; 75; 76; 77; 78]. Notably, an artificially imposed potential has been shown to produce satellite cones on a single layer of graphene [69] and is predicted to produce topological flat bands in bilayer graphene [79]. A superlattice potential on the surface of a topological insulator may also induce correlated topological phases [80; 81; 82]. Alternately, the outer layers can be arranged to form an intrinsically trilayer moire pattern with the center layer.
This set-up should yield the same band structure as the previous proposal, but with two physical differences. First, this structure's existence is now dependent on the orientation of the lattice in the center layer. This dependence on the center layer enables more tunability but requires additional control. Second, lattice relaxation effects between the two insulating layers are likely to be very small since they are not arranged at a small angle. Theoretically, this lack of relaxation indicates that a rigid rotation approximation is generally more accurate than a 1D network limit (studied in, e.g., Refs. [83; 84; 85]). The two setups are compared in Fig. 13.

Figure 13: Inducing two periodic potentials (reciprocal lattice vectors in blue) on a layer of graphene (BZ in black) produces a moiré superlattice (reciprocal lattice vector in red). Left: sandwiching graphene between two nearly-aligned layers produces an effective superlattice potential. The alignment of the graphene layer is unimportant. Right: if the two other layers are twisted at a large angle so their reciprocal lattice vector adds to one of graphene, intrinsically-trilayer moiré can arise.

## VI Conclusions We have presented a new kind of moire structure, "intrinsically trilayer moire," which results from twisting multilayers near certain special "singular structures." The local structure of such systems is quasicrystalline, but this quasicrystalline structure is periodically modulated on long (moire) lengthscales. We characterized the local configuration space of such systems, and showed that the previous description of configuration space for bilayer systems [62; 63] is insufficient to provide a real-space intuition for why these patterns arise. Our new notion of configuration space is useful to determine lattice relaxation effects. It also explains the \(\frac{1}{\theta^{2}}\) moire pattern in helically-twisted trilayer graphene [53]. Finally, we connected these abstract patterns to their material realizations. We described how to observe intrinsically multi-layer moire structures experimentally, contrasting STM and TEM probes' suitability for this purpose. We also proposed a few promising material realizations that may give rise to flat bands via either interlayer or intralayer hopping terms. Other possible future directions would be to examine higher-order interlayer hopping processes or layers that are individually strongly interacting. The systems we propose thus far are the simplest cases, and do not take advantage of the most potent aspect of these patterns: the ability to engineer moire heterostructures _without regard for lattice constant_. Intrinsically trilayer moire can be made from materials with _any_ lattice constant combination, and therefore enables engineering moire heterostructures with material combinations not previously imaginable, including those where the individual layers have vastly different physics. ## VII Acknowledgments We thank Cory Dean, Philip Kim, Abhay Pasupathy, and Ziyan Zhu for useful conversations. This material is based upon work supported by the National Science Foundation under the Columbia MRSEC on Precision-Assembled Quantum Materials (PAQM), Grant No. DMR-2011738. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. J.C. acknowledges the support of the Flatiron Institute, a division of the Simons Foundation, and the Alfred P. Sloan Foundation through a Sloan Research Fellowship.
2308.15209
Shared Lexical Items as Triggers of Code Switching
Why do bilingual speakers code-switch (mix their two languages)? Among the several theories that attempt to explain this natural and ubiquitous phenomenon, the Triggering Hypothesis relates code-switching to the presence of lexical triggers, specifically cognates and proper names, adjacent to the switch point. We provide a fuller, more nuanced and refined exploration of the triggering hypothesis, based on five large datasets in three language pairs, reflecting both spoken and written bilingual interactions. Our results show that words that are assumed to reside in a mental lexicon shared by both languages indeed trigger code-switching; that the tendency to switch depends on the distance of the trigger from the switch point; and on whether the trigger precedes or succeeds the switch; but not on the etymology of the trigger words. We thus provide strong, robust, evidence-based confirmation to several hypotheses on the relationships between lexical triggers and code-switching.
Shuly Wintner, Safaa Shehadi, Yuli Zeira, Doreen Osmelak, Yuval Nov
2023-08-29T10:55:44Z
http://arxiv.org/abs/2308.15209v1
# Shared Lexical Items as Triggers of Code Switching ###### Abstract Why do bilingual speakers code-switch (mix their two languages)? Among the several theories that attempt to explain this natural and ubiquitous phenomenon, the _Triggering Hypothesis_ relates code-switching to the presence of lexical triggers, specifically cognates and proper names, adjacent to the switch point. We provide a fuller, more nuanced and refined exploration of the triggering hypothesis, based on five large datasets in three language pairs, reflecting both spoken and written bilingual interactions. Our results show that words that are assumed to reside in a mental lexicon shared by both languages indeed trigger code-switching; that the tendency to switch depends on the distance of the trigger from the switch point; and on whether the trigger precedes or succeeds the switch; but not on the etymology of the trigger words. We thus provide strong, robust, evidence-based confirmation to several hypotheses on the relationships between lexical triggers and code-switching. ## 1 Introduction More than half the world's population today is multilingual, yet our understanding of the underlying linguistic and cognitive principles that govern multilingual language is imperfect. It is largely based on controlled laboratory studies, and only recently have psycholinguists begun exploring the extent to which insights from laboratory experiments can be applied in a real-world, communicative setting [20]. Lacking firm theoretical underpinnings, contemporary language technology often does not reflect the ubiquity of multilingual communication. We focus in this paper on _code-switching (CS)_, the natural tendency of bilingual speakers conversing with each other to switch between two languages, sometimes within a single utterance or even a single word. Our main goal is to explore a specific hypothesis related to CS, namely that certain words tend to _trigger_ CS more than others. The main contribution of this work is theoretical, but we trust that its results will be instrumental for improving future multilingual NLP applications. Several competing theories try to explain CS, and in particular to identify the factors that contribute to the (typically unconscious) decision of a speaker to code-switch. Speakers are conjectured to code-switch when the concept they are about to utter is more _accessible_ in the other language [1]; or more _specific_, lacking precise enough words in the current language [1]; or carrying a major _information_ load, so that the switch signals to the listener that an important concept is introduced [14]. The tendency to code-switch is influenced by linguistic factors (e.g., _cognates_ are assumed to trigger CS), socio-linguistic factors (e.g., the fluency of the interlocutors in each of the two languages), demographic ones (e.g., the age, gender or provenance of dialogue participants), and more [14, 15, 16]. We focus on the _triggering hypothesis_, whereby "lexical items that can be identified as being part of more than one language for the speaker [...] may facilitate a transversion from one language to another" [13, p. 162]. This hypothesis was explored extensively in the past, but earlier studies were limited in scope, were based on limited data, and addressed only spoken language. This work makes several contributions. 
First, we investigate a specific type of lexical trigger: we define a category of lexical items (mainly proper names and culturally specific terms) that we expect to reside in more than one (or alternatively, in a _shared_) mental lexicon (Section 3). We also pay attention to whether such items originate in one of the two languages or in a third language. Second, unlike previous work, which dealt exclusively with spoken data, we investigate both spoken and written data, in five large datasets that include CS in three language pairs: English-Spanish, English-German and English-Arabic1 (Section 4). Third, while we employ the same statistical test that has been used by previous works to assess the association between such shared items and CS, we augment the analysis by also quantifying the _magnitude_ of this association as an indication of the strength of the phenomena we observe (Section 5), thereby adding statistical rigor to our analysis. Footnote 1: The Arabic in this work reflects mostly the dialects of Egypt and Lebanon, and is written in _Arabizi_, an informal writing system that uses the Roman alphabet. See Section 4. Our results (Section 6) show strong associations between the presence of shared items (the type of trigger we focus on) and the tendency to code-switch, in all language pairs and datasets. We also provide a thorough and nuanced analysis (Section 7) of the location of the shared item with respect to the switch point, showing that the tendency to switch is lower when the trigger is adjacent to the switch rather than precedes it; and that the association between triggers and CS diminishes as the shared items are more distant from the switch point. Overall, we provide a much fuller, more nuanced picture of the relationships between lexical triggers and CS than was available so far.2 Footnote 2: All resources produced in this work, including the annotated datasets and the code, are publicly available on our GitHub repository. ## 2 Related work Multilinguality is becoming more and more ubiquitous, to the extent that psycholinguists increasingly acknowledge that bilingualism is the rule and not the exception (Harris and McGhee Nelson, 1992). Grosjean (2010, page 16) stated that "bilingualism is a worldwide phenomenon, found on all continents and in the majority of the countries of the world" and Grosjean and Li (2013) assessed that more than half the world's population today is multilingual. Monolingual and multilingual speakers alike seamlessly adjust their communication style to their interlocutors (Bell, 1984; Pickering and Garrod, 2004; Kootstra et al., 2012; Gallois and Giles, 2015; Fricke and Kootstra, 2016). Specifically, when interlocutors share more than one language, they almost inevitably engage in CS (Sankoff and Poplack, 1981; Muysken, 2000; Clyne, 2003). Most linguistic research on CS has focused on _spoken_ language (Lyu et al., 2010; Li and Fung, 2014; Deuchar et al., 2014, _inter alia_). However, with the rise of social media, _written_ CS (Sebba et al., 2012) has become a pervasive communication style (Rijhwani et al., 2017). The spoken language domain is not directly comparable to the written one, and findings on CS in written conversations differ somewhat from those in speech (McClure, 2001; Chan, 2009; Gardner-Chloros and Weston, 2015). The work we present here addresses both modalities. Various competing theories attempt to explain CS, or at least to propose factors that contribute to the tendency of bilingual speakers to code-switch. 
Notable among them is the _triggering hypothesis_, which states that specific lexical items that may be included in more than one mental lexicon for the speaker _trigger_ switching (Clyne, 2003). Such lexical items include, according to Clyne, _lexical transfers_ (i.e., borrowed words and expressions), _bilingual homophones_ (including loans from a third language) and _proper nouns_. In this work we focus on a specific type of potential triggers, consisting mainly of proper names but including also culturally specific lexical items that originate in one language and do not have a readily available translation in the other language (e.g., 'taco', originally from Spanish, in English-Spanish dialogues, or'muezzin', originally from Arabic, in English-Arabic conversations). The triggering hypothesis was explored extensively by Clyne (1967, 1972, 1980, 1987), but these early investigations did not include any statistical analysis. This was first introduced by Broersma and De Bot (2006), who worked with "a series of transcribed conversations between three Dutch-Moroccan Arabic bilinguals". This dataset was extremely small by modern standards (it included a few dozen switch points and a few dozen potential triggers). Similarly, Broersma (2009) based her entire analysis on a single 24-minute interview with a single (Dutch-English speaking) informant. Still, both were able to find statistically significant associations between triggering and CS. More recently, Soto et al. (2018) extended this investigation to a larger corpus (the Bangor-Miami corpus of Spanish-English (Deuchar, 2009)), but focused only on a pre-defined list of cognates that they collected. In contrast, we work with much larger datasets that include thousands of switch points and potential triggers, in three different language pairs, and with both spoken dialogues and written social-media interactions. Broersma and De Bot (2006) (and, subsequently, also Broersma (2009) and Soto et al. (2018)) used the \(\chi^{2}\) test to measure the correspondence between triggering and CS. We use the same measure (more precisely, _Fisher's exact test_, whose significance does not rely on an approximation that is only exact in the limit); but we extend the analysis by considering not only the statistical significance of the test, as determined by its \(p\)-value, but also the magnitude of the association between categories as an indication of the strength of the phenomena we observe, as determined by _relative risk_ (also known as _risk ratio_). This facilitates a much more nuanced analysis of the results. ## 3 Goals Our main goal in this work is to explore the triggering hypothesis more closely, focusing on a class of lexical items that we expect to be shared across the multiple mental lexicons of the multilingual speaker. Extending previous research, we aim at addressing the association between such shared items and CS in multiple datasets reflecting three different language pairs (EN-AR, EN-DE and EN-ES)3 and two different modalities (spoken and written). Footnote 3: We use _AR_ for Arabic, _DE_ for German, _EN_ for English, and _ES_ for Spanish. ### _Shared_ lexical items Shehadi and Wintner (2022) defined _shared_ lexical items as named entities in one language that are not translated to the other, and consequently have a similar form in both languages. They also included terms that lack (or have rare) translation equivalents in the other language. 
Following Osmelak and Wintner (2023), we refine the definition of _shared_ items by reflecting also the language in which such terms originate. Our motivation is the assumption that a word like 'taco', which originates in Spanish but is fully adopted by English, may trigger code-switching from English to Spanish but perhaps less so in the reverse direction. In addition, words in \(L_{1}\) that do not have a commonly used translation equivalents in \(L_{2}\), and are hence used in both languages (e.g., 'taxi', which is commonly used in many Arab-speaking communities) are not considered a code-switch themselves but may trigger code-switching.4 Specifically, we divide the _shared_ category to three sub-categories, depending on the origin of the word. Footnote 4: Another deviation from the scheme of Shehadi and Wintner (2022) is that we treat named entities that are specific to a foreign language as words in that language. For example, ‘_Lebanon_’, which is an English-specific variant of the Arabic ‘_hubnan_’, is viewed as an English token. Footnote 5: These categories are defined separately for each language pair. E.g., _shared-Arabic_ is defined only for the EN-AR datasets. The same holds for _shared-Other_. **Shared English** Named entities shared between two lexicons that originate in English, including person names (e.g., '_Johnson_'), commercial entities (e.g., 'Twitter', '_Seven Eleven_'), and geographic names that contain English words (e.g., '_Times Square_'). Also included are English-originating cultural terms that are adopted by the other language (e.g., 'taxi', '_film_') and English acronyms used cross-culturally on social media (e.g., '_lol_'). **Shared Arabic/German/Spanish** Named entities shared between the two lexicons6 whose origin is Arabic (e.g., 'Salah', '_Bahrain_'); German (e.g., 'Merkel', '_Berlin_'); or Spanish (e.g., 'Carlita', 'Guatemala'). Also, culturally dependent terms originating in these three languages that do not have translations in English, e.g., Arabic '_Ramadan_', German '_schnitzel_' or Spanish 'taco'. This category includes also interjections that are identified with one of these languages, e.g., Spanish '_jajajaja_'; and acronyms that expand to those languages (e.g., Spanish '_PR_' for '_Puerto Rico_' or German '_NRW_' for '_Nordrhein-Westfalen_'). Footnote 6: Another deviation from the scheme of Shehadi and Wintner (2022) is that we treat named entities that are specific to a foreign language as words in that language. For example, ‘_Lebanon_’, which is an English-specific variant of the Arabic ‘_hubnan_’, is viewed as an English token. **Shared Other** Words and terms that are used in both languages, but are not clearly identified with either of them, including named entities or terms that originate in a third language (e.g., '_Erdogan_', 'Pikachu_' or '_pizza_'); terms whose origin is English but that do not include strong English linguistic features (e.g., '_iPod_'); interjections that are commonly used in both languages (e.g., '_oh_' or '_wow_'); person names that are common in both languages (e.g., '_Lily_', '_Adam_'); and geographical terms that originate in a third language and are written and pronounced similarly in both languages (e.g., '_Vietnam_'). It is important to note that the tagging is context dependent: much like named entities, shared items may have different readings (i.e., tags) depending on the context in which they occur. Consider the two examples below. 
The token '_warda_' is tagged as Arabic in Example 1, but as shared Arabic in Example 2. Consequently, using lists of shared items (as was done in previous work, e.g., by Soto et al. (2018)), is not a sufficient solution. (1) _Maynfl3sh warda wahda tayep!_ it doesn't work flower one only! 'It doesn't work, only one flower!' (2) _kan beydafe3 3an amr warda_ was defend about Amr Warda 'He was defending Amr Warda' Finally, note that some shared items are multi-word, e.g., 'amr warda' (a person name) in Example 2. When all tokens have the same origin \(L\), we label the item shared-\(L\); we do the same also when some tokens are shared-\(L\) and others are shared-Other. But if one token is shared-\(L_{1}\) and the other is shared-\(L_{2}\), we label each token differently. For example, we tag 'Nueva York' as shared-Spanish followed by shared-English. ### Hypotheses We pose the following hypotheses: 1. _Shared_ lexical items are associated with CS, i.e., they tend to co-occur in the same utterances. This is the main hypothesis investigated intensively by Clyne's many works and by subsequent research, but we define shared items somewhat differently here, not relying on predefined (or manually annotated) lists of cognates and proper names. 2. Such tendencies are more pronounced when the trigger is closer to the switch point. Previous work investigated "adjacent words", whereas we investigate shared words located up to 6 tokens from the switch point. 3. Triggers that precede the switch point are more strongly associated with CS than those that are adjacent to them. Broersma and De Bot (2006) explain that the trigger can succeed the CS point because language planning does not always work linearly, and the choice of language for words is not necessarily aligned with the linear order of these words in a sentence. They therefore search for "basic clauses" that contain both switches and trigger words, in any order. We do not define basic clauses, resorting instead to a fixed-length window around shared items. But we do check, separately, the case of shared items that precede the CS point, and those that occur on either side of the CS point. We do not separately investigate potential triggers that _follow_ the CS point because we expect the association in such cases to be weak. We focus instead on triggers _near_ the switch, on either side of it, and compare this situation with triggers that strictly precede the switch. 4. Terms that originate in language \(L_{1}\) are more likely to trigger a switch from \(L_{2}\) to \(L_{1}\) than the other way round. Our rationale here stems from the assumption that shared-\(L_{1}\) words may be more deeply rooted in the lexicon of \(L_{1}\) than the lexicon of \(L_{2}\), even if they are included in both; and hence are more likely to trigger switches _to_\(L_{1}\) than _from_\(L_{1}\). These hypotheses are based on a precise definition of what constitutes a CS point (detailed in Section 5.1). But first, we describe the datasets we use to investigate these hypotheses. ## 4 Data We use five different datasets, in three language pairs. The texts are either transcribed dialogues (in the case of Bangor-Miami) or sequences of utterances that constitute a _thread_ (in the case of social media). We view a turn of a single author/speaker as a basic unit; if the dataset is not already tokenized, we segment turns to utterances and then to tokens using NLTK Bird et al. (2009). Each token is then associated with a language ID tag. 
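As an illustration of this preprocessing step, a minimal sketch (not the authors' pipeline; the language-ID tagger below is a placeholder standing in for their manual annotation or trained classifier) segments a turn into utterances and tokens with NLTK and pairs each token with a tag:

```python
# Requires the NLTK "punkt" tokenizer models: nltk.download("punkt")
from nltk.tokenize import sent_tokenize, word_tokenize

def preprocess_turn(turn_text, tag_token):
    """Split one speaker turn into utterances; return lists of (token, language ID)."""
    tagged_utterances = []
    for utterance in sent_tokenize(turn_text):
        tokens = word_tokenize(utterance)
        tagged_utterances.append([(tok, tag_token(tok)) for tok in tokens])
    return tagged_utterances

# Dummy tagger for illustration only; real tags come from annotation or classification.
dummy_tagger = lambda tok: "EN"
print(preprocess_turn("Nos vemos en Times Square. See you there!", dummy_tagger))
```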
**Arabic-English** We used the English-Arabizi (Arabic written in the Roman alphabet) dataset compiled and released by Shehadi and Wintner (2022). This corpus includes social media posts from Reddit and Twitter; it contains 2,643 utterances that were manually annotated for language ID (at the word level), which were used to train a highly accurate classifier (the accuracy of identifying words in Arabizi and English was 95%; identifying _shared_ items was only 84% accurate, with a precision of 89% and much lower recall). The classifier was then used to automatically annotate additional utterances, resulting in a total of over 865,000 utterances that include CS between English and Arabizi. Each word in this dataset is associated with a unique language ID: Arabizi, English, French, Arabic, Shared, or Other.
[...] tagged as _shared-English_ are introduced, and evidently the author switches to English. Finally, Example 5 begins in German and ends in English, perhaps in connection with the use of 'schnitzel', which is _shared-German_. ## 5 Methodology ### Definition of CS points To check the association between shared items and CS, the latter concept must be carefully defined, which is not always a trivial task [12]; previous work has sometimes been careless with this. We consider CS to be a property of a single token, defined as follows: A token \(w\) is considered code-switched from \(L_{1}\) to \(L_{2}\) when: (i) \(w\) is labeled as \(L_{2}\); (ii) it is preceded (in the same utterance) by a sequence of \(n\geq 0\) tokens labeled neither as \(L_{1}\) nor as \(L_{2}\); and (iii) this sequence is preceded (in the utterance) by a token labeled as \(L_{1}\). This definition allows for sequences of shared lexical items (and other tokens, e.g., emoji) to intervene between a token in \(L_{1}\) and a token in \(L_{2}\); the CS point is the first \(L_{2}\) token that follows such a sequence. Having said that, we exclude some CS points from our analysis: we treat _insertional_ switches differently from _alternational_ ones. [13] defines alternation as "a true switch from one language to the other, involving both grammar and lexicon". All three examples in Table 4 are alternational. In contrast, insertion is the embedding of a phrase from one language into an utterance that is otherwise in the other language. Example 6 demonstrates insertional CS: the English token 'technically' is inserted into an otherwise fully Arabic utterance. _Gamma3a e7na technically fi ramadan_ guys we're in Ramadan 'Guys, we're technically in Ramadan' It is common to assume that insertional CS like the one in Example 6 involves _a single_ trigger, which affects the tendency to switch from \(L_{1}\) to \(L_{2}\); the switch back to \(L_{1}\) is merely an inevitable consequence of the CS being insertional. Therefore, we exclude from our analyses the second switch in the case of insertional CS.10 Footnote 10: We experimented also with the alternative approach, namely treating insertional and alternational switches identically. The results were pretty similar. We operationalize this as follows: given a sequence of tokens \(w_{1}w_{2}w_{3}\), where \(w_{1}\) and \(w_{3}\) are in \(L_{1}\) and \(w_{2}\) is in \(L_{2}\), we only consider \(w_{2}\), but not \(w_{3}\), to be a CS point. This does introduce noise occasionally, especially because some insertional switches involve the insertion of two, and sometimes even three tokens, as in Example 7.
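This detection rule, together with the single-token insertional exclusion just described, can be summarized in a short sketch (ours, for illustration only; the tag values are hypothetical placeholders):

```python
def detect_switches(tags, l1, l2):
    """Return (index, language switched into) for each CS point in one utterance.

    tags is the per-token language-ID sequence. A token tagged l1 or l2 is a
    CS point when the most recent preceding token tagged in either language is
    in the other one; any number of tokens tagged neither l1 nor l2 (shared
    items, emoji, ...) may intervene. The switch back out of a single-token
    insertion (w1 w2 w3 with w1, w3 in one language and w2 in the other) is
    not counted.
    """
    switches = []
    prev_lang = None                      # language of the last l1/l2 token seen
    for i, tag in enumerate(tags):
        if tag not in (l1, l2):
            continue                      # shared items etc. never break a run
        if prev_lang is not None and tag != prev_lang:
            insertion_return = (i >= 2
                                and tags[i - 1] in (l1, l2)
                                and tags[i - 1] != tag
                                and tags[i - 2] == tag)
            if not insertion_return:
                switches.append((i, tag))
        prev_lang = tag
    return switches

# Example 6-like pattern: AR AR EN AR -> only the switch into EN is counted.
print(detect_switches(["AR", "AR", "EN", "AR"], "AR", "EN"))   # [(2, 'EN')]
```

Insertions of two or three tokens, as in Example 7, are not caught by the last check, so the switch back out of such longer insertions is still counted.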
In such cases, the switch back to Arabic will (erroneously) be taken into consideration in our analyses. _Mafi good internet b kel leben_ there-isn't in all Lebanon 'There's no good internet in all of Lebanon'_ ### Statistical analysis To explore the associations between shared items (as defined in Section 3.1) and CS (as defined above), we ran a multitude of statistical tests. The tests vary in terms of the dataset used, the type of shared items investigated (the three sub-classes of shared items, or all shared items combined), the direction of the CS (from English to the other language or vice versa), whether the shared item _precedes_ the CS point or _neighbors_ it (given a shared item, we look for CS points following it, but also adjacent to it on either side), and the distance between the two (we look at CS distanced at most 1 to 6 tokens from the shared item). We now outline the structure of a single such test, where the dataset is the SentiMix corpus, the type of shared item is shared-English, the CS direction is EN\(\rightarrow\)ES, and the CS follows the item, at a distance of at most 2 tokens. Table 5 depicts the data used in this test: it is a \(2\times 2\) contingency table, whose columns indicate if the lexical item is shared or not, and whose rows correspond to the presence or absence of CS points near the shared item. The sum of the numbers in the first column is the number of shared items in the dataset, and in the first row, the number of switch points investigated (a single CS point may be counted several times for different shared items). We exclude from the investigation the first and the last token in each utterance (the last token in an utterance cannot trigger a switch following it; and the first is limited in triggering a switch neighbouring it). Across the shared items in the dataset, the proportion of switches (within the specified distance) is \(216/(216+659)=24.7\%\), whereas across the non-shared items, this proportion is \(17515/(17515+143299)=10.9\%\). Thus, the propensity to switch is \(24.7/10.9=2.266\) times higher near a shared item, compared to a non-shared item. We refer to the latter ratio as the _relative switching propensity_; mathematically, it is analogous to the well known "relative risk" (or "risk ratio") from epidemiology and biostatistics (Rothman, 2012). We test whether this ratio differs from 1 in a statistically significant way via a Fisher test, and obtain a \(p\)-value of \(2.2\times 10^{-30}\), indicating a highly significant increased tendency to switch near a shared item. Clearly, the relative switching propensity equals 1 if and only if the odds ratio equals 1, and the same statistical test (namely, Fisher's) is appropriate for studying either of these quantities. We prefer to use the relative switching propensity to quantify the magnitude of the association between shared items and CS, as it is more readily interpretable: in the above example, the number \(2.266\) is our estimate for the factor by which switches are more common near shared items, compared to near non-shared items. ## 6 Results Section 5.2 defines multiple statistical tests: first, we work with five datasets; for each dataset, we individually explore four types of shared items. For example, with Bangor-Miami we independently explore shared-English items, shared-Spanish, shared-Other, and all shared items combined. We depict the results of each such "multi-test", with a specific dataset and a specific type of shared item, as one plot. 
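Each point in such a plot boils down to the \(2\times 2\) computation outlined above. As a minimal illustration (ours, not the authors' code, and assuming SciPy is available), the Table 5 counts can be analysed as follows:

```python
from scipy.stats import fisher_exact

# Counts from Table 5: rows = switch yes/no, columns = shared yes/no.
table = [[216, 17515],
         [659, 143299]]

prop_shared = 216 / (216 + 659)          # proportion of switches near shared items (24.7%)
prop_other = 17515 / (17515 + 143299)    # proportion near non-shared items (10.9%)
relative_switching_propensity = prop_shared / prop_other   # about 2.266

_, p_value = fisher_exact(table)         # two-sided Fisher exact test
print(relative_switching_propensity, p_value)
```

The same computation, repeated for every dataset, shared-item type, switch direction, window position and distance, yields the points in the multi-test plots described next.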
On each plot we depict the results of 36 statistical tests, which differ by the type of switch (EN\(\rightarrow\)ES or ES\(\rightarrow\)EN or both); whether the shared item precedes the CS point or neighbors it; and finally, the distance between the two (1-6). The result of each of these 36 statistical tests is depicted as a point in a graph, where the \(X\) axis is the distance (1-6) and the \(Y\) axis indicates the value of the relative switching propensity for the specific statistical test. This facilitates a clear view of the magnitude of the effect (i.e., the strength of the association) of each test. Additionally, when the \(p\)-value of a particular test is greater than 0.05 (the usual threshold for statistical significance), we indicate this as a black diamond marking on the point that corresponds to that test.

Table 4: Example utterances with their language ID annotations.

\begin{table} \begin{tabular}{l l c c} & & \multicolumn{2}{c}{**Is**_shared_} \\ & & Yes & No \\ \cline{3-4} \multirow{2}{*}{**Switch**} & Yes & 216 & 17515 \\ & No & 659 & 143299 \\ \cline{3-4} & Proportion of switches & 24.7\% & 10.9\% \\ \end{tabular} Relative switching propensity: 2.266; \(p\)-value: \(2.2\times 10^{-30}\). \end{table} Table 5: Contingency table constructed for the SentiMix corpus, reflecting EN\(\rightarrow\)ES switches that follow shared-English lexical items at distance at most 2 from the switch point.

Figure 1 shows the multi-test plot reflecting the results on the Bangor-Miami corpus with shared-Spanish items. The 36 points on this plot are connected by lines: solid lines reflect statistical tests where the shared item precedes the CS point, and dashed lines are for statistical tests where the shared item can occur before or after a CS point. The color of the line reflects the type of switch: EN\(\rightarrow\)ES (yellow), ES\(\rightarrow\)EN (red), or both (green). See the legend to the right of the plot. Several observations are revealed in Figure 1. First and foremost, with no exceptions, all the tests yield statistically significant results (\(p<0.05\)), as there are no black diamonds on the plot. This fundamentally supports our first hypothesis, namely that there is a clear association between shared items and CS. Furthermore, all the lines are monotonically decreasing, or at least non-increasing, thereby confirming our second hypothesis: the association between shared items and CS is stronger when the two are close, and diminishes as the distance between them increases. The fact that the solid lines are always above the dashed lines confirms our third hypothesis: the association is stronger with shared items that precede CS points than with shared items that are adjacent to them, on either side. Finally, the solid red line is always above the solid yellow line, indicating that shared-Spanish items are more strongly associated with CS from Spanish to English (red) than with CS from English to Spanish (yellow), in contrast to our hypothesis. This pattern, however, is partly reversed in the dashed lines: the jury is still out on our fourth hypothesis.
While Figure 1 summarizes the results of 36 statistical tests, it is only one out of 20 similar "multi-tests": we have similar plots for five datasets, with four types of shared items per dataset. Space limitations prevent us from presenting all of them here (they will be included in the supplementary materials), but we do show a similar plot of the Denglisch EN-DE corpus, with data reflecting shared-English items, in Figure 2. In addition, we now analyse the aggregate results of all 20 multi-tests in light of our four hypotheses.

Figure 1: A multi-test plot depicting the results of 36 tests on the Bangor-Miami corpus with shared Spanish items.

## 7 Analysis **Association between shared items and CS.** Our first hypothesis was that shared items are indeed associated with CS. To assess the association, we expect the Fisher test to yield a statistically significant result in each of the statistical tests (i.e., no black diamonds in the plots). Not surprisingly, we do find such association. Recall that each plot (such as the one in Figure 1) depicts the results of 36 tests, and that we have 20 such plots. Of the 720 statistical tests, only 10 (1.4%) yield \(p\)-values greater than \(0.05\). We thus overwhelmingly establish the hypothesis that in all our datasets there is significant association between shared items of all kinds and CS, even when the shared item is as far as 6 tokens away from the CS point, and even when the shared item is adjacent to (i.e., may succeed) the CS point. It is interesting to note that of the 10 exceptions, 8 are in the Denglisch corpus. We return to this below. **The impact of the distance between shared items and the CS point.** Our second hypothesis was that the magnitude of the association diminishes as the shared item and the CS point are more distant. To confirm this hypothesis we need to show that the lines in the plots are decreasing, or at least non-increasing. This is indeed the case for 98 (82%) out of the 120 lines (6 per plot). Furthermore, most of the lines that are not decreasing include only a single point that violates the hypothesized trend. The main issue is, again, with the DE-EN dataset, which is responsible for 13 out of the 22 exceptions. **Shared items before or adjacent to CS points.** We also hypothesized that the shared item "triggers" CS, namely that such items are more influential when they precede the CS point, as compared to when they are merely adjacent to it (on either side). To establish this hypothesis we need to show that the solid lines are above the dashed ones. Of the 720 data points in our 20 plots, only 38 did not comply with this condition; in other words, our hypothesis holds for almost 95% of the points. One potential reason for the existence of outliers may be noise in our definition of insertional switches. Recall from Section 5.1 that we try to find triggers for all switches except switches "back from" an insertion, but our definition of insertional CS assumes that they consist of exactly one token, whereas in reality some of them are multi-word expressions. We expect that switching "out of" such longer insertional switches does not require a trigger, but our analysis nonetheless looks for one. Interestingly, all but one of the outliers involve switches _from_ English to the other language (yellow lines). We do not have an explanation for this observation.
**The etymology of the shared item and its relation to the direction of the switch.** Finally, we hypothesized that shared-\(L_{1}\) items are more strongly associated with \(L_{2}\to L_{1}\) switches than with \(L_{1}\to L_{2}\) switches. This hypothesis is not supported by the data. For example, in the two AR-EN datasets, switches _to_ English were systematically more prominent than switches _from_ English, independently of the type of the shared item. In the EN-ES datasets, shared-EN and shared-ES items were associated with switches of both types almost to the same extent; and the DE-EN dataset also showed mixed, inconsistent results. One potential explanation for this observation has to do with insertional switches. With the exception of Bangor-Miami, our datasets reflect social media interactions on platforms that typically include discussions in English. We conjecture that when a particular discussion is conducted in another language, insertions of English expressions are highly likely, much more than insertions of phrases in the other language into an otherwise English utterance. As mentioned above, our handling of insertional switches is noisy, which might affect the results. A more theoretically based explanation of our failure to confirm the fourth hypothesis is grounded in theories of bilingualism which maintain that the two languages of a bilingual are both active simultaneously, and one of them has to be suppressed in order to yield words in the other (e.g., Finkbeiner et al., 2006). If this is indeed the case, then the origin of a shared item does not have to influence its likelihood to trigger a switch in any particular direction.

Figure 2: A multi-test plot depicting the results of 36 tests on the EN-DE corpus with shared English items.

**Summary.** Previous work on the triggering hypothesis focused solely on spoken dialogues and, consequently, was limited by the data available: typically, a few dozen dialogues spoken by a handful of participants in a single language pair. The extension to written CS, exemplified here, opens the door to investigations with vast amounts of data, but also raises interesting questions on the differences between written and spoken language and how CS is manifested in both modalities. Another interesting question has to do with the differences in the ways CS is manifested in closely-related language pairs (English-German, and to a lesser extent also English-Spanish) vs. in typologically unrelated language pairs (English-Arabic). A third dimension of comparison involves the differences in how CS is related to the status of the two languages involved: whether one of them is a minority language, a heritage language, or a lingua franca. A thorough investigation of all these issues is beyond the scope of this work; but we do note that among the five datasets used in this research, we did not find major differences between the (spoken dialogue) Bangor-Miami corpus and the (Twitter) SentiMix corpus, which are both in English-Spanish. This suggests that CS in written language, at least as it is used on very informal social media outlets, behaves similarly to CS in spoken language with respect to the triggering hypothesis. We _did_ find significant differences between the Denglisch dataset and all others. Like the Denglisch corpus, one of the Arabizi datasets we studied also consists of Reddit posts, so the peculiarity of the English-German corpus cannot be attributed to the source of the texts it includes.
As Osmelak and Wintner (2023) note, this dataset is different from most corpora of bilingual language: German is the official language of Germany, where English is widely understood but is not a minority or heritage language, nor a lingua franca of a sub-community. This may result in a unique pattern of CS, different from the one observed in other language pairs, and might explain the different results we obtain on this dataset. We conjecture that CS between English and German is special because of the status of the two languages in German-speaking countries, but more research is needed to confirm this conjecture. ## 8 Conclusion We investigated the triggering hypothesis using five datasets that reflect bilingual interactions in three language pairs. Employing standard yet powerful statistical methodology, we strongly confirmed three hypotheses: (1) that there is a strong association between code-switching and shared lexical items (proper names, but also culturally specific items that may lack translation equivalents in the other language); (2) that this association is stronger when the shared item precedes the switch point, rather than neighbors it; and (3) that the association diminishes as the shared item is farther away from the CS point. We were unable to confirm a fourth hypothesis, namely that shared items originating in language \(L_{1}\) are more likely to trigger a switch from \(L_{2}\) to \(L_{1}\) than the other way round. We do not know whether this is due to noise in our datasets or a bona fide property of bilingual language, rooted in cognitive-theoretical explanations; we leave this for future investigation. While the data used to establish the above results are unprecedented in terms of their size and diversity, at least in the psycholinguistic literature, we believe that they do not tell a full story. We would very much like to extend our datasets to more language pairs; to have sufficiently large datasets that would facilitate a comparative analysis of spoken vs. written data; and also enough data to compare CS between etymologically close languages vs. unrelated language pairs. We leave such investigations for future research. ## Ethical considerations This research was approved by the University of Haifa IRB. We used previously collected data that are freely available for research purposes, and redistribute those data according to their original licenses. All data are anonymized and we anticipate very minimal risk of abuse or dual use of the data. ## Limitations Our datasets are by no means representative, and any conclusion resulting from their processing is limited to the population of speakers they reflect. However, the magnitude of the data we used here, especially compared to the sizes of corpora used previously to derive theories of code-switching, is sufficient to guarantee the replicability of our findings on further data. AcknowledgementsWe are grateful to Melinda Fricke and Anat Prior for many discussions related to this work, and excellent ideas. We also thank the three anonymous TACL reviewers for their valuable feedback and suggestions. This work was supported in part by grant No. 2019785 from the United States-Israel Binational Science Foundation (BSF), and by grants No. 2007960, 2007656, 2125201 and 2040926 from the United States National Science Foundation (NSF).
2310.18567
Reliability modeling and statistical inference of accelerated degradation data with memory effects and unit-to-unit variability
Accelerated degradation testing (ADT) is an effective way to evaluate the lifetime and reliability of highly reliable products. Markovian stochastic processes are usually applied to describe the degradation process. However, the degradation processes of some products are non-Markovian due to the interaction with environments. Besides, owing to the differences in materials and manufacturing processes, products from the same population exhibit diverse degradation paths. Motivated by this issue, an ADT model with memory effects and unit-to-unit variability (UtUV) is proposed in this article. The memory effect in the degradation process is captured by the fractional Brownian motion (FBM) and the UtUV is considered in the acceleration model. Then, the lifetime and reliability under the normal operating condition are presented. To give an accurate estimation of the memory effect, a statistical inference method is devised based on the expectation maximization (EM) algorithm. The effectiveness of the proposed method is verified by a simulation case and a microwave case. It is shown that the estimation of the memory effect obtained by the EM algorithm is much more accurate than the traditional method. Moreover, without considering UtUV in the ADT model, the estimation of the memory effect can be highly biased. The proposed ADT model is superior in both deterministic degradation trend predictions and degradation boundary quantification compared to existing models.
Shi-Shun Chen, Xiao-Yang Li, Wenrui Xie
2023-10-28T02:28:51Z
http://arxiv.org/abs/2310.18567v2
# Accelerated degradation modeling considering long-range dependence and unit-to-unit variability ###### Abstract Accelerated degradation testing (ADT) is an effective way to evaluate the reliability and lifetime of highly reliable products. Existing studies have shown that the degradation processes of some products are non-Markovian with long-range dependence due to the interaction with environments. Besides, the degradation processes of products from the same population generally vary from each other due to various uncertainties. These two aspects bring great difficulty for ADT modeling. In this paper, we propose an improved ADT model considering both long-range dependence and unit-to-unit variability. To be specific, fractional Brownian motion (FBM) is utilized to capture the long-range dependence in the degradation process. The unit-to-unit variability among multiple products is captured by a random variable in the degradation rate function. To ensure the accuracy of the parameter estimations, a novel statistical inference method based on expectation maximization (EM) algorithm is proposed, in which the maximization of the overall likelihood function is achieved. The effectiveness of the proposed method is fully verified by a simulation case and a microwave case. The results show that the proposed model is more suitable for ADT modeling and analysis than existing ADT models. Accelerated degradation testing; fractional Brownian motion; unit-to-unit variability; EM algorithm; long-range dependence. ## 1 Introduction With the continuous progress of design and manufacturing technology, modern products tend to have extremely high reliability and long lifetime. At this point, accelerated degradation testing (ADT) is always employed, in which degradation data are obtained under more severe stress conditions. By modeling ADT data, the lifetime and reliability of products under normal conditions can be extrapolated and the saving of cost and time can be achieved. In general, the commonly used degradation models for ADT modeling are stochastic process models. Charki et al. [1] carried out ADT on photovoltaic modules under three temperature and humidity levels and employed the Wiener process to inference its lifetime under normal conditions. Santini et al. [2] utilized the gamma process to model the threshold voltage degradation of a commercial SiC MOSFET at different temperature and gate voltage levels and assessed its reliability under normal conditions. Yan et al. [3] recorded the tensile force degradation of the flax fiber reinforced composite laminates under different temperature and humidity levels and evaluated its lifetime and reliability under normal conditions using the Tweedie exponential dispersion process. More related works on ADT modeling can be found in [4, 5, 6]. A reasonable description of the degradation process in ADT modeling is essential to ensure the credibility of the lifetime and reliability assessments. Most existing ADT models hold a general assumption that performance degradation is a memoryless Markovian process with independent increments. However, for real engineering products, such as batteries or blast furnace walls [7], degradation typically exhibits non-Markovian properties, i.e., the future degradation increment is influenced by the current or historical states. The root cause of this phenomenon can be attributed to the interaction with environments [8]. 
Since traditional Markovian models are completely memoryless, they fail to describe the degradation of such dynamic systems, leading to an imprecise lifetime and reliability assessments. Hence, building up an ADT model considering the non-Markovian degradation is essential for enabling more accurate extrapolation of lifetime and reliability. To describe the non-Markovian property of a degradation process, some research introduced a state-dependent function to quantify the influence of the current state on the degradation rate and established different transformed stochastic processes. Giorgio et al. [10] established the transformed gamma process and derived the conditional distribution of the first passage time. Following this, Giorgio and Pulcini [9] further proposed the transformed Wiener process to describe the non-Markovian degradation process with possibly negative increments. Also, the transformed inverse Gaussian process [11] and the transformed exponential dispersion process [12] were presented and the corresponding parameter estimation methods based on the Bayesian framework were developed. However, for the above transformed stochastic processes, it is difficult to choose a reasonable form of the state-dependent function without any prior knowledge. Besides, they only focus on the influence of the current state on the future degradation increment, which may not be sufficient for describing the long-range dependence of the entire degradation path. Different from the state-dependent function, fractional Brownian motion (FBM) is an effective mathematical tool for modeling non-Markovian stochastic processes with long-range dependence [13, 14]. In the FBM, the degree of long-range dependence can be simply quantified by the Hurst exponent. To this end, FBM has drawn much attention in degradation modeling [15]. Xi et al. [16] first introduced the FBM to capture the long-range dependence in the degradation process and predicted the remaining useful life (RUL) of a turbofan engine. They showed that the degradation model based on the FBM is more suitable to describe the degradation process than that based on the Wiener process. Afterward, Xi et al. [17] proposed an improved degradation model by considering both the long-range dependence and unit-to-unit variability. Their results illustrated that catching the long-range dependence in the degradation data can improve the accuracy of the RUL prediction. Furthermore, Shao and Si [18] extended the degradation model based on the FBM by considering the measurement errors. Xi et al. [19] further developed a multivariate degradation model based on the FBM to describe multivariate stochastic degradation systems with long-range dependence. Besides, Zhang et al. [7] established a degradation model with long-range dependence and multiple modes, and presented a framework for automatic identification of the multiple modes in the degradation process. The above studies found that compared to the degradation models based on the Wiener process, the degradation models based on the FBM all obtained more accurate RUL predictions. Although scholars have made much progress in degradation modeling based on the FBM, the above degradation models fail to quantify the effect of external stresses on performance degradation. Hence, they are incapable of assessing the reliability and lifetime of highly reliable products under limited time and resources. 
To solve this problem, in a recent study, Si et al [20] constructed an ADT model based on the FBM, thus quantifying the effect of external stresses on performance degradation and considering the long-range dependence in the degradation process. Their results indicated that the ADT model considering long-range dependence could get more accurate lifetime assessments than the Markovian ADT model. Actually, the accuracy of lifetime extrapolation based on the ADT model depends not only on the reasonable description of the degradation process, but also on proper uncertainty quantification. In an ADT, we can observe that different items have different degradation paths, known as unit-to-unit variability. The unit-to-unit variability comes from the design, manufacturing and production stages of products [21]. For products with good consistency, ignoring unit-to-unit variability may have little effect on the accuracy of lifetime extrapolation. But for other products with poor consistency, unit-to-unit variability needs to be clearly distinguished and quantified, which otherwise provides little guidance for controlling these uncertainties and results in unreliable reliability and lifetime assessments in practical applications [22, 23]. Nonetheless, to the best of our knowledge, none of the existing ADT models with long-range dependence consider unit-to-unit variability. Motivated by the above limitation, this paper constructs an improved non-Markovian ADT model considering both long-range dependence and unit-to-unit variability. Specifically, the long-range dependence is described by the FBM and unit-to-unit variability is quantified by a random variable in the degradation rate function. On this basis, a novel statistical inference method based on the EM algorithm is proposed to improve the accuracy of parameter estimations. The main contributions are summarized as follows: * An improved non-Markovian accelerated degradation model is constructed, in which the long-range dependence and unit-to-unit variability are considered and quantified properly. * A novel statistical inference method to estimate unknown parameters in ADT models is presented based on expectation maximization (EM) algorithm, in which the maximization of the overall likelihood function is achieved. The organization of the paper is as follows. Firstly, Section 2 gives the preliminary about the FBM. Then, in Section 3, an improved non-Markovian accelerated degradation model is proposed, and a statistical inference method based on the EM algorithm is developed. After that, the practicability and the superiority of the proposed methodology are verified by a simulation case and an engineering case in Section 4 and Section 5, respectively. Finally, Section 6 concludes the work. ## 2 Preliminary to fractional Brownian motion According to [24], a standard FBM can be defined as \[B_{H}\left(t\right)=\frac{1}{\Gamma\left(H+\frac{1}{2}\right)}\int_{-\infty}^{t }W_{H}\left(t-s\right)\mathrm{d}B(s), \tag{1}\] where, \[W_{H}\left(t-s\right)=\begin{cases}\left(t-s\right)^{H-1/2},&0\leq s\leq t\\ \left(t-s\right)^{H-1/2}-\left(-s\right)^{H-1/2},&s<0;\end{cases} \tag{2}\] \(\Gamma(\cdot)\) is a gamma function formulated by \[\Gamma(x)=\int_{0}^{\infty}t^{x-1}e^{-t}\mathrm{d}t; \tag{3}\] \(B(\cdot)\) represents the standard Brownian motion; \(t\) is a given time; and \(H\) is the Hurst exponent with the range of \((0,\,1)\). 
For the standard FBM shown in (1), the autocorrelation function can be expressed by \[\mathbb{E}\left[B_{H}(t)B_{H}(s)\right]=\frac{1}{2}\left(t^{2H}+s^{2H}-\left|t-s\right|^{2H}\right). \tag{4}\] It can be seen from equation (4) that the memory effect is determined by \(H\). Depending on the value of \(H\), there are three different cases [17]: 1. When \(0<H<0.5\), the increments of FBM are negatively correlated, which implies that future increments tend to move in the direction opposite to previous increments. 2. When \(H=0.5\), the FBM degenerates into the BM, and there is no dependence in the process. 3. When \(0.5<H<1\), the increments of FBM are positively correlated, which implies that future increments tend to follow the previous direction. We can see that the standard BM is a special case of the standard FBM. In addition, for \(t\geq 0\), the standard FBM satisfies the following properties: 1. \(B_{H}(0)=0\); 2. \(B_{H}(t)-B_{H}(s)\sim B_{H}(t-s)\) for any \(s<t\), i.e., the increments are stationary; 3. \(E\left[B_{H}(t)\right]=0\) and \(E\left[B_{H}^{2}(t)\right]=t^{2H}\); 4. \(B_{H}(at)\sim a^{H}B_{H}(t)\) for \(a>0\). ## 3 Methodology ### Model construction As mentioned previously, the Markovian ADT models, which assume that current degradation is independent of historical degradation, fail to describe a degradation process with memory effects. Moreover, none of the existing ADT models can deal with the coexistence of non-Markovian degradation with long-range dependence and unit-to-unit variability. In this section, we construct an improved non-Markovian accelerated degradation model, in which the degradation process with long-range dependence is described by the FBM and the unit-to-unit variability is analyzed and quantified properly. #### 3.1.1 Non-Markovian Accelerated Degradation Model based on FBM In order to describe the non-Markovian degradation process with long-range dependence, FBM is introduced to construct the accelerated degradation model as follows: \[X(s,t)=f(s,t)+\sigma B_{H}(t), \tag{5}\] where \(X(\cdot)\) denotes the performance degradation; \(f(\cdot)\) is a function of stress \(s\) and time \(t\), which depicts the deterministic degradation trend of the product; and \(\sigma\) represents the diffusion coefficient. Without loss of generality, the initial degradation is assumed to be zero. If not, the shifted process \(X(s,t)-X(s,0)\) is used instead [25]. In essence, performance degradation is a substance or energy change process within the product [26]. According to the transport process in statistical physics [27], the change amount in substance or energy can be expressed as the change rate influenced by stresses multiplied by a monotonically increasing function of time. Accordingly, \(f(\cdot)\) in (5) can be decoupled as the performance degradation rate function \(e(s)\) multiplied by the time scale function \(\tau(t)\), i.e.: \[X(s,t)=e(s)\cdot\tau(t)+\sigma B_{H}(t), \tag{6}\] where \(\tau(t)=t^{\beta}\), \(\beta>0\), is a common assumption for the time-scale transformation function [25]. If \(\beta=1\), the degradation process is linear; otherwise it is nonlinear. The function \(e(s)\) is also known as the acceleration model, which indicates the relationship between the degradation rate and the stress [28].
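To make the model concrete, the sketch below draws a standard FBM path from the covariance implied by (4) via a Cholesky factorization and then assembles a degradation path according to (6), assuming the rate \(e(s)\) is given (its parametric form is specified next). It is a minimal illustration only: the parameter values are arbitrary and the helper names (`fbm_path`, `degradation_path`) are ours; for long paths the FFT-based simulator cited in the paper [34] is more efficient.

```python
import numpy as np

def fbm_path(times, H, rng):
    """Sample a standard FBM B_H at the given (positive, distinct) time points
    by Cholesky factorization of the covariance in (4)."""
    t = np.asarray(times, dtype=float)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))  # small jitter for stability
    return chol @ rng.standard_normal(len(t))

def degradation_path(times, e_s, beta, sigma, H, rng):
    """X(s, t) = e(s) * t**beta + sigma * B_H(t), following model (6)."""
    t = np.asarray(times, dtype=float)
    return e_s * t**beta + sigma * fbm_path(t, H, rng)

# Illustrative values only, not estimates from the paper.
rng = np.random.default_rng(0)
times = np.linspace(1.0, 100.0, 50)
x = degradation_path(times, e_s=0.05, beta=1.2, sigma=0.3, H=0.7, rng=rng)
```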
In general, \(e(s)\) is usually assumed as a log-linear function as follows: \[e\big{(}s\big{)}=\exp\Big{(}\alpha_{0}+\alpha_{1}s^{*}\Big{)} \tag{7}\] where \(\alpha_{0}>0\) and \(\alpha_{1}>0\) are constant unknown parameters; \(s^{*}\) is the standardized stress. For different acceleration models, \(s^{*}\) can be calculated as [29] \[s^{*}=\begin{cases}\dfrac{1\,/\,s_{0}-1\,/\,s}{1\,/\,s_{0}-1\,/\,s_{H}},&\text {Arrhenius model},\\ \dfrac{\ln s-\ln s_{0}}{\ln s_{H}-\ln s_{0}},&\text{Power law model},\\ \dfrac{s-s_{0}}{s_{H}-s_{0}},&\text{Exponential model},\end{cases} \tag{8}\] where \(s_{0}\) and \(s_{H}\) are the normal and the highest applied stress levels, respectively. The choice of acceleration models mainly depends on the type of accelerated stress. For instance, if the accelerated stress is temperature, the Arrhenius model is adopted; if the accelerated stress is humidity, the Power law model is adopted [30]. #### 3.1.2 Unit-to-unit Variability in Non-Markovian Accelerated Degradation Model As aforementioned, the unit-to-unit variability plays a significant role in ADT modeling. To be specific, for a certain product, due to manufacturing imperfections, each item may have its own performance degradation rate [31]. As a result, Since \(\tau(t)\) is usually identical for all tested items in a certain stress level, it is reasonable to deduce that each item has unique \(e(s)\) in (6). Without other prior information, we assume that the degradation rate of different items follows a normal probability distribution, i.e., \(e\big{(}s\big{)}\sim N\left(\mu_{e}\big{(}s\big{)},\sigma_{e}^{2}\big{(}s\big{)} \right),\ \ \mu_{e}\big{(}s\big{)}>0\,\ \sigma_{e}\big{(}s\big{)}>0\.\) Thus, (6) is transformed as \[X\big{(}s,t\big{)}=e\big{(}s\big{)}\cdot\tau\big{(}t\big{)}+\sigma B_{H}\big{(} t\big{)},e\big{(}s\big{)}\sim N\left(\mu_{e}\big{(}s\big{)},\sigma_{e}^{2} \big{(}s\big{)}\right). \tag{9}\] Considering that \(e(s)\) is a random variable, the degradation rate function in (6) can be rewritten as \[e(s)=\exp\left(\alpha_{0}+\alpha_{1}s^{*}\right)=a\exp\left(\alpha_{1}s^{*} \right), \tag{10}\] where \(a=\exp\left(\alpha_{0}\right)\) is a random variable with a normal probability distribution, i.e., \(a\sim N\left(\mu_{a},\sigma_{a}^{2}\right)\), \(\mu_{a}>0\,\ \sigma_{a}>0\). Then, the distribution parameter \(\ \mu_{e}\big{(}s\big{)}\) and \(\ \sigma_{e}\big{(}s\big{)}\) can be expressed as \[\mu_{e}(s)=\mu_{a}\exp\left(\alpha_{1}s^{*}\right),\sigma_{e}(s)=\sigma_{a} \exp\left(\alpha_{1}s^{*}\right). \tag{11}\] #### 3.1.3 Relationship of the proposed model to other ADT models The non-Markovian ADT model proposed in this paper is described by (8), (9), and (11), which is denoted as \(M_{0}\) in the following sections. It should be noted that the proposed ADT model has three special cases as follows: 1. When the unit-to-unit variability is not considered, i.e., \(a\) is a constant, it degenerates into the model in [20], which is denoted as \(M_{1}\) in the following sections. 2. When the non-Markovian degradation is not considered, i.e., \(H=0.5\), it degenerates into the model in [32], which is denoted as \(M_{2}\) in the following sections. 3. When the unit-to-unit variability and the non-Markovian degradation are both not considered, i.e., \(a\) is a constant and \(H=0.5\), it degenerates into the model in [33], which is denoted as \(M_{3}\) in the following sections. 
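The stress standardization (8) and the unit-specific rate (9)–(11) translate directly into a few lines of code, as sketched below. The numerical values are made up for illustration, and for the Arrhenius model the stresses are assumed to be absolute temperatures in Kelvin.

```python
import numpy as np

def standardized_stress(s, s0, sH, model="arrhenius"):
    """Standardize the accelerated stress to s*, following (8)."""
    if model == "arrhenius":       # temperature stress, in Kelvin
        return (1 / s0 - 1 / s) / (1 / s0 - 1 / sH)
    if model == "power":           # e.g. humidity stress
        return (np.log(s) - np.log(s0)) / (np.log(sH) - np.log(s0))
    if model == "exponential":
        return (s - s0) / (sH - s0)
    raise ValueError(f"unknown acceleration model: {model}")

def degradation_rate(a, alpha1, s_star):
    """Unit-specific degradation rate e(s) = a * exp(alpha1 * s*), as in (10)."""
    return a * np.exp(alpha1 * s_star)

# Draw unit-specific 'a' values to reflect unit-to-unit variability, cf. (9).
rng = np.random.default_rng(1)
mu_a, sigma_a, alpha1 = 1e-5, 2e-6, 2.5                       # illustrative values only
a_units = rng.normal(mu_a, sigma_a, size=6)
s_star = standardized_stress(s=393.15, s0=313.15, sH=393.15)  # 120 degC vs. normal 40 degC
rates = degradation_rate(a_units, alpha1, s_star)
```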
### Lifetime distribution and reliability analysis The main purpose of ADT is to extrapolate the lifetime distribution and reliability of the product at normal stress \(s_{0}\) by using the degradation data at high stress levels. A product is considered to fail when its performance degradation accumulates to a critical threshold. Let \(\omega\) be the critical threshold of the degradation process; then the failure time \(T\) of a product can be defined as the first passage time at which its degradation exceeds the pre-determined critical threshold value \(\omega\), that is \[T=\inf\left\{t\,\middle|\,X(s_{0},t)\geq\omega\right\}. \tag{12}\] The reliability of a product is the probability that a component or system can perform the required function at a given time \(t\) under specified operating conditions, which can be expressed as \[R(t)=\Pr\left\{T\geq t\right\}. \tag{13}\] Due to the complexity of the model, it is difficult to derive explicit expressions for (12) and (13). To solve this problem, we employ the Monte Carlo (MC) method to calculate the lifetime distribution and reliability of a product at normal stress \(s_{0}\). The detailed steps of the MC method are summarized in Algorithm 1. Specifically, to generate a standard FBM path, a fast Fourier transform (FFT) algorithm is utilized [34]. Detailed steps of the FFT algorithm can be found in [20]. ``` Step 1. Obtain the model parameters \(\widehat{\boldsymbol{\theta}}\), where \(\boldsymbol{\theta}=\left[\mu_{a},\sigma_{a}^{2},\alpha_{1},\beta,\sigma^{2},H\right]^{T}\) (using the maximum likelihood estimation (MLE) approach based on the EM algorithm developed in the subsequent Section 3.3); Step 2. Based on \(\widehat{\boldsymbol{\theta}}\), simulate \(N\) degradation paths at normal stress \(s_{0}\): For \(q\) in 1:\(N\) (\(N\) is the number of MC iterations and should be sufficiently large), 2.1 Simulate the \(q^{\text{th}}\) standard FBM path \(B_{H=\widehat{H}}^{q}(t)\) using the FFT algorithm; 2.2 Generate a unit-specific \(a^{q}\) according to its distribution \(N\left(\widehat{\mu}_{a},\widehat{\sigma}_{a}^{2}\right)\); 2.3 Obtain the degradation path as \(X_{q}(s_{0},t)=a^{q}\exp\left(\widehat{\alpha}_{1}s_{0}^{*}\right)\cdot t^{\widehat{\beta}}+\widehat{\sigma}B_{H=\widehat{H}}^{q}(t)\). Step 3. Compute the product lifetime distribution and reliability at normal stress \(s_{0}\): 3.1 Compute the failure times of the \(N\) simulated degradation paths using (12), denoted as \(T_{q}\) (\(1\leq q\leq N\)); 3.2 Obtain an empirical estimate of the product lifetime distribution at stress \(s_{0}\) as \(F\left(t\,\middle|\,\widehat{\boldsymbol{\theta}},s_{0}\right)=\frac{1}{N}\sum_{q=1}^{N}1_{t}(T_{q})\), where \(1_{t}(T_{q})=1\) if \(T_{q}\leq t\) and \(0\) otherwise; 3.3 Calculate the product reliability at the specific time \(t\) using (13). ``` **Algorithm 1** The MC method to compute the product lifetime distribution and reliability at normal stress \(s_{0}\)
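A compact Monte Carlo sketch of Algorithm 1 is given below. It reuses the `fbm_path` helper from the earlier snippet (Cholesky sampling instead of the FFT simulator used in the paper) and takes illustrative inputs; it returns the empirical lifetime CDF and reliability on the supplied time grid.

```python
import numpy as np

def mc_lifetime(theta, s0_star, times, omega, n_paths=10_000, seed=0):
    """Monte Carlo estimate of F(t) and R(t) at the normal stress level (Algorithm 1)."""
    mu_a, sigma_a2, alpha1, beta, sigma2, H = theta
    rng = np.random.default_rng(seed)
    t = np.asarray(times, dtype=float)
    failure_times = np.full(n_paths, np.inf)
    for q in range(n_paths):
        a_q = rng.normal(mu_a, np.sqrt(sigma_a2))            # unit-specific rate term
        x = a_q * np.exp(alpha1 * s0_star) * t**beta \
            + np.sqrt(sigma2) * fbm_path(t, H, rng)          # degradation path, cf. (6)
        crossed = np.nonzero(x >= omega)[0]                  # first passage over omega, cf. (12)
        if crossed.size:
            failure_times[q] = t[crossed[0]]
    cdf = np.array([(failure_times <= ti).mean() for ti in t])   # empirical F(t | theta, s0)
    reliability = 1.0 - cdf                                      # R(t) = Pr{T >= t}, cf. (13)
    return cdf, reliability
```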
### Parameter estimation Based on (9) and (11), the unknown parameters \(\boldsymbol{\theta}=\left[\mu_{a},\sigma_{a}^{2},\alpha_{1},\beta,\sigma^{2},H\right]^{T}\) in \(M_{0}\) need to be determined. In general, \(\boldsymbol{\theta}\) is estimated from the ADT data observed in the laboratory. In this section, a statistical inference method based on the EM algorithm is developed to estimate the unknown parameters from constant-stress ADT data. The observed ADT datum \(x_{lij}\) denotes the \(j^{\text{th}}\) degradation value of unit \(i\) under the \(l^{\text{th}}\) stress level, and \(t_{lij}\) is the corresponding measurement time, \(l=1,2,\ldots,k\); \(i=1,2,\ldots,n_{l}\); \(j=1,2,\ldots,m_{li}\), where \(k\) is the number of stress levels, \(n_{l}\) is the number of test items under the \(l^{\text{th}}\) stress level, and \(m_{li}\) is the number of measurements for unit \(i\) under the \(l^{\text{th}}\) stress level. Hereby, the degradation observations of the \(i^{\text{th}}\) item tested at the \(l^{\text{th}}\) stress level can be denoted as \(\boldsymbol{x}_{li}=\left[x_{li1},x_{li2},\cdots,x_{lim_{li}}\right]^{T}\). Furthermore, we define \(\boldsymbol{\tau}_{li}=\left[\tau(t_{li1}),\tau(t_{li2}),\cdots,\tau(t_{lim_{li}})\right]^{T}\) and \(\boldsymbol{B}_{H}^{li}=\left[B_{H}(t_{li1}),B_{H}(t_{li2}),\cdots,B_{H}(t_{lim_{li}})\right]^{T}\). Since each product has a unique degradation rate, for the \(i^{\text{th}}\) item tested at the \(l^{\text{th}}\) stress level, we have \[\mathbf{x}_{li}=a_{li}\boldsymbol{\psi}_{li}+\sigma\boldsymbol{B}_{H}^{li}, \tag{14}\] where \[\boldsymbol{\psi}_{li}=\exp\left(\alpha_{1}s_{l}^{*}\right)\boldsymbol{\tau}_{li}. \tag{15}\] According to the properties of FBM, \(\mathbf{x}_{li}\) follows a multivariate normal distribution given by \[\mathbf{x}_{li}\sim N\left(\mu_{a}\boldsymbol{\psi}_{li},\boldsymbol{\Xi}_{li}\right), \tag{16}\] where \(\boldsymbol{\Xi}_{li}\) is an \(m_{li}\times m_{li}\) covariance matrix whose \((u,v)^{\text{th}}\) entry is \[\left(\boldsymbol{\Xi}_{li}\right)_{uv}=\sigma_{e}^{2}(s_{l})\,\tau(t_{liu})\tau(t_{liv})+\frac{\sigma^{2}}{2}\left(t_{liu}^{2H}+t_{liv}^{2H}-\left|t_{liu}-t_{liv}\right|^{2H}\right). \tag{17}\] Hence, the likelihood function given \(\mathbf{x}_{li}\) can be derived as \[L_{li}\left(\boldsymbol{\theta}\,\middle|\,\mathbf{x}_{li}\right)=\left(2\pi\right)^{-\frac{m_{li}}{2}}\left|\boldsymbol{\Xi}_{li}\right|^{-\frac{1}{2}}\exp\left[-\frac{1}{2}\left(\mathbf{x}_{li}-\mu_{a}\boldsymbol{\psi}_{li}\right)^{T}\boldsymbol{\Xi}_{li}^{-1}\left(\mathbf{x}_{li}-\mu_{a}\boldsymbol{\psi}_{li}\right)\right]. \tag{18}\] Next, based on (18), the log-likelihood function given all the observations \(\mathbf{x}\) can be expressed as \[\ln L\left(\boldsymbol{\theta}\,\middle|\,\mathbf{x}\right)=\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\ln L_{li}\left(\boldsymbol{\theta}\,\middle|\,\mathbf{x}_{li}\right). \tag{19}\] In order to obtain the estimate of \(\boldsymbol{\theta}\), we need to maximize the log-likelihood function (19). However, the unobservability of \(a_{li}\) makes the direct constrained optimization of the log-likelihood function very tough, and it is generally hard to converge to a solution. Therefore, a two-step MLE method was proposed to estimate the unknown parameters in ADT models considering unit-to-unit variability [25, 32]. Nevertheless, the two-step MLE method performs local maximization of two separate likelihood functions rather than maximizing the overall likelihood function. This can bias the parameter estimates away from the actual data, thus affecting the precision of reliability and lifetime assessments, especially when the degradation process has a complex property like long-range dependence.
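For reference, the marginal likelihood (16)–(19) that both estimation approaches ultimately target can be evaluated directly. The sketch below uses our own conventions (one call per unit; `t` and `x` are that unit's measurement times and observations, `s_star` its standardized stress):

```python
import numpy as np

def unit_loglik(x, t, s_star, theta):
    """Marginal log-likelihood ln L_li(theta | x_li) of one unit, cf. (16)-(18)."""
    mu_a, sigma_a2, alpha1, beta, sigma2, H = theta
    t = np.asarray(t, dtype=float)
    tau_t = t**beta
    psi = np.exp(alpha1 * s_star) * tau_t                    # psi_li from (15)
    sigma_e2 = sigma_a2 * np.exp(2 * alpha1 * s_star)        # sigma_e^2(s_l) from (11)
    fbm_cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                     - np.abs(t[:, None] - t[None, :])**(2 * H))
    Xi = sigma_e2 * np.outer(tau_t, tau_t) + sigma2 * fbm_cov    # covariance (17)
    r = np.asarray(x, dtype=float) - mu_a * psi
    _, logdet = np.linalg.slogdet(Xi)
    return -0.5 * (len(r) * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(Xi, r))

# The total log-likelihood (19) is the sum of unit_loglik over all units and stress levels.
```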
To address the limitations of the two-step MLE method, we propose a parameter estimation method based on the EM algorithm [35]. The main idea of the EM algorithm is to substitute the unobservable variables with their conditional expectations, so that the parameter updates at each step can be obtained in a closed or simple form [36]. The EM algorithm consists of two steps: the expectation step (E-step) and the maximization step (M-step). During the E-step, the expectation of the log-likelihood is computed with respect to the latent variables, and the M-step determines the maximizer of this expected likelihood. The two steps are repeated iteratively until satisfactory convergence is achieved. We denote \(\boldsymbol{\Omega}=\left\{a_{11},\ldots,a_{1n_{1}},a_{21},\ldots,a_{kn_{k}}\right\}\); then, given the complete data including \(\mathbf{x}\) and \(\boldsymbol{\Omega}\), the complete log-likelihood function can be written as \[\ln L(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\Omega})=\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\left[\ln p\left(\mathbf{x}_{li}\,|\,a_{li},\boldsymbol{\theta}\right)+\ln p\left(a_{li}\,|\,\boldsymbol{\theta}\right)\right], \tag{20}\] where \(p\left(\mathbf{x}_{li}\,|\,a_{li},\boldsymbol{\theta}\right)\) denotes the probability density function (PDF) of \(\mathbf{x}_{li}\) given \(a_{li}\) and \(\boldsymbol{\theta}\), and \(p\left(a_{li}\,|\,\boldsymbol{\theta}\right)\) denotes the PDF of \(a_{li}\) given \(\boldsymbol{\theta}\). According to the ADT models (9) and (11), (20) can be expanded as \[\ln L(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\Omega})=-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\left\{\left(m_{li}+1\right)\ln(2\pi)+\ln\left|\mathbf{Q}_{li}\right|+\ln\sigma_{a}^{2}+\left(\mathbf{x}_{li}-a_{li}\boldsymbol{\psi}_{li}\right)^{T}\mathbf{Q}_{li}^{-1}\left(\mathbf{x}_{li}-a_{li}\boldsymbol{\psi}_{li}\right)+\frac{\left(a_{li}-\mu_{a}\right)^{2}}{\sigma_{a}^{2}}\right\}, \tag{21}\] where \(\mathbf{Q}_{li}\) is an \(m_{li}\times m_{li}\) covariance matrix whose \((u,v)^{\text{th}}\) entry is \[\left(\mathbf{Q}_{li}\right)_{uv}=\frac{\sigma^{2}}{2}\left(t_{liu}^{2H}+t_{liv}^{2H}-\left|t_{liu}-t_{liv}\right|^{2H}\right). \tag{22}\] To facilitate the following statistical inference, we re-parameterize \(\boldsymbol{\Sigma}_{li}=\mathbf{Q}_{li}/\sigma^{2}\). Then, (21) can be rewritten as \[\begin{split}\ln L(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\Omega})=&-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\left(m_{li}+1\right)\ln(2\pi)-\frac{1}{2}\sum_{l=1}^{k}n_{l}\ln\sigma_{a}^{2}-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}m_{li}\ln\sigma^{2}-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\ln\left|\boldsymbol{\Sigma}_{li}\right|\\ &-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\left[\frac{\left(\mathbf{x}_{li}-a_{li}\boldsymbol{\psi}_{li}\right)^{T}\boldsymbol{\Sigma}_{li}^{-1}\left(\mathbf{x}_{li}-a_{li}\boldsymbol{\psi}_{li}\right)}{\sigma^{2}}+\frac{\left(a_{li}-\mu_{a}\right)^{2}}{\sigma_{a}^{2}}\right].\end{split} \tag{23}\] Considering the update of \(\boldsymbol{\theta}\) during the EM iterations, let \(\boldsymbol{\theta}^{(p)}=\left[\mu_{a}^{(p)},\sigma_{a}^{2(p)},\alpha_{1}^{(p)},\beta^{(p)},\sigma^{2(p)},H^{(p)}\right]^{T}\) denote the estimates in the \(p^{\text{th}}\) step. To calculate the expectation of the log-likelihood function (23), we need the first and second moments of \(a_{li}\) conditional on \(\mathbf{x}_{li}\) and \(\boldsymbol{\theta}^{(p)}\).
It is easily found that the posterior distribution of \(a_{li}\) conditional on \(\mathbf{x}_{li}\) and \(\boldsymbol{\theta}^{(p)}\) is still normal [37]. In the Bayesian framework, the posterior distribution of \(a_{li}\) can be obtained as follows: \[\begin{split} p\left(a_{li}\,|\,\mathbf{x}_{li},\boldsymbol{\theta}^{(p)}\right)&\propto p\left(\mathbf{x}_{li}\,|\,a_{li},\boldsymbol{\theta}^{(p)}\right)p\left(a_{li}\,|\,\boldsymbol{\theta}^{(p)}\right)\\ &\propto\exp\left[-\frac{1}{2}\left(\left(\frac{\left[\boldsymbol{\psi}_{li}^{(p)}\right]^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}}{\sigma^{2(p)}}+\frac{1}{\sigma_{a}^{2(p)}}\right)a_{li}^{2}-2\left(\frac{\mathbf{x}_{li}^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}}{\sigma^{2(p)}}+\frac{\mu_{a}^{(p)}}{\sigma_{a}^{2(p)}}\right)a_{li}\right)\right]\\ &\propto\exp\left\{-\frac{\left[a_{li}-\left(\mathbf{x}_{li}^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}\sigma_{a}^{2(p)}+\mu_{a}^{(p)}\sigma^{2(p)}\right)\big{/}\left(\left[\boldsymbol{\psi}_{li}^{(p)}\right]^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}\sigma_{a}^{2(p)}+\sigma^{2(p)}\right)\right]^{2}}{2\sigma^{2(p)}\sigma_{a}^{2(p)}\big{/}\left(\left[\boldsymbol{\psi}_{li}^{(p)}\right]^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}\sigma_{a}^{2(p)}+\sigma^{2(p)}\right)}\right\}\\ &\sim N\left(\mu_{li}^{(p)},\sigma_{li}^{2(p)}\right),\end{split} \tag{24}\] with \[\mu_{li}^{(p)}=\frac{\mathbf{x}_{li}^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}\sigma_{a}^{2(p)}+\mu_{a}^{(p)}\sigma^{2(p)}}{\left[\boldsymbol{\psi}_{li}^{(p)}\right]^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}\sigma_{a}^{2(p)}+\sigma^{2(p)}}, \tag{25}\] \[\sigma_{li}^{2(p)}=\frac{\sigma^{2(p)}\sigma_{a}^{2(p)}}{\left[\boldsymbol{\psi}_{li}^{(p)}\right]^{T}\left[\boldsymbol{\Sigma}_{li}^{(p)}\right]^{-1}\boldsymbol{\psi}_{li}^{(p)}\sigma_{a}^{2(p)}+\sigma^{2(p)}}, \tag{26}\] where \(\boldsymbol{\psi}_{li}^{(p)}\) and \(\boldsymbol{\Sigma}_{li}^{(p)}\) denote \(\boldsymbol{\psi}_{li}\) and \(\boldsymbol{\Sigma}_{li}\) evaluated at \(\boldsymbol{\theta}^{(p)}\). In the following, we use the EM algorithm to estimate \(\boldsymbol{\theta}\) iteratively. To be specific, the E-step computes the expectation \(Q\left(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\theta}^{(p)}\right)\) of \(\ln L(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\Omega})\) with respect to \(\boldsymbol{\Omega}\).
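A sketch of the E-step quantities (25)–(26) for one unit is given below; `Sigma` is \(\boldsymbol{\Sigma}_{li}^{(p)}\) (the FBM covariance with \(\sigma^{2}\) factored out) and `psi` is \(\boldsymbol{\psi}_{li}^{(p)}\), both built with the current parameter iterate (variable names are ours):

```python
import numpy as np

def posterior_moments(x, psi, Sigma, mu_a, sigma_a2, sigma2):
    """Posterior mean and variance of a_li given x_li, following (25)-(26)."""
    Sinv_psi = np.linalg.solve(Sigma, psi)
    quad = psi @ Sinv_psi                  # psi^T Sigma^{-1} psi
    lin = np.asarray(x) @ Sinv_psi         # x^T Sigma^{-1} psi
    denom = quad * sigma_a2 + sigma2
    mu_post = (lin * sigma_a2 + mu_a * sigma2) / denom     # (25)
    var_post = sigma2 * sigma_a2 / denom                   # (26)
    return mu_post, var_post
```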
According to (23), (25) and (26), we have \[\begin{split} Q\left(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\theta}^{(p)}\right)&=E_{\boldsymbol{\Omega}\,|\,\mathbf{x},\boldsymbol{\theta}^{(p)}}\left[\ln L(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\Omega})\right]\\ &=-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\left(m_{li}+1\right)\ln(2\pi)-\frac{1}{2}\sum_{l=1}^{k}n_{l}\ln\sigma_{a}^{2}-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}m_{li}\ln\sigma^{2}-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\ln\left|\boldsymbol{\Sigma}_{li}\right|\\ &\quad-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\frac{\left[\mu_{li}^{(p)}\right]^{2}+\sigma_{li}^{2(p)}-2\mu_{li}^{(p)}\mu_{a}+\mu_{a}^{2}}{\sigma_{a}^{2}}\\ &\quad-\frac{1}{2}\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\frac{\mathbf{x}_{li}^{T}\boldsymbol{\Sigma}_{li}^{-1}\mathbf{x}_{li}-2\mu_{li}^{(p)}\mathbf{x}_{li}^{T}\boldsymbol{\Sigma}_{li}^{-1}\boldsymbol{\psi}_{li}+\left(\left[\mu_{li}^{(p)}\right]^{2}+\sigma_{li}^{2(p)}\right)\boldsymbol{\psi}_{li}^{T}\boldsymbol{\Sigma}_{li}^{-1}\boldsymbol{\psi}_{li}}{\sigma^{2}}.\end{split} \tag{27}\] In the M-step, by taking the first partial derivatives of \(Q\left(\boldsymbol{\theta}\,|\,\mathbf{x},\boldsymbol{\theta}^{(p)}\right)\) with respect to \(\mu_{a}\), \(\sigma_{a}^{2}\) and \(\sigma^{2}\), respectively, and setting each derivative to zero, we have \[\mu_{a}^{(p+1)}=\frac{\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\mu_{li}^{(p)}}{\sum_{l=1}^{k}n_{l}}, \tag{28}\] \[\sigma_{a}^{2(p+1)}=\frac{\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\left(\left[\mu_{li}^{(p)}\right]^{2}+\sigma_{li}^{2(p)}-2\mu_{li}^{(p)}\mu_{a}^{(p+1)}+\left[\mu_{a}^{(p+1)}\right]^{2}\right)}{\sum_{l=1}^{k}n_{l}}, \tag{29}\] \[\sigma^{2(p+1)}=\frac{\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}\left(\mathbf{x}_{li}^{T}\boldsymbol{\Sigma}_{li}^{-1}\mathbf{x}_{li}-2\mu_{li}^{(p)}\mathbf{x}_{li}^{T}\boldsymbol{\Sigma}_{li}^{-1}\boldsymbol{\psi}_{li}+\left(\left[\mu_{li}^{(p)}\right]^{2}+\sigma_{li}^{2(p)}\right)\boldsymbol{\psi}_{li}^{T}\boldsymbol{\Sigma}_{li}^{-1}\boldsymbol{\psi}_{li}\right)}{\sum_{l=1}^{k}\sum_{i=1}^{n_{l}}m_{li}}. \tag{30}\] Note that \(\sigma^{2(p+1)}\) depends on the estimates of \(\alpha_{1}\), \(\beta\) and \(H\) through \(\boldsymbol{\psi}_{li}\) and \(\boldsymbol{\Sigma}_{li}\). Hence, by substituting \(\mu_{a}^{(p+1)}\), \(\sigma_{a}^{2(p+1)}\) and \(\sigma^{2(p+1)}\) into (27), the estimates \(\alpha_{1}^{(p+1)}\), \(\beta^{(p+1)}\) and \(H^{(p+1)}\) can be obtained by maximizing the resulting profile likelihood function through a three-dimensional search. Next, by substituting \(\alpha_{1}^{(p+1)}\), \(\beta^{(p+1)}\) and \(H^{(p+1)}\) into (30), we obtain the estimate \(\sigma^{2(p+1)}\). The iterations are terminated when the relative change of the parameter estimates falls below a predefined threshold \(\mathcal{E}\), or when the number of iterations reaches a fixed limit. The complete procedure is summarized in Algorithm 2. ``` Step 1. Initialize \(\boldsymbol{\theta}^{(0)}\) and set \(p=0\); Step 2. E-step: for each unit \(i\) at stress level \(l\), compute \(\mu_{li}^{(p)}\) and \(\sigma_{li}^{2(p)}\) using (25) and (26); Step 3. M-step: update \(\mu_{a}^{(p+1)}\) and \(\sigma_{a}^{2(p+1)}\) using (28) and (29), obtain \(\alpha_{1}^{(p+1)}\), \(\beta^{(p+1)}\) and \(H^{(p+1)}\) by maximizing (27) through a three-dimensional search, and then update \(\sigma^{2(p+1)}\) using (30); Step 4. If the relative change of the parameter estimates is below \(\mathcal{E}\) or the maximum number of iterations is reached, stop and output \(\widehat{\boldsymbol{\theta}}\); otherwise set \(p=p+1\) and return to Step 2. ``` **Algorithm 2** The EM algorithm for estimating the unknown parameters \(\boldsymbol{\theta}\)
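Given the E-step moments, the updates (28)–(30) have closed forms. A minimal sketch follows; the `units` container and its field layout are our own convention, with each entry holding one unit's arrays and posterior moments computed at the current \(\alpha_{1}\), \(\beta\), \(H\):

```python
import numpy as np

def m_step_closed_form(units):
    """Closed-form M-step updates (28)-(30).
    units: list of tuples (x, psi, Sigma, mu_post, var_post), one per tested unit,
    where x, psi are NumPy vectors and Sigma is that unit's Sigma_li matrix."""
    n_units = len(units)
    mu_a = sum(m for _, _, _, m, _ in units) / n_units                        # (28)
    sigma_a2 = sum((m - mu_a)**2 + v for _, _, _, m, v in units) / n_units    # (29)
    numer, n_meas = 0.0, 0
    for x, psi, Sigma, m, v in units:
        Sinv_x = np.linalg.solve(Sigma, x)
        Sinv_psi = np.linalg.solve(Sigma, psi)
        numer += x @ Sinv_x - 2 * m * (x @ Sinv_psi) + (m**2 + v) * (psi @ Sinv_psi)
        n_meas += len(x)
    sigma2 = numer / n_meas                                                   # (30)
    return mu_a, sigma_a2, sigma2

# alpha1, beta and H are then refreshed by the three-dimensional search over the
# profile of (27) described in Algorithm 2, re-using these closed-form plug-ins.
```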
## 4 Simulation case ### Results and analysis In this case, the initial value of \(\boldsymbol{\theta}\) is taken from the estimations of the two-step MLE method. The termination threshold \(\mathcal{E}\) for the EM algorithm iteration is set as 0.01. Subsequently, according to Algorithm 2, we obtain the estimations of the unknown parameters as shown in Table 2. From Table 2, it can be seen that the estimations are basically close to the true values. To further exhibit the effectiveness of the proposed methodology, the predicted deterministic degradation trends, the upper and lower boundaries of 10000 simulated degradation paths at the 90% confidence level, and the actual observations under 80\({}^{\circ}\)C, 100\({}^{\circ}\)C and 120\({}^{\circ}\)C are plotted in Fig. 1. As can be seen in Fig. 1, under all accelerated stress levels, the predicted deterministic degradation trends lie within the observations, and the boundaries nicely envelop the observations. Next, the predicted deterministic degradation trend with time and temperature is shown in Fig. 2, which describes the degradation accumulation at a given time and temperature. It can be seen that degradation increases with time and temperature. Finally, according to Algorithm 1, when the failure threshold of the product is 5, we can estimate the lifetime distribution and reliability of the product under the normal use condition (40\({}^{\circ}\)C), as shown in Fig. 3 and Fig. 4, respectively. From Fig. 4, we can get the reliability of the product for a given operating time. For example, when the product is operated for 4613 hours, its reliability is 0.99. The results can provide guidance for the development of maintenance and warranty strategies. Fig. 2: Deterministic degradation trend with time and temperature. Fig. 3: Lifetime distribution of the product under normal stress level (40\({}^{\circ}\)C). ### Discussions #### 4.3.1 Comparison of parameter estimation methods In order to illustrate the superiority of the proposed parameter estimation method compared to the existing two-step MLE method [25, 32], in this subsection, nine combinations of (\(N\),\(M\)) are chosen to generate simulated ADT data. Other simulation settings are the same as in Table 1.
For each combination, we obtain the parameter estimations by using these two methods. Then, we compare the maximum value of the log-likelihood function \(l_{\text{max}}\) and the relative errors (RE) between the estimations and the true values, so as to achieve a quantitative comparison. The RE of the parameter estimation is computed by \[RE=\frac{Estimated\_value-True\_value}{True\_value}. \tag{31}\] Obviously, the closer \(RE\) is to 0, the higher the accuracy of the parameter estimations. Table 3 gives the estimations of unknown parameters for two methods under nine combinations of sample sizes and measurements. Fig. 5 illustrates the RE for two methods under different combinations. From Table 3 and Fig. 5, we can get the following results: * For the two-step MLE method, when the number of measurements is small, the estimation of \(H\) is highly biased. As the number of measurements increases, the estimation of \(H\) comes closer to the true value but is still unsatisfactory. On the other hand, for the EM algorithm, when the number of measurements is small, the estimation of \(H\) is acceptable. As the number of measurements increases, the estimation of \(H\) is very close to the true value. This illustrates the significance of maximizing the overall likelihood function for parameter estimations, especially when degradation has the long-range dependence property and the number of measurements is small. * As the sample sizes and measurements increase, the \(RE\) obtained by these two estimation methods both decreases, indicating that increasing the information from the ADT experiment can improve the accuracy of parameter estimations effectively. * In all combinations, the \(l_{\text{max}}\) and \(RE\) obtained by the EM algorithm are better than those of the two-step MLE method, proving the superiority of the proposed estimation method. Figure 4: Reliability of the product under normal stress level (40\({}^{\circ}\)C). 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline (_N,M_) & Method & \(\mu_{a}\) & \(\sigma_{a}\) & \(\alpha_{i}\) & \(\beta\) & \(\sigma\) & \(H\) & \(I_{\rm max}\) & _RE_ \\ \hline \multirow{2}{*}{(6,10)} & two-step MLE & 9.119e-6 & 1.154e-6 & 2.243 & 1.539 & 0.145 & 2.948e-8 & 103.838 & 2.089 \\ & EM algorithm & 9.144e-6 & 9.559e-7 & 2.286 & 1.534 & 0.104 & 0.073 & 105.613 & 1.025 \\ \hline \multirow{2}{*}{(6,20)} & two-step MLE & 1.142e-5 & 2.650e-6 & 2.333 & 1.505 & 0.151 & 0.029 & 146.232 & 1.757 \\ & EM algorithm & 1.142e-5 & 2.635e-6 & 2.329 & 1.506 & 0.116 & 0.082 & 149.298 & 0.871 \\ \hline \multirow{2}{*}{(6,30)} & two-step MLE & 8.852e-6 & 1.345e-6 & 2.481 & 1.515 & 0.129 & 0.047 & 281.194 & 1.279 \\ & EM algorithm & 8.852e-6 & 1.340e-6 & 2.482 & 1.515 & 0.109 & 0.079 & 283.219 & 0.762 \\ \hline \multirow{2}{*}{(12,10)} & two-step MLE & 1.303e-5 & 2.702e-6 & 2.121 & 1.494 & 0.156 & 1.134e-8 & 167.828 & 2.376 \\ & EM algorithm & 1.302e-5 & 2.640e-6 & 2.135 & 1.492 & 0.138 & 0.033 & 169.207 & 1.830 \\ \hline \multirow{2}{*}{(12,20)} & two-step MLE & 1.320e-5 & 2.550e-6 & 2.215 & 1.492 & 0.138 & 0.031 & 357.171 & 1.781 \\ & EM algorithm & 1.320e-5 & 2.527e-6 & 2.214 & 1.493 & 0.101 & 0.091 & 363.958 & 0.802 \\ \hline \multirow{2}{*}{(12,30)} & two-step MLE & 1.110e-5 & 1.811e-6 & 2.470 & 1.491 & 0.124 & 0.059 & 528.113 & 0.880 \\ & EM algorithm & 1.110e-5 & 1.802e-6 & 2.470 & 1.491 & 0.103 & 0.096 & 532.326 & 0.297 \\ \hline \multirow{2}{*}{(18,10)} & two-step MLE & 1.182e-5 & 2.915e-6 & 2.297 & 1.494 & 0.164 & 1.584e-8 & 218.202 & 2.364 \\ & EM algorithm & 1.185e-5 & 2.727e-6 & 2.322 & 1.490 & 0.091 & 0.123 & 228.288 & 0.946 \\ \hline \multirow{2}{*}{(18,20)} & two-step MLE & 9.275e-6 & 2.153e-6 & 2.692 & 1.493 & 0.154 & 0.016 & 495.731 & 1.610 \\ & EM algorithm & 9.273e-6 & 2.144e-6 & 2.689 & 1.493 & 0.110 & 0.081 & 508.612 & 0.515 \\ \hline \multirow{2}{*}{(18,30)} & two-step MLE & 8.724e-6 & 2.085e-6 & 2.581 & 1.505 & 0.121 & 0.062 & 784.706 & 0.795 \\ & EM algorithm & 8.724e-6 & 2.080e-6 & 2.581 & 1.506 & 0.101 & 0.098 & 790.515 & 0.234 \\ \hline \end{tabular} Note: \(N\) is the sample size for each stress level, and \(M\) is the number of measurements for each sample. \end{table} Table 3: Parameter estimations for two methods under nine combinations of sample sizes and measurements. #### 4.3.2 Comparison of ADT models In order to illustrate the significance of the proposed ADT model compared to other ADT models mentioned in Section 3.1.3, in this subsection, we set the combination of (\(N\),\(M\)) as (12,30) to generate ADT simulation data. Other simulation settings are the same as in Table 1. For the model \(M_{0}\) and \(M_{2}\), the statistical inference method based on EM algorithm is used for parameter estimations. For the model \(M_{1}\) and \(M_{3}\), the parameters are obtained by directly maximizing the log-likelihood function. Then, we calculate the \(l_{\text{max}}\) and the Akaike information criterion (AIC) for each ADT model, so as to achieve a quantitative comparison. The AIC is expressed as [38] \[AIC=-2l_{\text{max}}+2n_{p}, \tag{32}\] where \(n_{p}\) is the number of unknown parameters. The AIC quantifies how well the model fits the data. The lower the AIC value is, the better the model fits. Table 4 gives the estimations of unknown parameters for the four ADT models. Clearly, from \(l_{\text{max}}\) and AIC values, \(M_{0}\) is the most suitable model, then is model \(M_{2}\), followed by model \(M_{1}\), and the worst is model \(M_{3}\). 
The reason is that the models \(M_{2}\) and \(M_{1}\) only partially consider the unit-to-unit variability and the long-range dependence, respectively. Therefore, they both perform worse than the model \(M_{0}\), which considers the unit-to-unit variability and the long-range dependence simultaneously. Moreover, model \(M_{1}\) even infers a wrong degradation law, i.e., that the degradation process exhibits positively correlated long-range dependence, which is due to the lack of consideration of the unit-to-unit variability. As for the model \(M_{3}\), neither the unit-to-unit variability nor the long-range dependence is considered, which leads to the poorest fitting results. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Model & \(\mu_{a}\) & \(\sigma_{a}\) & \(\alpha_{1}\) & \(\beta\) & \(\sigma\) & \(H\) & \(l_{\text{max}}\) & AIC \\ \hline \(M_{0}\) & 1.110e-5 & 1.802e-6 & 2.470 & 1.491 & 0.103 & 0.096 & 532.326 & -1052.65 \\ \(M_{1}\) & 9.800e-6 & / & 2.404 & 1.513 & 0.013 & 0.561 & 308.210 & -606.42 \\ \(M_{2}\) & 1.031e-5 & 1.539e-6 & 2.521 & 1.495 & 0.016 & / & 392.799 & -775.60 \\ \(M_{3}\) & 9.063e-6 & / & 2.670 & 1.501 & 0.016 & / & 266.943 & -525.89 \\ \hline \end{tabular} \end{table} Table 4: Parameter estimations for the four ADT models. Figure 5: RE for two methods under nine combinations of sample sizes and measurements. In addition, the lifetime distribution and reliability of the product under the normal stress level (40\({}^{\circ}\)C) for the four ADT models, together with the real values, are illustrated in Fig. 6 and Fig. 7, respectively. It is evident that model \(M_{0}\) gives the most accurate inference results compared to the other ADT models, especially in the high-reliability range of practical concern (i.e., reliability between 0.9 and 1), proving the superiority of the proposed ADT model. ## 5 Microwave case In this section, a real ADT dataset of a microwave electronic assembly is used to illustrate our method and compare it with other ADT models. ### Data description A tuner is a long-lifetime microwave electronic assembly designed primarily for reception, filtering, amplification and gain control of cable signals. Previous investigations into tuner failures have shown the significant impact of temperature on the degradation process [39]. To evaluate the reliability of the tuner, a constant-stress ADT based on temperature was performed under 4 stress levels (55\({}^{\circ}\)C, 70\({}^{\circ}\)C, 80\({}^{\circ}\)C and 85\({}^{\circ}\)C). The key performance parameter, noise, was measured every ten hours by a computerized measuring system. The numbers of measurements under the four stress levels are 50, 35, 31 and 23, respectively. Fig. 8 illustrates the detailed degradation data. ### Modeling and analysis Since the accelerated stress is temperature, the Arrhenius model is selected to construct the acceleration model. Then, stress normalization is performed according to (8), where the normal stress level is 20\({}^{\circ}\)C. Next, based on the two-step MLE method [25, 32], the initial values of the unknown parameters are set as \(\boldsymbol{\theta}^{(0)}=\left[\text{1e-6, 2e-7, 4, 1.5, 0.01, 1e-7}\right]^{T}\), and the termination threshold \(\mathcal{E}\) for the EM algorithm iteration is set as 0.01. Subsequently, according to Algorithm 2, we obtain the estimations of the unknown parameters as shown in Table 5. From Table 5, it can be deduced that the degradation of the electronic assembly exhibits negatively correlated long-range dependence because \(H<0.5\).
Then, according to Algorithm 1, when the failure threshold is 7, we can estimate the lifetime distribution and reliability of the electronic assembly under the normal use condition (20\({}^{\circ}\)C), as illustrated in Fig. 9 and Fig. 10, respectively. From Fig. 10, we can get some valuable information for the development of maintenance and warranty strategies: when the electronic assembly is operated for 8639 hours, its reliability is 0.99; when it is operated for 9384 hours, its reliability is 0.9. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Parameters & \(\mu_{a}\) & \(\sigma_{a}\) & \(\alpha_{1}\) & \(\beta\) & \(\sigma\) & \(H\) \\ \hline Values & 1.073e-6 & 1.894e-7 & 4.667 & 1.691 & 0.073 & 2.497e-8 \\ \hline \hline \end{tabular} \end{table} Table 5: Estimated results of \(\boldsymbol{\theta}\) in the microwave case. Figure 8: Accelerated degradation data of the microwave electronic assembly under 4 stress levels. Figure 9: Lifetime distribution of the electronic assembly under normal stress level (20\({}^{\circ}\)C). Figure 10: Reliability of the electronic assembly under normal stress level (20\({}^{\circ}\)C). ### Discussions In general, the better the model fits the degradation data, the greater its potential to reflect the degradation law of the product. Besides, it should be recalled that the ultimate goal of modeling ADT data is to extrapolate the lifetime and reliability of the product under the normal use condition, so it is crucial to ensure the accuracy of the degradation prediction for unseen stress levels. To this end, in the following, we discuss the fitting and extrapolation capabilities of the proposed model \(M_{0}\) compared to the other ADT models mentioned in Section 3.1.3. #### 5.3.1 Comparison of model fitting In this subsection, we first calculate the \(l_{\rm max}\) and the AIC to compare the effectiveness of different ADT models in fitting the degradation data. The parameter estimation methods for the four ADT models are the same as in Section 4.3.2. Table 6 gives the estimations of unknown parameters for the four ADT models. As can be seen in Table 6, from the \(l_{\rm max}\) and AIC values, \(M_{0}\) is the most suitable model since it considers the unit-to-unit variability and the long-range dependence simultaneously. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Model & \(\mu_{a}\) & \(\sigma_{a}\) & \(\alpha_{1}\) & \(\beta\) & \(\sigma\) & \(H\) & \(l_{\text{max}}\) & AIC \\ \hline \(M_{0}\) & 1.073e-6 & 1.894e-7 & 4.667 & 1.691 & 0.073 & 2.497e-8 & 579.444 & -1146.888 \\ \(M_{1}\) & 1.349e-6 & / & 4.518 & 1.674 & 0.056 & 0.1113 & 551.784 & -1093.568 \\ \(M_{2}\) & 1.072e-6 & 2.113e-8 & 4.473 & 1.723 & 0.023 & / & 479.348 & -948.696 \\ \(M_{3}\) & 1.601e-6 & / & 4.280 & 1.684 & 0.023 & / & 479.483 & -950.966 \\ \hline \end{tabular} \end{table} Table 6: Parameter estimations for the four ADT models in the microwave case. Furthermore, in order to quantitatively compare the performance of different ADT models in describing the real degradation data, we set up three comparison indices \(\overline{ER}\), \(\overline{ER}^{U}\), and \(\overline{ER}^{L}\) to quantify the accuracy of the predicted deterministic degradation trend and of the uncertainty quantification [26]. Specifically, for each ADT model, we generate 1000 degradation paths according to the estimations in Table 6 and calculate the predicted deterministic degradation trend \(x_{lj}^{pre}\), the predicted upper boundary \(x_{lj}^{pre-U}\) and the predicted lower boundary \(x_{lj}^{pre-L}\) at the \(j^{\text{th}}\) measurement time under the \(l^{\text{th}}\) stress level. Then, the three comparison indices can be calculated as shown in (33), where \(\underset{1\leq q\leq 1000}{\text{mean}}\{x_{lj}^{sim}\}\), \(\underset{1\leq q\leq 1000}{\text{quantile}}\{x_{lj}^{sim}\}^{U}_{5\%}\) and \(\underset{1\leq q\leq 1000}{\text{quantile}}\{x_{lj}^{sim}\}^{L}_{5\%}\) denote the mean value, upper 5% quantile and lower 5% quantile of the 1000 simulated degradation paths, respectively.
\[\begin{split}\overline{ER}&=\frac{1}{k}\sum_{l=1}^{k}\overline{ER}_{l},\quad\overline{ER}_{l}=\frac{1}{m_{l}}\sum_{j=1}^{m_{l}}ER_{lj},\quad ER_{lj}=\frac{\left|x_{lj}^{pre}-\bar{x}_{lj}\right|}{\bar{x}_{lj}},\quad\bar{x}_{lj}=\frac{1}{n_{l}}\sum_{i=1}^{n_{l}}x_{lij},\quad x_{lj}^{pre}=\underset{1\leq q\leq 1000}{\text{mean}}\{x_{lj}^{sim}\},\\ \overline{ER}^{U}&=\frac{1}{k}\sum_{l=1}^{k}\overline{ER}_{l}^{U},\quad\overline{ER}_{l}^{U}=\frac{1}{m_{l}}\sum_{j=1}^{m_{l}}ER_{lj}^{U},\quad ER_{lj}^{U}=\frac{\left|x_{lj}^{pre-U}-x_{lj}^{U}\right|}{x_{lj}^{U}},\quad x_{lj}^{U}=\max_{1\leq i\leq n_{l}}\{x_{lij}\},\quad x_{lj}^{pre-U}=\underset{1\leq q\leq 1000}{\text{quantile}}\{x_{lj}^{sim}\}^{U}_{5\%},\\ \overline{ER}^{L}&=\frac{1}{k}\sum_{l=1}^{k}\overline{ER}_{l}^{L},\quad\overline{ER}_{l}^{L}=\frac{1}{m_{l}}\sum_{j=1}^{m_{l}}ER_{lj}^{L},\quad ER_{lj}^{L}=\frac{\left|x_{lj}^{pre-L}-x_{lj}^{L}\right|}{x_{lj}^{L}},\quad x_{lj}^{L}=\min_{1\leq i\leq n_{l}}\{x_{lij}\},\quad x_{lj}^{pre-L}=\underset{1\leq q\leq 1000}{\text{quantile}}\{x_{lj}^{sim}\}^{L}_{5\%}.\end{split} \tag{33}\] It can be seen that the closer \(\overline{ER}_{l}\) is to 0, the more accurately the model predicts the deterministic degradation trend under the \(l^{\text{th}}\) stress level. The closer \(\overline{ER}_{l}^{U}\) or \(\overline{ER}_{l}^{L}\) is to 0, the better the predicted boundary describes the uncertainty of the real data under the \(l^{\text{th}}\) stress level. Moreover, \(\overline{ER}\), \(\overline{ER}^{U}\), and \(\overline{ER}^{L}\) reflect the predictive performance of an ADT model over all stress levels. Table 7 gives the quantitative indices for the four ADT models under all stress levels. The visualization results of the deterministic degradation trend and the degradation boundary are shown in Fig. 11 and Fig. 12. According to the above comparative tables and figures, compared with the other models, \(M_{0}\) is superior not only in predicting the deterministic degradation trend, but also in the uncertainty quantification. Additionally, the uncertainty quantification of \(M_{0}\) and \(M_{1}\) is considerably superior to that of models \(M_{2}\) and \(M_{3}\), demonstrating the necessity of considering the long-range dependence in the degradation process. \begin{table} \begin{tabular}{c c c c} \hline Model & \(\overline{ER}\) & \(\overline{ER}^{U}\) & \(\overline{ER}^{L}\) \\ \hline \(M_{0}\) & **0.1335** & **0.9389** & **3.7036** \\ \(M_{1}\) & 0.1490 & 1.1209 & 3.8215 \\ \(M_{2}\) & 0.1429 & 2.3264 & 12.3555 \\ \(M_{3}\) & 0.1849 & 2.4501 & 11.3810 \\ \hline \end{tabular} _Note_: The bolded results depict the best ones. \end{table} Table 7: Quantitative indices of the microwave case under all stress levels. Figure 11: Prediction for the deterministic degradation trend under all stress levels.
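The indices in (33) are straightforward to compute from the observed data and the simulated paths. A sketch for one stress level follows; the array layout (rows = units or simulated paths, columns = measurement times) is our own assumption, and the overall indices are simply the averages over the \(k\) stress levels.

```python
import numpy as np

def comparison_indices(obs, sims):
    """obs: (n_l, m_l) observed degradation; sims: (1000, m_l) simulated paths.
    Returns (ER_l, ER_l_U, ER_l_L) for one stress level, following (33) literally
    (positive degradation values are assumed)."""
    x_bar = obs.mean(axis=0)                        # mean observation per time point
    x_up, x_lo = obs.max(axis=0), obs.min(axis=0)   # observed extremes per time point
    x_pre = sims.mean(axis=0)                       # predicted deterministic trend
    x_pre_u = np.quantile(sims, 0.95, axis=0)       # upper 5% quantile boundary
    x_pre_l = np.quantile(sims, 0.05, axis=0)       # lower 5% quantile boundary
    er = np.mean(np.abs(x_pre - x_bar) / x_bar)
    er_u = np.mean(np.abs(x_pre_u - x_up) / x_up)
    er_l = np.mean(np.abs(x_pre_l - x_lo) / x_lo)
    return er, er_u, er_l
```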
#### 5.3.2 Comparison of model extrapolation In this subsection, we develop the cross-validation scheme [40] shown in Table 8 to explore the extrapolation capability of the four ADT models. The quantitative indices of cross-validations 1 and 2 are listed in Table 9 and Table 10. The visualization results are shown in Fig. 13 and Fig. 14. From the above comparative tables and figures, the following results can be obtained: * According to Table 9 and Table 10, for both cross-validation 1 and cross-validation 2, the predictions of the deterministic degradation trend based on \(M_{0}\) are clearly the best. In terms of uncertainty quantification, the upper and lower boundary predictions based on \(M_{0}\) are both the best for cross-validation 2, while the lower boundary predictions based on \(M_{0}\) are worse than those of \(M_{1}\) for cross-validation 1. Overall, model \(M_{0}\) has a superior capacity for uncertainty quantification. * From Fig. 13 and Fig. 14, for both cross-validations, \(M_{0}\) best describes the deterministic degradation trend and envelops the real data, proving the superiority of the model \(M_{0}\). * By analyzing the predictions of the four ADT models for the two cross-validations, we can deduce that \(M_{0}\) is more suitable for lifetime and reliability inference under the normal stress level in this case. \begin{table} \begin{tabular}{c c c c} \hline \hline Model & \(\overline{ER}_{1}\) & \(\overline{ER}_{1}^{U}\) & \(\overline{ER}_{1}^{L}\) \\ \hline \(M_{0}\) & **0.2937** & **0.7971** & 3.3203 \\ \(M_{1}\) & 0.4637 & 1.2940 & **1.3456** \\ \(M_{2}\) & 0.4973 & 3.2261 & 27.1602 \\ \(M_{3}\) & 0.3621 & 3.2290 & 28.5961 \\ \hline \end{tabular} _Note_: The bolded results depict the best ones. \end{table} Table 9: Quantitative indices of the microwave case for the cross-validation 1. \begin{table} \begin{tabular}{c c c c} \hline \hline Model & \(\overline{ER}_{4}\) & \(\overline{ER}_{4}^{U}\) & \(\overline{ER}_{4}^{L}\) \\ \hline \(M_{0}\) & **0.1249** & **0.4198** & **9.8124** \\ \(M_{1}\) & 0.2167 & 0.4468 & 9.9591 \\ \(M_{2}\) & 0.1617 & 0.8326 & 11.8889 \\ \(M_{3}\) & 0.2153 & 0.7809 & 12.5949 \\ \hline \end{tabular} _Note_: The bolded results depict the best ones. \end{table} Table 10: Quantitative indices of the microwave case for the cross-validation 2. Fig. 13: Prediction for the cross-validation 1: (a) deterministic degradation trend (b) degradation boundary. ## 6 Conclusion This paper addresses the lack of an ADT model that considers both long-range dependence and unit-to-unit variability. The following conclusions can be drawn: * This paper proposes an improved non-Markovian ADT model considering both the long-range dependence and unit-to-unit variability. The long-range dependence is quantified by the Hurst exponent in the FBM and the unit-to-unit variability is clearly analyzed and quantified in the degradation rate function. Then, the lifetime distribution and reliability are calculated by the MC method. Besides, a novel statistical inference method based on the EM algorithm is proposed to ensure the maximization of the overall likelihood function. * The results of the simulation case illustrate that the parameter estimations of the proposed statistical inference method are more accurate than those of the two-step MLE method, especially when degradation has the property of long-range dependence and the number of measurements is small.
Moreover, without considering unit-to-unit variability, a wrong degradation law will be deduced. * A microwave case is utilized to verify the effectiveness of the proposed methodology. Results show that ADT models considering long-range dependence are superior in uncertainty quantification. Besides, the proposed model gives superior predictions in both deterministic degradation trends and uncertainty quantification compared to the existing ADT models. Beyond the work of this study, there are other issues that merit future research. For example, we only focus on modeling constant-stress ADT data in this work. Constructing an ADT model considering long-range dependence for step-stress ADT data is an interesting topic and worth further research. ## Acknowledgements This work was supported by the National Natural Science Foundation of China [grant number 51775020], the Science Challenge Project [grant number TZ2018007], and the National Natural Science Foundation of China [grant number 62073009].
2301.11564
Learning 6-DoF Fine-grained Grasp Detection Based on Part Affordance Grounding
Robotic grasping is a fundamental ability for a robot to interact with the environment. Current methods focus on how to obtain a stable and reliable grasping pose in object level, while little work has been studied on part (shape)-wise grasping which is related to fine-grained grasping and robotic affordance. Parts can be seen as atomic elements to compose an object, which contains rich semantic knowledge and a strong correlation with affordance. However, lacking a large part-wise 3D robotic dataset limits the development of part representation learning and downstream applications. In this paper, we propose a new large Language-guided SHape grAsPing datasEt (named LangSHAPE) to promote 3D part-level affordance and grasping ability learning. From the perspective of robotic cognition, we design a two-stage fine-grained robotic grasping framework (named LangPartGPD), including a novel 3D part language grounding model and a part-aware grasp pose detection model, in which explicit language input from human or large language models (LLMs) could guide a robot to generate part-level 6-DoF grasping pose with textual explanation. Our method combines the advantages of human-robot collaboration and LLMs' planning ability using explicit language as a symbolic intermediate. To evaluate the effectiveness of our proposed method, we perform 3D part grounding and fine-grained grasp detection experiments on both simulation and physical robot settings, following language instructions across different degrees of textual complexity. Results show our method achieves competitive performance in 3D geometry fine-grained grounding, object affordance inference, and 3D part-aware grasping tasks. Our dataset and code are available on our project website https://sites.google.com/view/lang-shape
Yaoxian Song, Penglei Sun, Piaopiao Jin, Yi Ren, Yu Zheng, Zhixu Li, Xiaowen Chu, Yue Zhang, Tiefeng Li, Jason Gu
2023-01-27T07:00:54Z
http://arxiv.org/abs/2301.11564v2
# Learning 6-DoF Fine-grained Grasp Detection Based on ###### Abstract Robotic grasping is a fundamental ability for a robot to interact with the environment. Current methods focus on how to obtain a stable and reliable grasping pose in object wise, while little work has been studied on part (shape)-wise grasping which is related to fine-grained grasping and robotic affordance. Parts can be seen as atomic elements to compose an object, which contains rich semantic knowledge and a strong correlation with affordance. However, lacking a large part-wise 3D robotic dataset limits the development of part representation learning and downstream application. In this paper, we propose a new large Language-guided SHape grAsPing datasEt (named Lang-SHAPE) to learn 3D part-wise affordance and grasping ability. We design a novel two-stage fine-grained robotic grasping network (named PIONEER), including a novel 3D part language grounding model, and a part-aware grasp pose detection model. To evaluate the effectiveness, we perform multi-level difficulty part language grounding grasping experiments and deploy our proposed model on a real robot. Results show our method achieves satisfactory performance and efficiency in reference identification, affordance inference, and 3D part-aware grasping. Our dataset and code are available on our project website [https://sites.google.com/view/lang-shape](https://sites.google.com/view/lang-shape) ## I Introduction Fine-grained robotic manipulation can allow a robot to tackle human tasks by mincing human hands in embodied AI [1]. Different from low-level manipulation in control (e.g pick and place), fine-grained robotic manipulation not only has the abilities (e.g graspability) but also considers additional details (e.g which part to grasp; why to grasp this part), which reduces to affordance [2]. For example, _when a human demonstrates the intention of drinking a cup of coffee to hope a robot brings his mug, the robot is expected to grasp the handle and not pollute the inside of the mug subliminally_. Behind that, the affordance of manipulated objects is needed and how to empower low-level control policy with high-level concept and knowledge represented by human symbolic language have received increasing attention in the robotics community [3]. Thanks to recent progress in artificial intelligent fields (e.g. computer vision and natural language processing), vision-based robotic manipulation methods have achieved great success. Many efforts are made by researchers to explore steady, dexterous, reliable grasping policy. For planar grasping, [4, 5, 6] propose 2D grasp detection neural network using image-based grasp datasets. It typically keeps the camera heading vertically to the tabletop and generate grasp pose in a 2D plane, which is simple and easy for using directly computer vision methods such as transformer [7] in [8]. For spatial grasping, [9, 10, 11, 12] propose point cloud-based networks to predict 6-DoF grasping pose, which allows a robot arm to grasp objects in 3D space more human-like. However, all of this research focuses on how to get a reliable and steady grasping pose based in object wise. A more fine-grained grasping is not available for a specific requirement, such as based on some affordance to interact with the environment. For affordance, Chu et al. [13] and Xu et al. [14] and Tekden et al. [15] explore affordance in grasping detection. However, they use limited objects (i.e. only two in [15]) or affordance with small-scale data (i.e. 
only seven in [16]) to take a trial on simple setup (i.e. simulation or small test set). For a real-world setup, a large-scale affordance-related grasping dataset and pragmatic grasping detection model in spatial space are desired. To overcome the challenges in existing work on grasping detection and affordance, we propose a large language-guided shape grasp dataset with human-in-the-loop method for human-robot interaction in spatial grasping detection systematically. We combine natural language [17], 3D part-segmentation [18], and 6-DoF robotic grasping [10] together Fig. 1: Dataset generation. One random placement on a table scenario and four viewpoints are given to sample the object point cloud. Affordance-related sentences are generated by the knowledge base. Positive (**cyan**) and negative (**red**) grasps about **Handle** and **Body** are sampled respectively. to solve fine-grained grasping with affordance. We propose a new large Language-guided SHape grAsPing datasEt (named **Lang-SHAPE**), which contains point cloud, grasping label, and affordance-related language reference considering \(35\) parts, shown in TABLE. I. Human instructions are constructed by \(44\) templates shown in TABLE II and more than 1.85 million sentences are generated. For modeling, we extend a typical spatial grasping detection method (PointNetGPD) [10] with human language intervention (named **PIONEER**). We make two modifications. First, we are to provide a visual servo eye-in-hand object scan policy to capture global and detailed point cloud observation. Second, we introduce a 3D part language grounding model to constrain sampling region [19] to realize part-aware grasping detection based on the affordance in human language input. The whole pipeline is shown in Fig. 2. We give multi-level difficulty language grounding grasping experiments to evaluate our proposed dataset and model in inference and compositional generalization. A real-robot experiment is performed to verify the effectiveness of our proposed model in the real-world qualitatively. Results show our method achieves a huge advantage in reference identification, affordance inference, and 3D part-aware grasping with strong generalization benefiting from using pre-trained language models. To the best of our knowledge, we are the first to consider affordance in part-level for spatial robotic grasping using natural language, building the first 3D large language-guided shape grasping dataset covering affordance and intention, named Lang-SHAPE, and proposing the first grasp point detection model with 3D part language grounding. ## II Related work ### _Language Grounding in Robotic Manipulation_ The task of visual language grounding is to localize a region described by a given referring expression, the query [20]. For planar grasping, the localized region is usually bounding box [21, 22, 23, 24]. This work executes on the tabletop and is sensitive to occlusion because of coarse-grained bounding box instead of pixel-wise segmentation mask. For spatial grasping, a closely related work by [25] studied to reason visual and non-visual language about 3D objects, which is mainly to observe the object instead of grasping it. Furthermore, from the language aspect, although there are several language grounded methods used in robotic grasping [21, 22, 23, 3, 3], most of them consider direct command (e.g. abstract action) or scene understanding with spatial relationship in object wise and object with affordance in part wise has not been investigated. 
Different from the above methods, we are the first to consider part-wise grounding language on point cloud and real spatial grasping. ### _Affordance in Robotic Grasping Detection_ The possible action an agent could make to interact with the object in the environment and the functionality is a permanent property of an object independent of the characteristics of the user [26], which is the core idea of affordance theory. To detect grasp point in pixel-wise, Vahrenkamp et al. [27] and Chu et al. [28] propose an affordance segmentation via synthetic images to realize planar grasping based on part affordance in the conventional mask. Furthermore, Xu et al. [14] introduce an affordance keypoint detection by providing structured predictions regarding the position, direction, and extent. However, the pixel-based affordance is barely used by 6-DoF grasping methods, which leads it only to deploy in scenarios such as tabletop. Recently, with the advent of the components like 3D AffordanceNet [29], point-wise spatial grasping detection with affordance based on point cloud is proposed [16, 30], which are to detect limited parts for affordance and lack in expansibility and generalization. Compared to existing work considering affordance in an image or a closed set, we give another perspective to solve affordance-based spatial grasping detection using natural language in open world. Benefiting from various affordance knowledge in the form of text and large pre-trained language models, we design a language-vision task to establish the mapping between affordance and objects in the real world. It characterizes effective and flexible deployment and strong generalization. To meet autonomous decisions for a robot, the natural language can also be abstracted as an interface of the external module (e.g.expert system [31] or cloud brain system [32]). ## III Data Generation We collect our dataset Lang-SHAPE consisting of input point cloud observation \(\mathbb{C}\) rendering, output grasping \(\mathbb{G}\) labeling, and natural language \(\mathbb{Q}\) generation. To obtain the semantics of object part, we choose ShapeNet part dataset [18] as a source to build our dataset. The dataset is widely used in 3D object part segmentation and provides object point cloud which contains 16,881 shapes from 16 categories, annotated with \(50\) parts in total. To complete observation rendering and grasping labeling, we retrieve object meshes from ShapeNetCore [37]. For language data, we construct our corpus based on a large commonsense knowledge graph COMET-ATOMIC-2020 [38]. Comparison with existing grasping dataset is shown in TABLE. I. Formally, following the definition of GPD [19], let \(\mathcal{W}\subseteq\mathbb{R}^{3}\) denote the robot workspace and \(\mathcal{C}_{raw}\subset\mathcal{W}\) the 3D point cloud perceived by the sensor. We extend \(\mathcal{C}_{raw}\) to \(\mathcal{C}\subseteq\mathbb{R}^{4}\) with extra part semantic information. Each point in the point cloud is paired with at least one viewpoint with camera pose \(\mathcal{V}\in SE(3)\). An object observation with a point cloud can be defined as a tuple \(\mathbb{C}=(\mathcal{C},\mathcal{V})\). We denote a grasp configuration in 3D space \(g=(\mathbf{p},\mathbf{R})\in SE(3)\), which specifies the position and orientation of the grasping center point of the gripper local coordinate frame to the robot base frame. A full example in our proposed dataset is organized as a 3-tuple \((\mathbb{C},\mathbb{G},\mathbb{Q})\) and detailed as follows. 
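Before detailing each component, the following minimal sketch shows how one example of the 3-tuple \((\mathbb{C},\mathbb{G},\mathbb{Q})\) could be represented in code; the field names and array shapes are illustrative assumptions for exposition only, not the dataset's actual on-disk schema.

```python
# Illustrative representation of one Lang-SHAPE example (C, G, Q).
# All names and shapes below are assumptions made for exposition only.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Observation:
    points: np.ndarray        # (N, 4) array: x, y, z, part-label id
    viewpoint: np.ndarray     # (4, 4) camera pose V in SE(3)

@dataclass
class Grasp:
    position: np.ndarray      # (3,) gripper-center position p
    rotation: np.ndarray      # (3, 3) gripper orientation R
    q_fc: float               # force-closure quality metric
    q_gws: float              # grasp-wrench-space quality metric

@dataclass
class LangShapeExample:
    observations: List[Observation]   # full view plus partial views
    grasps: List[Grasp]               # part-constrained grasp set G
    sentences: List[str]              # affordance-related instructions Q
```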
### _Object Semantic Observation_ In real-world setting, a robot can obtain point cloud to perceive object 3D information using RGB-D camera. In our dataset, we establish a table scenario to randomly place one object \(O\) rendered by pyrender 1 and collect \(547,417\) point cloud observations. We repeat the random placement three times. In total, each example of one object contains \(13\) point cloud including one full-view point cloud observation, and three random placement partial observations (each placement includes four view observations.). We adopt part labels in ShapeNet part dataset [18] and map them to sampled point cloud by ICP and KD-tree search. Footnote 1: [https://github.com/mmatl/pyrender](https://github.com/mmatl/pyrender) In a random object placement \(p_{i}\), four viewpoints \(v_{i}\) point cloud \(\{\mathbb{C}_{v_{i}}^{p_{i}},i=0,1,2,3;j=0,1,2\}\) can be sampled with \(4,096\) points \(\mathcal{C}\in\mathbb{R}^{4\times 4096}\) with part label in object coordinates, as shown in Fig. 1. A full-view point cloud \(\mathbb{C}_{fullv}\) are sampled from object mesh directly with \(10,000\) points \(\mathcal{C}\in\mathbb{R}^{4\times 10000}\) with part label in object coordinates. A full point cloud observation sets are denoted as \(\mathbb{C}=\{\mathbb{C}_{fullv},\mathbb{C}_{v_{i}}^{p_{i}},i=0,1,2,3;j=0,1,2\}\). ### _Grasp Sampling and Labeling_ We follow the sampling policy of PointNetGPD [10] using antipodal sampling based on trimesh. 2 Different from existing work, which samples uniformly on the mesh of the object, we introduce part semantic to sampling process and only sample feasible grasps on the specific part surface with sanity checking. Each grasp also contains a force-closure metric \(Q_{fc}\) and Grasp Wrench Space (GWS) analysis metric \(Q_{gws}\) consistent with PointNetGPD, which is used to evaluate the grasp quality. Finally, for each example, we obtain a grasp set \(\mathbb{G}=\{(g^{i},Q_{fc}^{i},Q_{gws}^{i}),\mathbf{p^{i}}\in R_{part}\}\) with size \(60\) elements, where \(g^{i}\) is a grasp configuration with position \(\mathbf{p^{i}}\), \(Q_{fc}^{i}\) and \(Q_{gws}^{i}\) are grasping evaluation metrics, \begin{table} \begin{tabular}{c c c} \hline \hline Index & Type-Index & Template \\ \hline 1,2 & 1-1 & \(1<verb>\) the \(<object>\)/\(<sth>\). (Please) \(<action>\) it \(<by>\) the \(<part>\) that/which can \(<affordance>\). \\ 3,4 & 1-2 & \(1<verb>\) the \(<object>\)/\(<sth>\). (Please) \(<action>\) it \(<by>\) the \(<part>\)\(<purpose>\)\(<affordance>\). \\ 5,6 & 1-3 & \(1<verb>\) the \(<object>\)/\(<sth>\). (Please) \(<action>\) it \(<by>\) the \(<part>\) so that you can \(<affordance>\). \\ 7,8 & 1-4 & \(1<verb>\) the \(<object>\)/\(<sth>\). (Please) \(<action>\) it \(<by>\) the \(<part>\). \\ 9,10 & 1-5 & \(1<verb>\) the \(<object>\)/\(<sth>\). (Please) \(<action>\) it \(<by>\) \(<affordance>\). \\ 11,12 & 1-6 & \(1<verb>\) the \(<object>\)/\(<sth>\). (Please) \(<action>\) and \(R_{part}\) is the specific part surface (e.g. surface of **Handle** in a bag.). ### _Language Description_ We propose \(44\) templates to generate our language descriptions \(\mathcal{Q}\in\mathbb{Q}\) about the part, object and grasp shown in TABLE. II. We design the templates from reference, command, object description, intention, and part affordance to generate the language references, where four types of templates are interdicted inspired by [35]. 
The first three (1-*,2-*,3-* in Type-Index) contain \(7\) sentence templates respectively, and the object is replaced by a referring word to extend a referring version template (total \(14\) sentences), where templates \(1-3\) describe the affordance of a part, \(4-6\) remove the part information only describing part affordance, and \(7\) does not contain the affordance information. The first type **1-*** is established considering human _intention_ to do something. The second type **2-*** is about human instruction to give a _action_ command. The third type **3-*** is an integrated version of the above two kinds. Type **4** is an object or part description phrase. For \(\langle purpose\rangle\) and \(\langle affordance\rangle\), we seek a large of scene knowledge from a commsense knowledge graph COMET-ATOMIC-2020 [38] with \(1.33\)M everyday inferential knowledge tuples about entities and people&events. It represents a large-scale common sense repository of textual descriptions that encode both the social and the physical aspects of common human everyday experiences. We construct our corpus based on the tuples and people&events to enhance the practicability of our language data. For \(\langle part\rangle\), we do not adopt part label in ShapeNet part dataset, in which same semantic part in different objects are given different labels (i.e. handle in the bag and handle in the mug are given two different labels for classification). Instead, we merge part-level labels containing the same semantic (affordance) information (from \(50\) to \(35\) categories), and augment semantic labels by considering synonyms and hyponyms in PartNet [39], WordNet [40], and Wikipedia [41]. ## IV Problem Definition According to existing work [19] about grasp pose detection using point cloud, given a point cloud and a description of the geometry of a robotic hand (hand configuration), _grasp pose detection_ is to predict the grasp configurations based on the hand configuration, from which a grasp would be formed if the fingers are to close. A typical solution is to sample enough grasp configuration candidates and select the one with the highest score. We formulate it as a probability model: \[P(g_{i}|\mathcal{R},\mathbb{C},\Theta), \tag{1}\] where \(g_{i}\) is a sampled grasp configuration, \(\mathbb{C}\) is a point cloud observation, \(\mathcal{R}\) is an interesting region to sample grasp configuration candidates, and \(\Theta\) is a hand configuration. However, most existing work considers the region of interest (ROI) as prior input by object detection or localization in a scene (e.g. cluttered scene). These result in object wise coarse-grained grasping, where part semantic of an object is ignored. In this paper, we consider more fine-grained grasping detection by constraining ROI using affordance and intention. We consider a part-aware probability model of grasp pose detection using external knowledge from natural language and decompose it into two parts, given by: \[P(g_{i},\mathcal{R}|\mathbb{C},\mathcal{Q},\mathcal{Q})=P(g_{i}|\mathcal{R}, \mathbb{C},\Theta)\times P(\mathcal{R}|\mathbb{C},\mathcal{Q}), \tag{2}\] Fig. 2: The overall architecture of PIONEER. **black** arrow trace refers to 3D part language grounding. The **red** arrow trace refers to part-aware grasp pose detection. Multiple object observations with point cloud are collected, ICP and downsampled before fed into PIONEER. where \(\mathcal{Q}\) is a natural language sentence for object description and instruction. 
\(P(g_{i}|\mathcal{R},\mathbb{C},\Theta)\) is given by a grasp pose detection model (e.g., GPD). \(P(\mathcal{R}|\mathbb{C},\mathcal{Q})\) is given by a 3D part language grounding model. Two assumptions are as follows: **Assumption 1.** Natural language sentences are beneficial to grasp pose detection during human-robot interaction. **Assumption 2.** There is at least one positive grasping candidate that can be detected within the grounding part of the object under the observation. ## V Method We propose a novel human-in-the-loop framework to model Eq. 2, named **PIONEER** (gras**P** poInt detecti**ON** with shapE** languag**E** g**Rounding). The overall architecture is shown in Fig. 2. It consists of two modules, where the first is a part-wise 3D language grounding model, which is used for \(P(\mathcal{R}|\mathbb{C},\mathcal{Q})\). The second is a part-aware grasp pose detection model for \(P(g_{i}|\mathcal{R},\mathbb{C},\Theta)\). ### _Part-wise 3D Language Grounding_ Given a query sentence \(\mathcal{Q}\) from human and robotic point cloud observation point cloud \(\mathbb{C}\), our 3D language grounding model is to detect the query-related region \(\mathcal{R}\). It can be formulated as a binary classifier function \(\phi\) for each point in point cloud \(\mathcal{C}\): \((\mathcal{Q},\mathcal{C})\rightarrow\mathcal{R}\{0,1\}\). To achieve this, our proposed model consists of four modules: language encoder, point cloud encoder, multimodal fusion module, and a binary classifier, shown in Fig. 2 with **black trace**. A query sentence \(\mathcal{Q}\) from a human is fed to a pre-trained language model encoder (we use BERT [17]3) passing two fully connected layers to calculate a 128-dimension language feature \(Z_{q}\in\mathbb{R}^{1\times 128}\). For point cloud, we choose more than one viewpoint cloud to merge a relatively complete point cloud by iterative closest point (ICP) to camera coordinates and downsample \(2,048\) points4\(\mathbb{C}\in\mathbb{R}^{2048\times 3}\). The preprocessed \(\mathbb{C}\) is input into PointNet [42] to calculate a feature map \(Z_{c}\in\mathbb{R}^{2048\times 128}\). After extract language \(Z_{q}\) and point cloud features \(Z_{c}\), we repeat the \(Z_{q}\)\(2,048\) times to construct a feature map \(Z_{q}^{\prime}\in\mathbb{R}^{2048\times 128}\). We concatenate \(Z_{c}\) and \(Z_{q}^{\prime}\) and pass the new feature map to an MLP to extract fusion feature \(Z_{fused}\). At last, the fusion feature is input to a binary classifier to predict which points to be grounded. The whole pipeline can be formulated as: Footnote 3: We use bert-base-uncased model in our work. Footnote 4: We simplify to ignore viewpoint \(\mathcal{V}\) representation since we have transformed all points to the same coordinates by ICP. \[\begin{split} Z_{q}&=E_{lang}\left(\mathcal{Q} \right),\\ Z_{c}&=E_{point}\left(\mathbb{C}\right),\\ Z_{fused}&=MLP\left(repeat\left(Z_{q}\right) \oplus Z_{c}\right),\\ \mathcal{R}&=Classifier\left(Z_{fused}\right),\end{split} \tag{3}\] where \(E_{lang}\) and \(E_{point}\) are language and point cloud encoders respectively. \(\oplus\) denotes the concatenation operation. ### _Part-aware Grasp Pose Detection_ To achieve part-aware grasp pose detection, we extend PointNetGPD [10] in candidate sampling policy and grasp selection. 
Under **Assumption 2**, different from sampling uniform randomly on the preprocessed point cloud of the whole object [10], we introduce high-level cognitive semantic \(\mathcal{R}\) to constrain sampling region for candidate grasp set \(g_{i}\in S\), shown in Fig. 2 with **red trace**. During our sampling process, we sample potential grasp points within \(\mathcal{R}\), while making collision detection and force closure detection to evaluate sampling quality still using the whole object point cloud. ### _Training and Inference_ We train the 3D language grounding model and part-aware grasp pose detection model separately. To train 3D language grounding model, we use \((\mathbb{C},\mathbb{Q})\) in our proposed dataset Lang-SHAPE in Sec. III. The parameters of pre-trained BERT are frozen during the training process. We use a binary-class cross-entropy loss to optimize the network with Adam optimizer. We train the network for 200 epoches with batchsize \(32\) and learning rate \(1e^{-3}\). To the train the part-aware grasp pose detection model, we use \((\mathbb{C},\mathbb{G})\) in Lang-SHAPE dataset, in which the oracle point cloud semantic region is used to constrain the sampling region. We also use a binary-class cross-entropy loss to optimize the network with Adam optimizer. The batchsize is \(32\), training epoch is \(60\), and learning rate is \(5e^{-3}\). To infer new input data, the grounding region of an object from 3D language grounding model output is used to inject into the part-aware grasp pose detection model. A series of grasp candidate scores are predicted finally. We select the optimized grasp to execution considering these scores and robotic reachability. All models are trained and tested under PyTorch 1.10. ## VI Experiments We conduct both simulation and real-world robot experiments to investigate six research questions (**RQ**). The first one is regarding the usefulness of the new dataset, the middle four are about the analyses of the proposed models, and the last is concerning the effectiveness of our method on the real robot. **RQ1:** For the effectiveness of proposed dataset, is our proposed new dataset useful for fine-grained 3D robotic grasping tasks, especially for affordance-aware task? **RQ2:** How does the pre-trained language model perform compared with existing baseline methods such as the randomly initialized model or similarity-based method in 3D language grounding with object parts? **RQ3:** How much does the pre-trained language model empower the embodied inference ability given different-level prompt language? **RQ4:** For compositional generalization, given the fact that an object can usually be decomposed into a certain number of parts, how much does our proposed model perform in part grounding between different objects with at least one but not all similar parts? **RQ5:** For human intervention, how does our proposed 3D part language grounding method with human-in-the-loop perform in fine-grained grasping detection success rate and effectiveness? **RQ6:** For real-wolrd deployment, does our proposed method perform well on a real-world robot? ### _Data Organization_ To train and test our proposed models, we split our Lang-SHAPE dataset object-wise and the part-wise, respectively, named **Split Mode**. Object-wise, we split all examples in Lang-SHAPE by the object category (16 categories) with ratios \((8:1:1)\) for (training/validation/test) sets. 
Similarly, part-wise, we split all examples in Lang-SHAPE by the part category (35 categories) with ratios \((8:1:1)\) for (training/validation/test) sets. We further set up fine-grained language configurations, named **Language Mode**, defined in TABLE III. We provide two compositional generalization sets in TABLE VI. Two extra split modes are introduced: \(\bullet\)**related data** has two attributes. First, the object categories of examples in the training set do not occur in the test set. Second, at least one but not all parts of each example in the training set are similar to those in the test set. Nevertheless, the parts contained in the training set cover the parts in the test set. The details are shown in TABLE VI with **Compositional Factors**. In the first setup, chair, laptop, and skateboard examples in Lang-SHAPE are collected as the training set, in which they have at least one part such as leg or board. Table examples are used as the test set, which consists of legs and boards. In the second setup, guitar and pistol examples in Lang-SHAPE are used as the training set, in which they have at least one part such as body or handle. Mug examples are adopted as the test set, which consists of body and handle. \(\bullet\)**non_related data** The data in the training set does not contain any objects or parts that occur in the test set. ### _Evaluation Metrics_ Grounding evaluation and 3D grasping detection evaluation are performed in this paper, following [18, 42]. For 3D part language grounding, we use four metrics: \(\bullet\)**Accuracy:** Since we formulate 3D part language grounding as a binary classification problem, we calculate classification accuracy on points. \(\bullet\)**Part avg IoU:** We calculate the IoU of grounded points in each example [18] and average IoUs for each part category to calculate mIoUs. Finally, we average each part's mIoUs to calculate the _Part avg IoU_. \(\bullet\)**Class avg IoU:** We calculate the IoU of grounded points in each example and average all IoUs directly. For 3D grasping detection, we define three metrics: \(\bullet\)**Success Rate:** The percentage of grasps where both grasp points grounding is correct and pre-grasp prediction is successful. \(\bullet\)**Part-agnostic Success Rate:** The percentage of grasps that pre-grasp prediction is successful. \(\bullet\)**Trial Cost:** To get a high-quality grasp candidate for the grasp score module, how many grasp sampling trials are needed to perform in a standard antipodal grasping sampler (GPG) [19]. ### _Models_ We design 3 baselines for comparison: \(\bullet\)**Baseline 1:** For 3D part language grounding, inspired by [25], we compare our method with a zero-shot classifier using pre-trained models directly. Instead of finetuning BERT, we use cosine distance between visual and language features to predict whether each point is grounded or not. Visual encoder is from a pre-trained part segmentation model [42], while language encoder is BERT with frozen parameters. \(\bullet\)**Baseline 2:** For 3D part language grounding, we replace BERT in our proposed method in Fig. 2 with a Transformer encoder 5, and train the whole model from scratch. This is to verify whether the pre-trained model can provide useful prior knowledge for our task. 
Footnote 5: [https://github.com/pytorch/examples/tree/master/word_language_model](https://github.com/pytorch/examples/tree/master/word_language_model) (6 encoder_layers implemented) \(\bullet\)**Baseline 3:** For 3D grasp pose detection, we use PointNetGPD [10] without human 3D language grounding intervention during the sampling process as baseline to verify the priority of using language human intervention. We propose two models to solve 3D part language grounding and grasp pose detection problem: \(\bullet\)**PIONEER** is what we propose in Fig.2. \(\bullet\)**PIONEER-T5:** To empower the bidirectional ability of human-robot interaction, we introduce an extra generative pre-trained language model (T5 [43]) to infer 3D part language grounding based on prompt engineering. The instruction from human is first fed into T5 finetuned by \begin{table} \begin{tabular}{l l} \hline \hline **Language Mode** & **Definition** \\ \hline full\_data & all \(44\) sentences in Table II used in the process. \\ known\_all & sentences containing object name, part name and affordance, with index [1,3,5,15,17,19,29,31,33]. \\ object\_unknown & sentences not containing object name, with index [2,4,6,8,...,40,42,44]. \\ part\_unknown & sentences not containing part name, with index [7,8,9,10,11,12,21,22,23,24,25,26,35,36,37,38,39,40]. \\ part\_unknown\_part\_known & sentences not containing object name, but containing part name, with index [7,9,11,2],23,25,35,37,39]. \\ part\_specific & under human intervention to give an optimal grasp part for each object observation. \\ \hline \hline \end{tabular} \end{table} TABLE III: Multi-level difficulty language configuration. prompt learning to generate object-part description index **43** in TABLE II. Our prior experiments show that model using the naive object-part description can achieve very high performance, and thus we combine a T5 based on prompt learning and a PIONNER trained on index **43**. ### _Simulation Experiments on Lang-SHAPE_ Based on our proposed Lang-SHAPE dataset, we give a series of quantitative evaluations to answer research questions **RQ1-RQ5**. The model selection is followed by the maximum Instance avg IoU in the validation set. #### Iv-D1 3D Part Language Grounding To evaluate the overall performance of our proposed model, we compare our model with Baseline 1 and Baseline 2 in part wise and object-wise data split mode, shown in TABLE IV. The language mode used in model training is full_data. For **RQ1, RQ2**, our proposed model PIONNER outperforms in all metrics. Compared with Baseline 1 (\(0.2355\) in Part avg IoU), our model (\(0.6696\) in Part avg IoU) achieves more than double improvement relative to the zero-shot method. We attribute the poor performance by Baseline 1 to two points. The first is no learning process to adjust parameters from the prior domain to our Lang-SHAPE domain. The second is that the visual encoder and the language encoder are not trained jointly, which lacks shared feature space to fuse multimodal features. This also shows the usefulness of our proposed Lang-SHAPE dataset, which can be used for point cloud-language joint training. Compared with Baseline \(2\), our model achieves more than \(8\%\) improvement in Instance avg IoU with part wise. The results indicate the advantage of pre-trained language model over to the randomly initialized model (i.e. Transformer) in robustness and generalization. 
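To make the comparison above concrete, the grounding pipeline of Eq. (3) can be sketched in PyTorch as follows; the pre-trained BERT and PointNet encoders are treated as black boxes, the layer widths follow Sec. V-A, and every other detail is an illustrative assumption rather than the exact implementation.

```python
# Minimal PyTorch sketch of the Eq. (3) grounding model evaluated above.
# `lang_encoder` (frozen BERT pooled output, 768-d) and `point_encoder`
# (PointNet, per-point 128-d features) are assumed black boxes here.
import torch
import torch.nn as nn

class PartGrounding(nn.Module):
    def __init__(self, lang_encoder, point_encoder):
        super().__init__()
        self.lang_encoder = lang_encoder
        self.point_encoder = point_encoder
        self.lang_proj = nn.Sequential(        # two FC layers -> Z_q (128-d)
            nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 128))
        self.fusion = nn.Sequential(           # MLP on concatenated features
            nn.Linear(256, 128), nn.ReLU())
        self.classifier = nn.Linear(128, 2)    # per-point binary grounding

    def forward(self, tokens, points):
        z_q = self.lang_proj(self.lang_encoder(tokens))     # (B, 128)
        z_c = self.point_encoder(points)                     # (B, N, 128)
        z_q = z_q.unsqueeze(1).expand(-1, z_c.size(1), -1)   # repeat N times
        z_fused = self.fusion(torch.cat([z_c, z_q], dim=-1))
        return self.classifier(z_fused)                      # (B, N, 2) logits
```

The two logits per point are trained with the cross-entropy loss mentioned in Sec. V-C, so the grounded region \(\mathcal{R}\) is read off as the points whose positive logit dominates.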
#### Iv-D2 Affordance Inference To evaluate the inference ability of the proposed model, we set up different corrupted language inputs to train models. Three models are used for comparisons: Baseline 2, PIONNER, and PIONNER-T5. The first two are trained following language mode and split mode in TABLE V. In PIONNER-T5, we introduce a finetuned T5 [43] with prompt engineering. Since T5 is an unimodal model and cannot perform effective inference when neither part nor object is unknown, we set up a more fine-grained language mode **part_unknown_object_known**, which is T5 input concatenated with a prompt question. We design four prompts familiar with [3]: '_what part should you grasp?_', '_which part should you take_', '_how can you grasp it for me?_', '_how can you take it for me?_', one of them is randomly selected. The object-part description index **43** in TABLE II is the groundtruth of T5 and input of PIONNER. T5 and PIONNER are trained respectively in PIONNER-T5, and the test process uses them as a cascade model. For **RQ2, RQ3**, in PIONNER, as we can see **known_all**, **object_unknown**, and **part_unknown**, with different object attributes being corrupted, the performance of models decreases obviously. From Accuracy and Instance avg IoU, we can find that when the object name is unknown, the model still performs better than the part name is unknown. Comparing PIONNER with Baseline 2 in **object_unknown** and **part_unknown**, we can find that pre-trained language model can infer the grounded part via affordance information effectively. For example, compared with Baseline 2, Instance avg IoU is increased from \(0.6253\) to \(0.7583\) in part_unknown, part-wise in TABLE V. For **RQ3**, we provide an explicit prompt-based model PIONNER-T5. From **part_unknown_object_known** in TABLE V, PIONNER-T5 achieves the best performance, which again shows the pre-trained language model with prompt learning can enhance the inference ability of the model. #### Iv-D3 Comsitional Generalization It is the ability to generalize systematically to a new data distribution by combining known components [44]. To measure the compositional generalization of our models, we propose two compositional generalization sets defined in Sec. VI-A, shown in TABLE VI. For **RQ4**, in PIONNER, with the same test set, the model trained using our collected set (related data) achieves better performance in all metrics compared with the model trained using non_related data. This indicates that our proposed model is effectively generalizable. In comparison with Baseline 1, results show that our proposed model performs better than the zero-shot method. #### Iv-D4 Grasping Detection and Cost To evaluate the effectiveness of our proposed method in fine-grained grasping detection, we test PIONNER on the whole Lang-SHAPE dataset (including 3D part language grounding and grasp data). We train 3D part language grounding and PointNetGPD respectively following the part-wise and object-wise splits respectively. Language mode is full_data in most evaluations. The results are shown in TABLE VII. The grasp sampling rule is that the sampler ends sampling at a maximum of \(150\) sample trials or gets \(20\) high-quality candidate grasps. 
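The sampling rule just described can be summarized by the following control-flow sketch; `sample_antipodal_grasp`, `in_collision`, and `is_force_closure` are hypothetical placeholders standing in for the antipodal sampler and quality checks used by PointNetGPD, so only the stopping logic is taken from the text.

```python
# Sketch of the part-constrained candidate sampling loop: stop after at most
# 150 trials or once 20 high-quality candidates are collected.  The helper
# functions are hypothetical placeholders, not the actual sampler API.
def sample_part_constrained_grasps(cloud, grounded_mask,
                                   max_trials=150, max_candidates=20):
    candidates = []
    for _ in range(max_trials):
        grasp = sample_antipodal_grasp(cloud, region_mask=grounded_mask)
        if grasp is None:
            continue
        # collision and force-closure checks still use the whole object cloud
        if in_collision(grasp, cloud) or not is_force_closure(grasp, cloud):
            continue
        candidates.append(grasp)
        if len(candidates) >= max_candidates:
            break
    return candidates
```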
It is noted that in the simulation grasping experiment, the sampling process is on point cloud data \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multicolumn{1}{c}{**Model**} & \multicolumn{1}{c}{**Split Mode**} & \multicolumn{1}{c}{**Accuracy**} & \multicolumn{1}{c}{**Part avg IoU**} & \multicolumn{1}{c}{**Class avg IoU**} & \multicolumn{1}{c}{**Instance avg IoU**} & \multicolumn{1}{c}{ \begin{tabular}{c} **Language** \\ **Encoder** \\ \end{tabular} } \\ \hline Baseline 1 & part-wise & 0.5179 & 0.2355 & 0.2659 & 0.2461 & BERT \\ Baseline 2 & part-wise & 0.8904 & 0.5344 & 0.5499 & 0.6953 & Transformer \\ PIONNER (ours) & part-wise & **0.9251** & **0.6696** & **0.6873** & **0.7826** & BERT \\ \hline Baseline 1 & object-wise & 0.5373 & 0.2076 & 0.2271 & 0.2153 & BERT \\ Baseline 2 & object-wise & 0.8603 & 0.4742 & 0.5091 & 0.6415 & Transformer \\ PIONNER (ours) & object-wise & **0.9226** & **0.6490** & **0.6805** & **0.7770** & BERT \\ \hline \hline \end{tabular} \end{table} TABLE IV: Overall results of 3D part language grounding in robustness and generalization. instead of object meshes which are used in the dataset collection process. The sampling process prefers to real-world setting although we perform experiments in simulation using dataset. For **RQ1, RQ5**, by comparing PIONEER with Baseline 3 in TABLE VII, we can see that our proposed model can realize part-grounded grasping with more than \(40\%\) success rate, while Baseline 3 can only get \(25\%\) approximately. This indicates the effectiveness of our proposed method in fine-grained grasping detection. We find that our method performs relatively weak in part-agnostic success rate and trial cost. We attribute the reason to the fact that our method constrains the sampling region, and some part region is difficult for grasping, which reduces the whole part-agnostic success rate and costs more time to sample until the terminal condition. To verify our suppose, we propose a new language mode **part_specific**, in which human specifies one grasping part for each object. From TABLE VII, we can see that our PIONEER improves broadly in all metrics with knowledge from human intervention. ### _Physical Robot Experiments_ To evaluate the effectiveness of our proposed method in the real world, we deployed our models on a real robot system to realize part-aware grasping following human instruction, which includes affordance and intentions. We choose a single-arm robot Kinova Jaco 7DOF with three fingers to perform manipulation. An eye-in-hand camera Intel RealSense SR300 is fixed on the wrist of end-effector. The system is deployed on a PC running Ubuntu 18.04 and ROS Melodic with one Intel Core i7-8700K and one NVIDIA Geforce GTX 1080Ti GPU. The intrinsic and extrinsic parameters of the camera are calibrated. We select our PIONEER model under the training configuration of part-wise (**Split Mode**) and full_data (**Language Mode**). Since our proposed method is to operate in part wise, it requires a more fine-grained perception of the target object. For point cloud collection, we design a multi-view (four) policy to collect each view point cloud and transform into robot base frame. All viewpoint point cloud are merged by ICP to get a relatively complete representation of the target object. We select three categories of household objects. Two (mug and table) are seen in our Lang-SHAPE, and another one (hammer) is unseen. 
The object is randomly placed on the table, multi-view point cloud collection is to obtain the outline of the object, and then the merged point cloud is fed into our PIONEER. The output of PIONEER is a grasp pose \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline **Split Mode** & **Model** & **Language Mode** & **Accuracy** & **Part avg IoU** & **Class avg IoU** & **Instance avg IoU** & **Language Encoder** \\ \hline \multirow{4}{*}{Part-wise} & \multirow{4}{*}{Baseline 2} & object\_unknown & 0.8616 & 0.4858 & 0.5056 & 0.6405 & \multirow{4}{*}{Transformer} \\ & & part\_unknown & 0.8579 & 0.4905 & 0.5017 & 0.6253 & & \\ & & part\_unknown & object\_known & 0.8548 & 0.4888 & 0.5044 & 0.6178 & \\ \cline{2-7} & \multirow{4}{*}{PIONEER (ours)} & \multirow{4}{*}{PIONEER (ours)} & \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & 0.9381 & 0.7009 & 0.7200 & 0.8089 \\ & & object\_unknown & 0.9210 & 0.6316 & 0.6530 & 0.7761 & \\ & & part\_unknown & **0.9122** & **0.6559** & **0.6820** & **0.7583** & \\ & & part\_unknown & 0.9158 & 0.6740 & 0.7011 & 0.7704 & \\ \cline{2-7} & \multirow{4}{*}{PIONEER-TS (ours)} & \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & 0.9158 & 0.6570 & 0.6884 & 0.7818 & \\ \cline{2-7} & & object\_unknown & 0.8894 & 0.5445 & 0.5778 & 0.7074 & \\ \cline{2-7} & & part\_unknown & 0.8427 & 0.4483 & 0.4783 & 0.6019 & \\ \cline{2-7} & \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & 0.8486 & 0.4761 & 0.4974 & 0.6060 & \\ \cline{2-7} & & known\_all & 0.9381 & 0.6943 & 0.7287 & 0.8116 & \\ \cline{2-7} & \multirow{4}{*}{PIONEER (ours)} & \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & 0.9230 & 0.6569 & 0.6782 & 0.7776 & \\ \cline{2-7} & & part\_unknown & **0.9118** & **0.6467** & **0.6754** & **0.7641** & \\ \cline{2-7} & & part\_unknown & 0.9165 & 0.6548 & 0.6936 & 0.7709 & \\ \cline{2-7} & \multirow{4}{*}{ \begin{tabular}{} \end{tabular} } & 0.9203 & 0.6868 & 0.7032 & 0.7853 & \\ \cline{2-7} & & & & & & & \\ \hline \hline \end{tabular} \end{table} TABLE V: Comparisons of inference performance with different corrupted languages. For **Split Mode**, the top half is in part wise. The bottom half is in object wise. For **Language Encoder**, Baseline 2 uses Transformer while our PIONEER uses BERT. \begin{table} \begin{tabular}{l l l l l l} \hline \hline **Model** & **Split Mode** & **Language Mode** & **Success Rate** & \begin{tabular}{} \end{tabular} \\ \hline Baseline 3 & part-wise & full\_data & 0.2511 & **0.633** & **12.1644** \\ \hline PIONEER (ours) & part-wise & full\_data & **0.4207** & 0.4876 & 15.5332 \\ \hline Baseline 3 & object-wise & full\_data & 0.2546 & **0.6327** & 11.4371 \\ \hline PIONEER (ours) & object-wise & full\_data & 0.4385 & 0.4992 & 16.1895 \\ PIONEER (ours) & object-wise & part-specific & **0.5258** & 0.5843 & **0.8096** \\ \hline \hline \end{tabular} \end{table} TABLE VII: Comparisons of fine-grained grasping detection. on robot base frame. For **Q6**, as shown in Fig. 3, our real robot experiments indicate the effectiveness of our proposed method in fine-grained grasping with human instruction including object, part, and affordance. More experiments are available in the attached video. ## VII Conclusion We investigated part-level affordance on fine-grained robotic grasping. The Lang-SHAPE dataset is constructed to facilitate the investigation, and a 3D part language grounding and a part-aware grasp pose detection model are proposed to allow fine-grained robotic grasping. 
Experiments show that our proposed method outperforms the baselines in 3D part language grounding, affordance inference, and generalization, and physical robot experiments demonstrate its effectiveness in the real world. These results show the promise of using pre-trained language models for affordance grounding and fine-grained grasping on a robot.
2310.07447
Generalized solutions to semilinear elliptic equations with measure data
We address an open problem posed by H. Brezis, M. Marcus and A.C. Ponce in: Nonlinear elliptic equations with measures revisited. In: Mathematical Aspects of Nonlinear Dispersive Equations (J. Bourgain, C. Kenig, S. Klainerman, eds.), Annals of Mathematics Studies, 163 (2007). We prove that for any bounded Borel measure $\mu$ on a smooth bounded domain $D\subset\mathbb R^d$ and any asymptotically convex, non-decreasing, non-negative continuous function $g$ on $\mathbb R$, the sequence of solutions to the semilinear equation (P): $-\Delta u+g(u)=\rho_n\ast\mu$ ($\rho_n$ is a mollifier), subject to the homogeneous Dirichlet boundary condition, converges to the function that solves (P) with $\rho_n\ast\mu$ replaced by the reduced measure $\mu^*$ (the metric projection onto the space of good measures). We also provide a corresponding version of this result without the non-negativity assumption on $g$.
Tomasz Klimsiak
2023-10-11T12:50:16Z
http://arxiv.org/abs/2310.07447v2
# Generalized solutions to semilinear elliptic equations with measure data ###### Abstract. We address an open problem posed by H. Brezis, M. Marcus and A.C. Ponce in: _Nonlinear elliptic equations with measures revisited. In: Mathematical Aspects of Nonlinear Dispersive Equations (J. Bourgain, C. Kenig, S. Klainerman, eds.), Annals of Mathematics Studies,_ **163** _(2007)._ We prove that for any bounded Borel measure \(\mu\) on a smooth bounded domain \(D\subset\mathbb{R}^{d}\) and asymptotically convex non-decreasing non-negative continuous function \(g\) on \(\mathbb{R}\) the sequence of solutions to the semi-linear equation (P): \(-\Delta u+g(u)=\rho_{n}*\mu\) (\(\rho_{n}\) is a mollifier) that is subject to homogeneous Dirichlet condition, converges to the function that solves (P) with \(\rho_{n}*\mu\) replaced by the _reduced measure_\(\mu^{*}\) (metric projection onto the space of _good measures_). We also provide a corresponding version of this result without non-negativity assumption on \(g\). e-mail: [email protected] ## 1. Introduction Let \(D\subset\mathbb{R}^{d}\), \(d\geq 2\), be a bounded domain with smooth boundary, \(\mu\) be a bounded Borel measure on \(D\) and * \(f:D\times\mathbb{R}\to\mathbb{R}\) be a Caratheodory function that is non-increasing with respect to the second variable, and \(f(\cdot,y)\in L^{1}(D)\) for any \(y\in\mathbb{R}\). The present paper is concerned with the Dirichlet problem for nonlinear Poisson equation \[-\Delta u=f(\cdot,u)+\mu\quad\text{in }D,\quad u=0\quad\text{on }\partial D. \tag{1.1}\] In 1975 (see the introduction in [2]) Benilan and Brezis discovered that in general under merely condition (H) there may not exist a solution to (1.1) even if \(f\) admits a polynomial growth. It appears that if \(\mu\) is above some level of concentration (determined by the Newtonian capacity) then the study of (1.1) is highly non-trivial and non-existence phenomenon occurs. On the other hand, in many interesting models equations of type (1.1) with polynomial or exponential growth absorption term \(f\) and highly concentrated measure \(\mu\), as Dirac mass, appear (see, e.g., [2, 11] and the references therein). Brezis, Marcus and Ponce [4, page 24] posed the following natural problem. Consider the functions \(u_{n}\) solving the Dirichlet problems \[-\Delta u_{n}=f(\cdot,u_{n})+\rho_{n}*\mu\quad\text{in }D,\qquad u_{n}=0\quad \text{on }\partial D, \tag{1.2}\] where \((\rho_{n})\) is a sequence of smooth mollifiers. What can be said about the convergence of the sequence \((u_{n})\) and, in case of convergence, about the form of the equation satisfied by the limit function? Clearly, it cannot be (1.1) since in general there is no solution to (1.1). It is worth noting here that by Stampacchia's inequality (see (2.2)) and the Rellich-Kondrachov theorem, \((u_{n})\) is always convergent up to a subsequence. It turned out that this is a quite difficult problem and it remained open to this day. The subtlety of the problem is well exhibited by the fact that in general the limit of solutions \((u_{n})\) to (1.2) with \(\rho_{n}*\mu\) replaced by an approximation \((\mu_{n})\) of \(\mu\) in the narrow topology, if exists, may vary depending on the choice of the sequence \((\mu_{n})\) (see [13, Remark 10.1]). 
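For orientation, let us recall the classical example behind the non-existence phenomenon mentioned above (cf. the result of Benilan and Brezis recalled at the beginning of this section, see [2]); we state it here only for illustration. Let \(d\geq 3\), \(f(x,u)=-|u|^{p-1}u\) with \(p\geq 1\), and \(\mu=\delta_{a}\), the Dirac mass at a point \(a\in D\). Then the problem \[-\Delta u+|u|^{p-1}u=\delta_{a}\quad\text{in }D,\qquad u=0\quad\text{on }\partial D,\] admits a solution if and only if \(p<d/(d-2)\). In the supercritical range \(p\geq d/(d-2)\) the regularized problems (1.2) remain uniquely solvable, but their limit cannot solve (1.1), and identifying the equation that the limit does satisfy is precisely the question raised above.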
Interestingly, the problem is simplified when, instead of regularization of \(\mu\), an approximation \((f_{n})\) of \(f\) guaranteeing the unique solvability of the problem \[-\Delta v_{n}=f_{n}(\cdot,v_{n})+\mu\quad\text{in }D,\qquad v_{n}=0\quad \text{on }\partial D,\] is considered. In [3, 4] Brezis, Marcus and Ponce introduced the notion of _reduced measure_. They proved that under the additional assumption They proved that under the additional assumption * \(f(x,y)=0,\ y\leq 0\ m\)-a.e. \(x\in D\) there exists a maximal measure \(\mu^{*,f}\leq\mu\), called the reduced measure, for which there exists a unique solution to \[-\Delta u=f(\cdot,u)+\mu^{*,f}\quad\text{in }D,\qquad u=0\quad\text{on } \partial D, \tag{1.3}\] and moreover, independently of the approximate sequence \((f_{n})\), \(v_{n}\to u^{*,f}\) in \(L^{1}(D)\). This legitimates referring to the unique solution of (1.3) as a generalized solution to (1.1) (see comments preceding [5, Theorem 1]). In [4] the authors called _good measures_ (relative to \(f\)) those bounded (signed) Borel measures for which \(\mu=\mu^{*,f}\). In other words, in their terminology, good measures are exactly those bounded Borel measures for which there exists a solution to (1.1). We denote the class of good measures by \(\mathcal{G}(f)\). The present paper is devoted to the open problem posed by Brezis, Marcus and Ponce. Let us mention that a partial answer to it has been already given in [4, Theorem 4.11], where it is proved that if \(f\) satisfies (B) and additionally \(g:=-f\) is convex and independent of the spatial variable, then \[u_{n}\to u^{*,f}\quad\text{in }L^{1}(D).\] Let \(\mathcal{M}_{b}(D)\) denote the set of (signed) bounded Borel measures on \(D\) equipped with the metric determined by the total variation norm. The main result of the present paper states that if (H) and the following condition: * for any function \(w\in W^{1,q}_{0}(D)\), \(q\in[1,d/(d-1))\) that is non-negative or non-positive in \(D\) the following implication holds: \[\text{if}\quad f(\cdot,w)\in L^{1}(D)\quad\text{then}\quad(f(\cdot,\rho_{n}*w ))_{n\geq 1}\quad\text{is uniformly integrable},\] are satisfied, then \[u_{n}\to u^{\pi,f}\quad\text{in }L^{1}(D),\] where \(u^{\pi,f}\) is the unique solution to the problem \[-\Delta u=f(\cdot,u)+\Pi_{f}(\mu)\quad\text{in }D,\qquad u=0\quad\text{on }\partial D,\] (1.4) and the mapping \[\Pi_{f}:\mathcal{M}_{b}(D)\to\mathcal{G}(f)\] is the unique continuous metric projection onto \(\mathcal{G}(f)\) such that \[\Pi_{f}(\mu+\nu)=\Pi_{f}(\mu)+\Pi_{f}(\nu)\quad\text{for any $\mu,\nu\in\mathcal{M }_{b}(D)$ with $\mu\bot\nu$}.\] In particular, if \(f\) satisfies (B) (or, more generally, if there exists a subsolution to (1.1)), then \(\Pi_{f}(\mu)=\mu^{*,f}\). This means, in particular, that the convergence proved in [4, Theorem 4.11] holds with the convexity assumption on \(g\) replaced by (UI). As a by-product of our results, we have that \((u_{n})\) is convergent as a whole sequence. Let us mention here that if \(g\) is independent of the spatial variable and asymptotically convex (recall that \(g=-f\)), i.e. * there exists a convex function \(\varphi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) such that \[\lim_{u\to\infty}\frac{g(u)}{\varphi(u)}=1,\] then (UI) holds (see Example 3.4). The usefulness of the aforementioned result, beyond its theoretical value, is that it provides practical tools for studying problems of type (1.1). 
This is because we already have a fairly satisfactory knowledge on the objects \(\mathcal{G}(f)\) and \(\Pi_{f}\) involved in (1.4). Firstly, in [9, Corollary 7.3] (see also [8, Theorem 5.13]) it has been proven that \[\overline{\mathcal{A}(f)}=\mathcal{G}(f),\] where the closure is taken in the total variation norm, and \[\mathcal{A}(f):=\{\mu\in\mathcal{M}_{b}(D):f(\cdot,G_{D}\mu)\in L^{1}(D)\}. \tag{1.5}\] Here \(G_{D}\) is Green's function for \(D\) and \(G_{D}\mu(x)=\int_{D}G_{D}(x,y)\,\mu(dy),\,x\in D\). As a result, for any function \(g\) satisfying (H) the following implication holds: \[\mathcal{A}(f)=\mathcal{A}(g)\quad\Rightarrow\quad\Pi_{f}=\Pi_{g}.\] The antecedent of the above implication can be verified directly using (1.5). Secondly, from [9, Theorem 6.23] (see also [4, Theorem 4.15]) it follows that for any \(\mu\in\mathcal{M}_{b}(D)\), \[\Pi_{f}(\mu)=(\mu^{+})^{*,f}-(\mu^{-})^{*,\tilde{f}},\] where \(\tilde{f}(\cdot,y):=-f(\cdot,-y),\,x\in D,y\in\mathbb{R}\). Therefore, to calculate \(\Pi_{f}(\mu)\), it is enough to focus on the reduction operator \(\mu\mapsto\mu^{*,f}\) that has been studied in several papers (see, e.g., [1, 4, 5, 7, 8, 9, 13, 17]). In the proof of the main result of the paper we utilize a series of results on the reduced measures and reduced limits proved in [4, 13] as well as the following fact proved recently in [9, Theorem 7.2]: if \(\mu\in\mathcal{G}(f)\), then \(w_{n}\to G^{D}\mu\) in \(L^{1}(D)\) and \(f(\cdot,w_{n})/n\to 0\) in \(L^{1}(D)\), where \[-\Delta w_{n}=\frac{1}{n}f(\cdot,w_{n})+\mu,\qquad w_{n}=0\quad\text{on $ \partial D$}.\] As a result, \(\nu_{n}:=\frac{1}{n}f(\cdot,w_{n})+\mu\in\mathcal{A}(f),\,n\geq 1\) and \(\|\nu_{n}-\mu\|_{v}\leq\frac{1}{n}\|f(\cdot,w_{n})\|_{L^{1}(D)}\to 0\), where \(\|\mu\|_{v}\) stands for the total variation norm of \(\mu\). ## 2. Notation and basic notions Throughout the paper, we fix a function \(f:D\times\mathbb{R}\to\mathbb{R}\) that satisfies (H). We denote by \(\mathcal{M}_{b}(D)\) the set of all bounded Borel measures on \(D\), and by \(\mathcal{M}_{b}^{+}(D)\) its subset consisting of positive measures \((\mu(A)\geq 0\), \(A\in\mathcal{B}(D))\). For \(\mu\in\mathcal{M}_{b}(D)\) we set \(\|\mu\|_{v}:=|\mu|(D)\), where \(|\mu|\) stands for the total variation measure of \(\mu\) (\(|\mu|=\mu^{+}+\mu^{-}\)). The set \(\mathcal{M}_{b}(D)\) with the norm \(\|\cdot\|_{v}\) is a Banach space. We say that \((\mu_{n})\in\mathcal{M}_{b}(D)\) converges narrowly to \(\mu\in\mathcal{M}_{b}(D)\) if \[\int_{D}\eta\,d\mu_{n}\to\int_{D}\eta\,d\mu,\quad\eta\in C_{b}(D).\] ### Definition of a solution, a priori estimates **Definition 2.1**.: A function \(u\in L^{1}(D)\) is a solution to (1.1) if \(f(\cdot,u)\in L^{1}(D)\) and for any \(\eta\in\mathcal{C}:=\{u\in C^{2}(\overline{D}):u=0\text{ on }\partial D\}\), \[-\int_{D}u\Delta\eta=\int_{D}f(\cdot,u)\eta+\int_{D}\eta\,d\mu.\] Below we recall some equivalent definitions. A function \(u\in L^{1}(D)\) is a solution to (1.1) if and only if (see, e.g., [16, Proposition 6.3]) \(f(\cdot,u)\in L^{1}(D)\), \(u\in W^{1,1}_{0}(D)\) and for any \(\eta\in C^{\infty}_{c}(D)\), \[-\int_{D}u\Delta\eta=\int_{D}f(\cdot,u)\eta+\int_{D}\eta\,d\mu.\] A function \(u\in L^{1}(D)\) is a solution to (1.1) if and only if (see, e.g., [14, Theorem 1.2.2]), \(f(\cdot,u)\in L^{1}(D)\) and for a.e. \(x\in D\), \[u(x)=\int_{D}G_{D}(x,y)f(y,u(y))\,dy+\int_{D}G_{D}(x,y)\,\mu(dy). \tag{2.1}\] Here \(G_{D}\) denotes Green's function for \(D\). The following results are well known. 
**Proposition 2.2**.: 1. _For any solution_ \(w\) _to (_1.1_) and any_ \(q\in[1,d/(d-1))\) _we have_ \[\|w\|_{W^{1,q}_{0}(D)}+\|f(\cdot,w)\|_{L^{1}(D)}\leq C(\|f(\cdot,0)\|_{L^{1}( D)}+\|\mu\|_{TV}),\] (2.2) _where_ \(C\) _depends only on_ \(q,d\) _and_ \(D\)_._ 2. _Let_ \(f_{1},f_{2}\) _satisfy_ (H) _and_ \(\mu_{1},\mu_{2}\in\mathcal{M}_{b}(E)\)_. Let_ \(u_{1},u_{2}\) _be solutions to (_1.1_) with_ \((f,\mu)\) _replaced by_ \((f_{1},\mu_{1}),(f_{2},\mu_{2})\)_, respectively. If_ \(\mu_{1}\leq\mu_{2}\) _and_ \(f_{1}(x,y)\leq f_{2}(x,y)\) _for any_ \(y\in\mathbb{R}\) _and a.e._ \(x\in D\)_, then_ \(u_{1}\leq u_{2}\) _a.e._ Proof.: For (i) see, e.g., [4, Appendix 4B], [16, Proposition 5.1, Proposition 21.5], [8, Proposition 4.8]). For (ii) see, e.g., [8, Proposition 4.2], [4, Corollary 4.B.2]). **Definition 2.3**.: A function \(u\in L^{1}(D)\) is a subsolution (supersolution) to (1.1) if \(f(\cdot,u)\in L^{1}(D)\) and for any \(\eta\in\mathcal{C}^{+}:=\{\eta\in\mathcal{C}:\eta(x)\geq 0,\,x\in D\}\), \[-\int_{D}u\Delta\eta\leq(\geq)\int_{D}f(\cdot,u)\eta+\int_{D}\eta\,d\mu.\] ### Reduced measures We denote by \(\mathcal{G}(f)\) the set of good measures (relative to \(f\)), i.e. the set of all measures \(\mu\in\mathcal{M}_{b}(D)\) for which there exists a solution to (1.1). It is well known that in general \(\mathcal{G}(f)\subsetneq\mathcal{M}_{b}(D)\) (see [2, Remark A.4]). We denote by \(\operatorname{cap}_{H^{1}}\) the Newtonian capacity on \(D\). It is well known (see, e.g., [4, Lemma 4.A.1]) that each measure \(\mu\in\mathcal{M}_{b}(D)\) admits the following unique decomposition \[\mu=\mu_{d}+\mu_{c},\] where \(\mu_{d},\mu_{c}\in\mathcal{M}_{b}(D)\) and \(\mu_{d}\ll\operatorname{cap}_{H^{1}}\), \(\mu_{c}\lcap_{H^{1}}\) (they are called the _diffuse part_ and the _concentrated part_ of \(\mu\), respectively). \(\mathcal{M}_{b}^{0}(D)\) stands for the set of \(\mu\in\mathcal{M}_{b}(D)\) such that \(\mu=\mu_{d}\). By [4, Corollary 4.B.3] (see also [10, Theorem 4.7]), \[\mathcal{M}_{b}^{0}(D)\subset\mathcal{G}(f),\] and as a result, \(L^{1}(D)\subset\mathcal{G}(f)\) (the last inclusion, however, follows directly from [6, 12]). Let \(\mathcal{G}_{\prec\mu}(f)\) denote the set of measures \(\nu\in\mathcal{G}(f)\) such that \(\nu\leq\mu\). If \(\mathcal{G}_{\prec\mu}(f)\neq\emptyset\), then there exists \(\mu^{*,f}\in\mathcal{G}_{\prec\mu}(f)\) such that \[\max\mathcal{G}_{\prec\mu}(f)=\mu^{*,f}\] (see, e.g., [4, Theorem 4.15], [8, Theorem 5.2], [9, Theorem 5.2]). The measure \(\mu^{*,f}\) is called the _reduced measure_. This notion was introduced in 2005 by Brezis, Marcus and Ponce [4] and further generalized to non-local operators in [8, 9]. Since \(f\) is a fixed, throughout the paper we mostly drop the superscript \(f\) on \(\mu^{*,f}\) and write simply \(\mu^{*}\). Occasionally, however, we will use full notation to emphasize the dependence of the reduction operator on \(f\). In the sequel, we frequently use the following properties of the reduction operator: \[(\mu^{+})^{*}=(\mu^{*})^{+},\qquad(\mu_{c})^{*}=(\mu^{*})_{c},\quad(\mu^{*})_ {d}=\mu_{d},\qquad|\mu^{*}|\leq|\mu|. \tag{2.3}\] For proofs we refer to [4, Theorem 4.10, Corollary 4.10] (see also [8, Theorem 5.10, Proposition 5.4], [9, Section 6.2]. ## 3. 
Preparatory results Throughout the paper, we fix a smooth function \(j:\mathbb{R}\to[0,\infty)\) such that \(j(x)>0\) if \(|x|<1\) and \(j(x)=0\) if \(|x|\geq 1\), and we let \[\rho_{n}(x):=cn^{d}j(n|x|),\quad x\in\mathbb{R}^{d}, \tag{3.1}\] where \(c:=1/\int_{0}^{1}j(r)\alpha_{d}(r)\,dr\) and \(\alpha_{d}(r):=(2\pi^{d/2}r^{d-1})/\Gamma(d/2)\) (the surface area of the \(d\)-dimensional sphere of radius \(r>0\)). For further study it will be convenient to introduce the following notion. We denote by \(\mathcal{G}_{\#}(f,\mu)\subset\mathcal{G}(f)\) the set of all \(\nu\) for which there exists a subsequence \((n_{k})\) such that \(u_{n_{k}}\to u\) in \(L^{1}(D)\), where \(u_{n_{k}}\) is the unique solution to \[-\Delta u_{n_{k}}=f(\cdot,u_{n_{k}})+\rho_{n_{k}}*\mu\quad\text{in }D,\qquad u _{n_{k}}=0\quad\text{on }\partial D, \tag{3.2}\] and \(u\) is the unique solution to \[-\Delta u=f(\cdot,u)+\nu\quad\text{in }D,\qquad u=0\quad\text{on }\partial D. \tag{3.3}\] In the remainder of this section, we also assume that \(f\) satisfies condition (B) formulated in the Introduction. **Lemma 3.1**.: _For any \(\nu\in\mathcal{G}_{\#}(f,\mu)\) we have \(\nu\leq\mu^{*}\)._ Proof.: Since \(\nu\in\mathcal{G}_{\#}(f,\mu)\) there exists \((u_{n_{k}})\) and \(u\) as described in (3.2), (3.3). By the definition of a solution, for any \(\eta\in\mathcal{C}\), \[-\int_{D}u_{n_{k}}\Delta\eta-\int_{D}f(\cdot,u_{n_{k}})\eta=\int_{D}\rho_{n_{k} }*\eta\,d\mu.\] By (2.2) and Fatou's lemma, \[-\int_{D}u\Delta\eta-\int_{D}f(\cdot,u)\eta\leq\int_{D}\eta\,d\mu,\quad\eta\in \mathcal{C}^{+}.\] On the other hand, by (3.3), \[-\int_{D}u\Delta\eta-\int_{D}f(\cdot,u)\eta=\int_{D}\eta\,d\nu,\quad\eta\in \mathcal{C}.\] Hence \(\nu\leq\mu\), and consequently \(\nu\leq\mu^{*}\) since \(\nu\in\mathcal{G}(f)\). **Proposition 3.2**.: _Suppose that \(\mathcal{G}_{\#}(f,\mu)=\{\mu^{*}\}\) for any \(\mu\in\mathcal{M}_{b}^{+}(D)\). Then \(\mathcal{G}_{\#}(f,\mu)=\{\mu^{*}\}\) for any \(\mu\in\mathcal{M}_{b}(D)\)._ Proof.: Let \(\mu\in\mathcal{M}_{b}(D)\) and \(\nu\in\mathcal{G}_{\#}(f,\mu)\). By the definition of \(\mathcal{G}_{\#}(f,\mu)\) there exists a subsequence \((n_{k})\) such that \(u_{n_{k}}\to u\) in \(L^{1}(D)\), where \(u\) solves (3.3) and \(u_{n_{k}}\) solves (3.2). Let \(w_{n_{k}}\) be the unique solution to \[-\Delta u=f(\cdot,w_{n_{k}})+\rho_{n_{k}}*\mu^{+}\quad\text{in }D,\qquad w_{n_{k} }=0\quad\text{on }\partial D.\] By the assumption that we made, \(w_{n_{k}}\to w\) in \(L^{1}(D)\), where \(w\) solves \[-\Delta w=f(\cdot,w)+\mu^{+}\quad\text{in }D,\qquad w=0\quad\text{on } \partial D.\] Consequently, by [13, Theorem 7.1], \(0\leq(\mu^{+})^{*}-\nu\leq\mu^{+}-\mu\). Hence \[\mu^{*}=(\mu^{+})^{*}-\mu^{-}\leq\nu.\] This when combined with Lemma 3.1 gives \(\mu^{*}=\nu\). For any \(\mu\in\mathcal{M}_{b}(D)\) and \(x\in D\) we let \[G_{D}\mu(x):=\int_{D}G_{D}(x,y)\,\mu(dy),\] whenever \(\int_{D}G_{D}(x,y)\,|\mu|(dy)<\infty\) and zero otherwise. Let us consider the following set of _admissible measures_: \[\mathcal{A}(f):=\{\mu\in\mathcal{M}_{b}(D):f(\cdot,G_{D}\mu)\in L^{1}(D)\}.\] **Proposition 3.3**.: _Assume (UI). If \(\mu\in\mathcal{A}(f)\), and \(G_{D}\mu\geq 0\), then \(\mathcal{G}_{\#}(f,\mu)=\{\mu\}\)._ Proof.: Let \((n_{k})\) be a subsequence such that \(u_{n_{k}}\to u\) for some \(u\in L^{1}(D)\), where \(u_{n_{k}}\) solves (3.2). Set \(w:=G_{D}\mu\geq 0\). 
By (2.1) and the assumptions made on \(f\), we have \[u_{n_{k}}(x)=[G_{D}f(\cdot,u_{n_{k}})](x)+[G_{D}(\rho_{n_{k}}*\mu)](x)\leq[G_{ D}(\rho_{n_{k}}*\mu)](x)\quad\text{in }D,\ m\text{-a.e.}\] (note that \(f\leq 0\) by (B)). Observe that \(v_{k}:=G_{D}(\rho_{n_{k}}*\mu)\) solves \[-\Delta v_{k}=\rho_{n_{k}}*\mu\quad\text{in }D,\qquad v_{k}=0\quad\text{on } \partial D,\] and \(\rho_{n_{k}}*w\) solves \[-\Delta(\rho_{n_{k}}*w)=\rho_{n_{k}}*\mu\quad\text{in }D,\qquad\rho_{n_{k}}*w\geq 0 \quad\text{on }\partial D.\] Thus, \(v_{k}\leq\rho_{n_{k}}*w\), which combined with the previous inequality yields \[u_{n_{k}}(x)\leq[\rho_{n_{k}}*w](x)\quad\text{in }D,\;m\text{-a.e.}\] Since \(\mu\) was assumed to be in \(\mathcal{A}(f)\), we have \(w\in L^{1}(D)\). By (UI), \((f(\cdot,\rho_{n_{k}}*w))\) is uniformly integrable. Consequently, by the Vitali convergence theorem, \(f(\cdot,u_{n_{k}})\to f(\cdot,u)\) in \(L^{1}(D)\) (here we also used condition (B)). Therefore, letting \(k\to\infty\) in the equation \[-\int_{D}u_{n_{k}}\Delta\eta=\int_{D}f(\cdot,u_{n_{k}})\eta+\int_{D}(\rho_{n_{k }}*\eta)\,d\mu,\quad\eta\in\mathcal{C},\] shows that \(u\) solves (1.1). Since the subsequence \((n_{k})\) was chosen arbitrarily, we conclude that \(\mathcal{G}_{\#}(f,\mu)=\{\mu\}\). Consider the following condition (weaker than (AC)) * there exists a convex function \(\varphi:\mathbb{R}^{+}\to\mathbb{R}^{+}\), and \(M,c_{1},c_{2}>0\) such that \[c_{1}\varphi(x)\leq g(x)\leq c_{2}\varphi(x),\quad x\geq M.\] **Example 3.4**.: In the present example we show that (A) implies (UI). Assume that (A) holds and \(f(w)\in L^{1}(D)\) for some non-negative \(w\in L^{1}(D)\). By the de la Vallee-Poussin lemma (see e.g. [15]) there exists a convex increasing function \(\psi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) such that \(\psi(0)=0\), \(\lim_{x\to\infty}\psi(x)/x=\infty\), \(\psi(x+y)\leq a\psi(x)+a\psi(y)\), \(x,y\in\mathbb{R}^{+}\) for some \(a\geq 1\), and \[\int_{D}\psi(g(w))<\infty. \tag{3.4}\] By (A) \[c_{1}\varphi(u)-m\leq g(u)\leq c_{2}\varphi(u)+m,\quad u\geq 0,\] where \(m:=\sup_{|u|\leq M}\varphi(u)+\sup_{|u|\leq M}g(u)\). Thus, \[\psi\circ g(\rho_{n}*w) \leq c(a,c_{2})\Big{(}\psi\circ\varphi(\rho_{n}*w)+\psi(m)\Big{)}\] \[\leq c(a,c_{2})\Big{(}\rho_{n}*[\psi\circ\varphi(w)]+\psi(m) \Big{)}\leq c(a,c_{1},c_{2})\Big{(}\rho_{n}*[\psi\circ g(w)]+\psi(m)\Big{)}.\] This combined with (3.4) yields \[\sup_{k\geq 1}\int_{D}\psi\circ g(\rho_{n}*w)<\infty.\] By the de la Vallee-Poussin lemma again \((f(\rho_{n}*w))_{n\geq 1}\) is uniformly integrable. A function \(\phi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) is said to satisfy \(\Delta_{2}\)-condition if there exists \(C\geq 0\) such that \(\phi(2x)\leq C\phi(x)\), \(x\geq 0\). **Example 3.5**.: Observe that if \(g\) satisfies \(\Delta_{2}\)-condition and there exists a strictly increasing convex function \(\phi\), with \(\phi(0)=0\), such that \(h(x):=g(x)/\phi(x)\) is increasing, then (A) holds. Indeed, assume first additionally that \(\phi(x)=x\), \(x\geq 0\). Observe that the function \[\varphi_{1}(x):=\int_{0}^{x}g(y)y^{-1}\,dy,\quad x\geq 0\] is increasing and convex. Furthermore, \[g(x)\geq\varphi_{1}(x)\geq\int_{x/2}^{x}g(y)y^{-1}\,dy\geq g(x/2)\geq C^{-1}g (x).\] Now, applying the above inequality to the function \(g_{\phi}\) in place of \(g\), where \[g_{\phi}(x):=g(\phi^{-1}(x)),\quad x\geq 0,\] we get \[C^{-1}\varphi_{1}(\phi(x))\leq g(x)\leq\varphi_{1}(\phi(x)),\quad x\geq 0.\] Thus, letting \(\varphi(x):=\varphi_{1}(\psi(x))\) we get (A). ## 4. 
Main results **Theorem 4.1**.: _Let \(\mu\in\mathcal{M}_{b}(D)\) and (B), (UI) hold. Let \(u_{n}\) be the unique solution to (1.2) and \(u^{*}\) be the unique solution to (1.3). Then_ \[\lim_{n\to\infty}\|u_{n}-u^{*}\|_{L^{1}(D)}=0.\] Proof.: By Proposition 3.2, without loss of generality we may assume that \(\mu\in\mathcal{M}_{b}^{+}(D)\). _Step 1._ Assume additionally that \(\mu\in\mathcal{G}(f)\). By [9, Corollary 7.3], for any \(k\geq 1\) there exists a function \(g_{k}\in L^{1}(D)\) such that \(\|g_{k}\|_{L^{1}(D)}\leq 1/k\) and \[\mu-g_{k}\in\mathcal{A}(f),\quad G_{D}(\mu-g_{k})\geq 0.\] Let \(u\) be the solution to (1.1), \(u_{n}^{k}\) be the unique solution to \[-\Delta u_{n}^{k}=f(\cdot,u_{n}^{k})+\rho_{n}*(\mu-g_{k})\quad\text{in }D, \qquad u_{n}^{k}=0\quad\text{on }\partial D,\] and \(u^{k}\) be the unique solution to \[-\Delta u^{k}=f(\cdot,u^{k})+\mu-g_{k}\quad\text{in }D,\qquad u^{k}=0\quad \text{on }\partial D.\] By (2.2), \[\|u-u_{n}\|_{L^{1}(D)}\leq\|u-u^{k}\|_{L^{1}(D)}+\|u^{k}-u_{n}^{k}\|_{L^{1}(D) }+\|u_{n}^{k}-u_{n}\|_{L^{1}(D)}\leq\frac{C}{k}+\|u^{k}-u_{n}^{k}\|_{L^{1}(D)}.\] For fixed \(k\geq 1\), by Proposition 3.3, \(\|u^{k}-u_{n}^{k}\|_{L^{1}(D)}\to 0\) as \(n\to\infty\). One easily concludes now that \(\|u-u_{n}\|_{L^{1}(D)}\to 0\), which shows that \(\mathcal{G}_{\#}(f,\mu)=\{\mu\}\). _Step 2._ The general case. Let \(\mu\in\mathcal{M}_{b}^{+}(D)\). Since \(\mu^{*}\leq\mu\), we have \(\rho_{n}*\mu^{*}\leq\rho_{n}*\mu\), \(n\geq 1\). Let \(w_{n}\) be the solution to \[-\Delta w_{n}=f(\cdot,w_{n})+\rho_{n}*\mu^{*}\quad\text{in }D,\qquad w_{n}=0 \quad\text{on }\partial D.\] Let \(\nu\in\mathcal{G}_{\#}(f,\mu)\) and \((n_{k})\) be a subsequence such that \(u_{n_{k}}\to v\) in \(L^{1}(D)\) and \(v\) solves (3.3). By _Step 1_, \(w_{n_{k}}\to u^{*}\). Hence, by [13, Theorem 7.1], \(0\leq\nu-\mu^{*}\), so \(\nu=\mu^{*}\) by Lemma 3.1. In [9, Section 6.4] it is shown that there exists a continuous metric projection onto \(\mathcal{G}(f)\) \[\Pi_{f}:\mathcal{M}_{b}(D)\to\mathcal{G}(f),\] i.e. \[\inf_{\nu\in\mathcal{G}(f)}\|\mu-\nu\|_{v}=\|\mu-\Pi_{f}(\mu)\|_{v}\] such that \(\Pi_{f}(\mu+\nu)=\Pi_{f}(\mu)+\Pi_{f}(\nu)\) for any \(\mu,\nu\in\mathcal{M}_{b}(D)\), with \(\mu\bot\nu\). Moreover, there is at most one continuous metric projection onto \(\mathcal{G}(f)\) having this property. Furthermore, we have shown that \(\Pi_{f}\) admits the following representation \[\Pi_{f}(\mu)=(\mu^{+})^{*,f}-(\mu^{-})^{*,\tilde{f}}. \tag{4.1}\] where \(\tilde{f}(x,y)\coloneqq-f(x,-y)\), \(y\in\mathbb{R}\), \(x\in D\). **Lemma 4.2**.: _Let \(\mu\in\mathcal{M}_{b}^{+}(D)\). Then_ * _For any functions_ \(f_{1},f_{2}\) _satisfying_ (H)_, and such that_ \(|f_{1}-f_{2}|\leq h\) _for some_ \(h\in L^{1}(D)\)_, we have_ \(\mu^{*,f_{1}}=\mu^{*,f_{2}}\)_._ * \(\mu^{*,f}=\mu^{*,-f^{-}}\)_._ Proof.: Observe that \(\mathcal{A}(f_{1})=\mathcal{A}(f_{2})\). Hence, by [9, Corollary 7.3], \(\mathcal{G}(f_{1})=\mathcal{G}(f_{2})\). Since \(\mu^{*,f_{i}}\) is the unique (see [4, Corollary 4.6]) metric projection onto \(\mathcal{G}(f_{i})\), \(i=1,2\), we get (i). As for (ii), we claim that for its proof we may assume without loss of generality that \(f(\cdot,0)\equiv 0\). Indeed, suppose that (ii) holds with \(f\) replaced by \(\phi\) satisfying (H) and such that \(\phi(\cdot,0)\equiv 0\). Set \(f_{0}(x)\coloneqq f(x,0)\), \(x\in D\). Then by (i), \[\mu^{*,f}=\mu^{*,f-f_{0}}=\mu^{*,-(f-f_{0})^{-}}=\mu^{*,-f^{-}}.\] This establishes the claim. Assume additionally that \(f(\cdot,0)\equiv 0\). 
Let \(u_{n}\) be the unique solution to \[-\Delta u_{n}=f(\cdot,u_{n})\vee(-n)+\mu\quad\text{in }D,\qquad u_{n}=0\quad \text{on }\partial D,\] By [4, Theorem 4.1], \(u_{n}\to v\) in \(L^{1}(D)\) and \[-\Delta v=f(\cdot,v)+\mu^{*,f}\quad\text{in }D,\qquad v=0\quad\text{on } \partial D.\] On the other hand, by Proposition 3.2 and the fact that \(\mu\) is positive, \(u_{n}\) solves the problem \[-\Delta u_{n}=-f^{-}(\cdot,u_{n})\vee(-n)+\mu\quad\text{in }D,\qquad u_{n}=0 \quad\text{on }\partial D.\] Hence, by [4, Theorem 4.1] again, \(u_{n}\to w\), where \[-\Delta w=f(\cdot,w)+\mu^{*,-f^{-}}\quad\text{in }D,\qquad w=0\quad\text{on } \partial D.\] Clearly \(v=w\), which implies that \(\mu^{*,f}=\mu^{*,-f^{-}}\). This completes the proof of (ii). Combining Lemma 4.2(ii) with (4.1), we obtain \[\Pi_{f}(\mu)=(\mu^{+})^{*,-f^{-}}-(\mu^{-})^{*,\overline{f^{+}}}. \tag{4.2}\] Before proceeding to the proof of the main theorem, let us make the following simple observations. Let \(\mu\in\mathcal{G}(f)\). Then there exists a unique solution \(u\) to (1.1). We therefore have \[-\Delta u=f^{+}(\cdot,u)+(\mu-f^{-}(\cdot,u)),\quad\text{in }D,\qquad u=0 \quad\text{on }\partial D,\] and \[-\Delta u=-f^{-}(\cdot,u)+(\mu+f^{+}(\cdot,u)),\quad\text{in }D,\qquad u=0 \quad\text{on }\partial D.\] Thus \(\mu-f^{-}(\cdot,u)\in\mathcal{G}(f^{+})\), \(\mu+f^{+}(\cdot,u)\in\mathcal{G}(-f^{-})\), so by [4, Corollary 4.7], \(\mu\in\mathcal{G}(f^{+})\), \(\mu\in\mathcal{G}(-f^{-})\). We may also write \[-\Delta(-u)=\tilde{f}(\cdot,-u)-\mu,\quad\text{in }D,\qquad-u=0\quad\text{on } \partial D,\] which shows that \(\mu\in\mathcal{G}(f)\) if and only if \(-\mu\in\mathcal{G}(\tilde{f})\). **Theorem 4.3**.: _Let \(\mu\in\mathcal{M}_{b}(D)\). Let \(u_{n}\) be the unique solution to (1.2) and \(u^{\pi}\) be the unique solution to_ \[-\Delta u=f(\cdot,u)+\Pi_{f}(\mu)\quad\text{in }D,\qquad u=0\quad\text{on } \partial D.\] _Then_ \[\lim_{n\to\infty}\|u_{n}-u^{\pi}\|_{L^{1}(D)}=0.\] Proof.: Let \(\nu\in\mathcal{G}_{\#}(f,\mu)\). By the very definition of the class \(\mathcal{G}_{\#}(f,\mu)\), there exist a subsequence \((n_{k})\) and functions \(u_{n_{k}},u\) solving (3.2) and (3.3), respectively, such that \(u_{n_{k}}\to u\) in \(L^{1}(D)\). Let \((v_{n_{k}})\) be the sequence of functions solving \[-\Delta u=-f^{-}(\cdot,v_{n_{k}})+\rho_{n_{k}}*\mu\quad\text{in }D,\qquad v_{n_{k}}=0 \quad\text{on }\partial D,\] and \((w_{n_{k}})\) be the sequence of functions solving \[-\Delta w_{n_{k}}=f^{+}(\cdot,w_{n_{k}})+\rho_{n_{k}}*\mu\quad\text{in }D, \qquad w_{n_{k}}=0\quad\text{on }\partial D.\] Observe that \(-w_{n_{k}}\) solves \[-\Delta(-w_{n_{k}})=\widehat{f^{+}}(\cdot,-w_{n_{k}})+\rho_{n_{k}}*(-\mu)\quad \text{in }D,\qquad-w_{n_{k}}=0\quad\text{on }\partial D.\] By Proposition 2.2, \(v_{n_{k}}\leq u_{n_{k}}\leq w_{n_{k}}\), \(k\geq 1\). By Theorem 4.1, \(v_{n_{k}}\to v\) and \(-w_{n_{k}}\to-w\), where \[-\Delta v=-f^{-}(\cdot,v)+\mu^{*,-f^{-}}\quad\text{in }D,\qquad v=0\quad\text{on } \partial D,\] and \[-\Delta(-w)=\widehat{f^{+}}(\cdot,-w)+(-\mu)^{*,\widehat{f^{+}}}\quad\text{in }D, \qquad-w=0\quad\text{on }\partial D.\] By the inverse maximum principle (see [13, Proposition 7.2]), \[\mu^{*,-f^{-}}\leq\nu\leq-(-\mu)^{*,\widehat{f^{+}}}.\] By (2.3), \[(\mu^{*,-f^{-}})_{d}=\mu_{d},\quad\big{(}-(-\mu)^{*,\widehat{f^{+}}})\big{)}_ {d}=\mu_{d}.\] Consequently, \(\nu_{d}=\mu_{d}\). 
Furthermore, by [4, Lemma 4.1] (see also [8, Theorem 5.2]), \[(\mu^{*,-f^{-}})_{c}=(\mu_{c}^{+})^{*,-f^{-}}-\mu_{c}^{-},\quad(-\mu)_{c}^{*,\widehat{f^{+}}}=(\mu_{c}^{-})^{*,\widehat{f^{+}}}-\mu_{c}^{+},\] which implies that \[(\mu_{c}^{+})^{*,-f^{-}}\leq\nu_{c}^{+}\leq\mu_{c}^{+},\quad(\mu_{c}^{-})^{*,\widehat{f^{+}}}\leq\nu_{c}^{-}\leq\mu_{c}^{-}.\] Since \(u\) solves (3.3), we have \(\nu\in\mathcal{G}(f)\). Hence \(\nu\in\mathcal{G}(-f^{-})\) and \(-\nu\in\mathcal{G}(\widehat{f^{+}})\) (see the comments preceding the theorem). Therefore, by [4, Theorem 4.6'] (see also [8, Theorem 5.11]), \(\nu_{c}^{+}\in\mathcal{G}(-f^{-})\) and \(\nu_{c}^{-}\in\mathcal{G}(\widehat{f^{+}})\). As a result, \[\nu_{c}^{+}=(\mu_{c}^{+})^{*,-f^{-}},\quad\nu_{c}^{-}=(\mu_{c}^{-})^{*,\widehat{f^{+}}}.\] Hence, by (4.2), \[\nu=\nu_{d}+\nu_{c}=\mu_{d}+\nu_{c}=\mu_{d}+(\mu_{c}^{+})^{*,-f^{-}}-(\mu_{c}^{-})^{*,\widehat{f^{+}}}=\Pi_{f}(\mu).\] This concludes the proof of the theorem. ### Acknowledgements This work was supported by the Polish National Science Centre (Grant No. 2017/25/B/ST1/00878).
2302.08565
Variance Sum Rule for Entropy Production
Entropy production is the hallmark of nonequilibrium physics, quantifying irreversibility, dissipation, and the efficiency of energy transduction processes. Despite many efforts, its measurement at the nanoscale remains challenging. We introduce a variance sum rule for displacement and force variances that permits us to measure the entropy production rate $\sigma$ in nonequilibrium steady states. We first illustrate it for directly measurable forces, such as an active Brownian particle in an optical trap. We then apply the variance sum rule to flickering experiments in human red blood cells. We find that $\sigma$ is spatially heterogeneous with a finite correlation length and its average value agrees with calorimetry measurements. The VSR paves the way to derive $\sigma$ using force spectroscopy and time-resolved imaging in living and active matter.
I. Di Terlizzi, M. Gironella, D. Herraez-Aguilar, T. Betz, F. Monroy, M. Baiesi, F. Ritort
2023-02-16T20:15:54Z
http://arxiv.org/abs/2302.08565v2
# Variance Sum Rule for Entropy Production ###### Abstract Nonequilibrium steady states (NESS) pervade nature, from climate dynamics [1] to living cells and active matter [2; 3]. A key quantity is the entropy production rate at which energy is dissipated to the environment, which is positive by the second law of thermodynamics [4; 5; 6]. Despite its relevance, entropy production measurements remain challenging, especially in microscopic systems with stochastic and spatially varying fluctuations and limited access to microscopic variables [7; 8; 9; 10]. We introduce a variance sum rule (VSR) for displacement and force variances that permits us to measure the local entropy production rate \(\sigma\) directly. We illustrate the VSR for an active Brownian particle in an optical trap where the \(\sigma\) varies over three decades 10-\(10^{4}k_{B}T/s\). We apply the VSR to human red blood cells (RBCs) in experiments with laser optical tweezers and ultrafast life-imaging microscopy. We find that \(\sigma\) is spatially heterogeneous with correlation length \(\xi=0.6(2)\mu m\) and heat flux density \(j_{\sigma}\sim 10^{4}k_{B}T/(s\cdot\mu m^{2})\). Our estimate \(\sigma_{\rm{RBC}}\sim 10^{6}k_{B}T/s\) per single RBC agrees with macroscopic calorimetry measurements. The VSR sets a new resource for exploiting fluctuations to measure entropy production rates across scales in active and living matter. The entropy production rate \(\sigma\) determines the efficiency of energy transduction in classical and quantum systems [11; 12], and the energetic costs and breakdown of detailed balance in living cells [13; 14; 15; 16; 17; 18]. It is an elusive quantity when forces and currents are experimentally unaffordable. Bounds can be obtained from time irreversibility [19; 20; 21; 22; 23; 24], the thermodynamic uncertainty relation [25; 26; 27; 28; 29; 30] and coarse-graining [31; 32; 33; 34; 35]. Here, we introduce a variance sum rule (VSR) to derive \(\sigma\) in experiments where a measurement probe is in contact with a system in a NESS (Fig. 1A). The total force acting on the probe \(F_{t}\) equals the sum of the force exerted by the measurement device, \(F_{t}^{M}\), plus a probe-system interaction, \(F_{t}^{I}\), \(F_{t}=F_{t}^{M}+F_{t}^{I}\) (arrows in Fig. 1A). In most experimental settings, \(F_{t}^{I}\) is out of reach, so \(F_{t}\) and \(\sigma\) cannot be directly measured. Our approach focuses on how observables \(Q_{t}\) on average spread in time, as quantified by their variance \(\mathcal{V}_{Q}(t)=\overline{Q_{t}^{2}}-\overline{Q_{t}}^{2}\) with \(\overline{(\cdot)}\) the dynamical average in the NESS. The VSR for position and force fluctuations in a probe of mobility \(\mu\) and diffusivity \(D\) reads \[\mathcal{V}_{\Delta x}(t)+\mu^{2}\mathcal{V}_{\Sigma_{F}}(t)=2Dt+\mathcal{S}(t )\,, \tag{1}\] where the lhs includes the variances of the displacements \(\Delta x_{t}=x_{t}-x_{0}\), and of time-cumulative forces (\(\Sigma_{F}(t)=\int_{0}^{t}dsF_{s}\)). The total variance \(\mathcal{V}_{T}(t)=\mathcal{V}_{\Delta x}(t)+\mu^{2}\mathcal{V}_{\Sigma_{F}}(t)\) equals the free diffusion term \(2Dt\) plus a nonequilibrium contribution \(\mathcal{S}(t)\) denoted as excess variance, \[\mathcal{S}(t)=2\mu\int_{0}^{t}ds\left[C_{xF}(s)-C_{Fx}(s)\right] \tag{2}\] that measures the breakdown of time-reversal symmetry, with \(C_{AB}(s)=\overline{A_{s}B_{0}}-\overline{A}_{s}\,\overline{B}_{0}\) the correlation function in the NESS. In equilibrium, \(\mathcal{S}(t)=0\) due to time-reversal symmetry. 
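As an illustration of how Eqs. (1)-(2) can be evaluated in practice (a minimal sketch, not part of the original analysis), the following snippet estimates both sides of the VSR from discretely sampled steady-state traces \(x_t\) and \(F_t\); the sampling step, the mobility value, and the array names are assumptions made only for the example.

```python
import numpy as np

def vsr_curves(x, F, dt, mu, kBT, max_lag):
    """Estimate both sides of the VSR, Eq. (1), from steady-state traces
    x[t] and F[t] sampled every dt seconds.
    Returns lag times t, the total variance V_dx(t) + mu^2 * V_SigmaF(t),
    and the free-diffusion line 2*D*t; their difference estimates S(t) of Eq. (2)."""
    D = mu * kBT                                         # Einstein relation
    SigmaF = np.concatenate(([0.0], np.cumsum(F) * dt))  # running time-integral of F
    lags = np.arange(1, max_lag)
    V_dx = np.array([np.var(x[L:] - x[:-L]) for L in lags])
    V_SF = np.array([np.var(SigmaF[L:] - SigmaF[:-L]) for L in lags])
    t = lags * dt
    return t, V_dx + mu**2 * V_SF, 2.0 * D * t
```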
Figure 1B illustrates the VSR for a generic NESS. The proof of (1) for stochastic diffusive systems is given in section S1, Supp. Info. The VSR relates variances of fluctuating variables to \(\sigma\) (Sec.S2, Supp. Info.). From (2) one can prove that \(\sigma\) depends on the convexity of the excess variance \(\mathcal{S}(t)\) at \(t=0\), \[\sigma=\frac{v^{2}}{\mu}+\frac{1}{4\mu}\partial_{t}^{2}\mathcal{S}|_{t=0} \tag{3}\] where \(v=\overline{x}\) is the particle's average velocity and \(\sigma\) is expressed in power units (e.g. \(k_{B}T/s\)). By inserting (2) in (3) we derive the novel formula for the rate of entropy production, \[\sigma=\frac{v^{2}}{\mu}+\frac{1}{4\mu}\partial_{t}^{2}\mathcal{V}_{\Delta x }|_{t=0}+\frac{\mu}{2}\mathcal{V}_{F}\,, \tag{4}\] where \(\mathcal{V}_{F}=\overline{F^{2}}-\overline{F}^{2}\) is the variance of the force. To illustrate the VSR, we consider two examples of a NESS where \(F_{t}\) equals the force in the measurement device, \(F_{t}=F_{t}^{M}\), and \(F_{t}^{I}=0\). The first NESS is an optically trapped colloidal particle dragged through water (friction coefficient \(\gamma=1/\mu\)) at speed \(v\). Bead's dynamics can be analytically solved, and the VSR (1) verified (Sec.S3, Supp. Info.). Equation (4) readily follows with \(\mathcal{S}=0\) and \(\sigma=\gamma v^{2}\), as expected. Figure 1C shows the experimental validation of the VSR (1). The right inset shows measurements of \(\sigma-\gamma v^{2}\) for several realizations using (4), which give \(\sigma-\gamma v^{2}=-5(7)k_{B}T/s\). Notice that \(\mathcal{S}=0\) implies that the two rightmost terms in (4) are of equal magnitude but opposite sign compensating each other, \(\mu\mathcal{V}_{F}=-\frac{1}{2\mu}\partial_{t}^{2}\mathcal{V}_{\Delta x}|_{t=0 }=k_{B}T/\tau_{r}>0\) with \(\tau_{r}=\gamma/k=0.35ms\) the bead's relaxation time (\(k=70pN/\mu m\) being the trap stiffness). Notice that the value \(\mu\mathcal{V}_{F}\sim 3\cdot 10^{3}k_{B}T/s\) is almost three orders of magnitude larger than \(\sigma-\gamma v^{2}\) (\(\pm 7k_{B}T/s\)). The results \(\mathcal{S}=0\) and \(\sigma=\gamma v^{2}\) are not restricted to a harmonic well but hold for an arbitrary time-dependent potential \(U(x-vt)\). This gives a reversed thermodynamic uncertainty relation [29, 30] for the work exerted on the bead by the optical trap, \(W_{t}=v\Sigma_{F}(t)=v\int_{0}^{t}dsF_{s}\), and an upper bound for \(\sigma\) (Sec.S4, Supp. Info.), \[\frac{\sigma}{k_{B}T}\leq\frac{2\overline{W_{t}}^{2}}{t\,\mathcal{V}_{W}(t)}\,. \tag{5}\] In Fig 1C (left inset), we experimentally test (5). The upper bound becomes tight for \(t\gg\tau_{r}\), the difference between two terms in (5) vanishing like \(\tau_{r}/t\). The second NESS we consider is the stochastic switching trap (SST) [36], where an active force is applied to an optically trapped bead by randomly switching the trap position \(\lambda_{t}\) between two values (\(\lambda_{+},\lambda_{-}\)) separated by \(\Delta\lambda=\lambda_{+}-\lambda_{-}\) (Fig. 2A). Jumps occur at exponentially distributed times with switching rates \(w_{+},w_{-}\) at each position. The ratio \(w_{-}/w_{+}=q/(1-q)\) defines the probability \(q\) of the trap to be at position \(\lambda_{+}\). Figure 2B shows the measured bead's position \(x_{t}\) and force \(F_{t}=k(\lambda_{t}-x_{t})\) for three cases with \(q=1/2\) and varying \(\Delta\lambda\). The bead follows the movement of the trap (top), quickly relaxing to its new equilibrium trap position at every jump (force spikes, bottom). 
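To connect the SST description above with the estimators of Eqs. (1) and (4), the following minimal Euler-Maruyama sketch (an illustration, not the acquisition code used in the experiments) generates synthetic position and force traces analogous to Fig. 2B; all numerical values are assumptions of the same order of magnitude as those quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only (same order of magnitude as quoted in the text)
kBT  = 4.11          # pN*nm, room temperature
k    = 0.07          # trap stiffness, pN/nm (70 pN/um)
mu   = 4.0e4         # bead mobility, nm/(pN*s)
D    = mu * kBT      # diffusivity, nm^2/s
dlam = 280.0         # separation between the two trap positions, nm
wp = wm = 10.0       # switching rates w+ and w-, 1/s  (q = 1/2)
dt, nsteps = 1e-5, 2_000_000

x, lam = 0.0, dlam / 2.0                 # bead and trap positions
xs, Fs = np.empty(nsteps), np.empty(nsteps)
for i in range(nsteps):
    F = k * (lam - x)                    # instantaneous optical force
    x += mu * F * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    rate = wp if lam > 0 else wm         # telegraph switching of the trap
    if rng.random() < rate * dt:
        lam = -lam
    xs[i], Fs[i] = x, F
# xs and Fs can now be passed to the variance estimators of Eq. (1) or Eq. (4).
```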
Figure 2C shows the total variance, \(\mathcal{V}_{T}(t)=\mathcal{V}_{\Delta x}(t)+\mu^{2}\mathcal{V}_{\Sigma_{F}} (t)\). \(\mathcal{V}_{T}\) deviates from \(2Dt\) (dashed line) between \(10^{-4}\)s and 1s, showing that \(\mathcal{S}\neq 0\) is comparable to \(\mathcal{V}_{T}\) (notice the log-log scale). The SST model is analytically solvable (Sec.S5, Supp. Info.), giving expressions for \(\mathcal{V}_{\Delta x}(t),\mathcal{V}_{\Sigma_{F}}(t)\) and \(\mathcal{S}(t)\). For the latter, we find \[\mathcal{S}(t)=4(\Delta\lambda)^{2}q(1-q)\frac{\alpha(1-e^{-w_{r}t})-\alpha^{ 2}(1-e^{-wt})}{1-\alpha^{2}} \tag{6}\] with \(w=w_{+}+w_{-}\), \(\alpha=w_{r}/w\), and \(w_{r}=1/\tau_{r}=k/\gamma\) (bead's relaxation rate for a resting trap). In Fig. 2C we test the VSR and (6) for three NESS conditions. The inset shows the two terms contributing to the total variance \(\mathcal{V}_{T}\). For large times, \(\mathcal{S}\) converges to a finite value, and \(\mathcal{V}_{T}\) merges with the equilibrium line \(2Dt\) (black dashed line) when plotted in log-log scale. Eqs.(3),(6) yield the theoretical prediction (\(v=0\)) \[\sigma_{\rm th}=(k\Delta\lambda)^{2}q(1-q)\mu\frac{w}{w+w_{r}}\,. \tag{7}\] Figure 2D shows values of \(\sigma\) measured in SST experiments with \(\Delta\lambda=280nm\) using (4). Their average \(\sigma_{\rm exp}=4.6(4)\cdot 10^{3}k_{B}T/s\) agrees with the theoretical prediction (7), \(\sigma_{\rm th}\sim 5.3\cdot 10^{3}k_{B}T/s\). Figure 2E compares \(\sigma_{\rm exp}\) with \(\sigma_{\rm th}\) (7) (black dashed line) for varying \(\Delta\lambda\). Experiment and theory agree over three decades of \(\sigma\). Until now, we have considered the case where the total force acting on the bead equals the measured force, \(F_{t}=F_{t}^{M}\) and \(F_{t}^{I}=0\). Quite often, however, a measurement probe (AFM tip, microbead, etc.) is in contact with a system in a NESS, such as a biological cell with metabolic activity, Fig. 1A. In this case, \(F_{t}^{I}\neq 0\) is experimentally inaccessible, and \(F_{t}=F_{t}^{M}+F_{t}^{I}\) cannot be measured, making the VSR (1) inapplicable. In other cases, only the position \(x_{t}\) is monitored, e.g., in particle tracking experiments [37, 38, 39, 40], or in detecting cellular fluctuations [41, 42, 43]. To apply the VSR in these situations, it is necessary to model the NESS by making assumptions about the nature of the interaction \(F_{t}^{I}\). Specifically, for a linear-response measuring device (\(F_{t}^{M}=-kx_{t}\)), a reduced VSR can be derived and expressed in terms of variances related to the position \(x_{t}\) only. In these conditions, the displacement variance, \(\mathcal{V}_{\Delta x}\), along with the variance of \(\Sigma_{x}(t)=\int_{0}^{t}ds\,x_{s}\), \(\mathcal{V}_{\Sigma_{x}}(t)\), satisfy (Sec.S6, Supp. Info.), \[\mathcal{V}_{\Delta x}(t)+\mu^{2}k^{2}\,\mathcal{V}_{\Sigma_{x}}(t)=2Dt+\tilde{ \mathcal{S}}(t)\,. \tag{8}\] Figure 1: **Variance sum rule (VSR)**. (**A**) Experimental setup for a NESS measured with optical tweezers. (**B**) Illustration of the VSR showing the different terms in (1). (**C**) Experimental test of the VSR for an optically trapped bead dragged through water at room temperature (bead radius \(R=1.5\mu m\), mobility \(\mu=4\cdot 10^{4}\)nm/pN\(\cdot\)s, speed \(v=10\mu m/s\), \(\gamma v^{2}=610k_{B}T/s\)). 
The lower inset plots \(\sigma-\gamma v^{2}=\frac{1}{4\mu}\partial_{t}^{2}\mathcal{V}_{\Delta x}|_{t =0}+\frac{\sigma}{2}\mathcal{V}_{F}\) from (4) for the experimental realizations; the horizontal red line shows the average over all experiments (\(-5(7)k_{B}T/s\)) with one standard deviation (red band). The black dashed line is the theoretical prediction \(\sigma-\gamma v^{2}=0\). The upper inset shows the experimental test of the inequality (5). Dashed vertical lines show the bead’s relaxation time \(\tau_{r}\). Equation (8) is a general result which, however, does not permit to derive a formula for \(\sigma\) like (3). Notice that \(\tilde{\mathcal{S}}\) differs from \(\mathcal{S}\) in (1) and does not vanish in equilibrium. \(\tilde{\mathcal{S}}\) can be expressed in terms of the generic interacting force \(F_{t}^{I}\), see Eq.(S30) in Supp. Info. In order to derive \(\sigma\) using (8), we need a solvable model for the experiment and a theoretical expression for \(\tilde{\mathcal{S}}\) and \(\sigma_{\rm th}\). By fitting the reduced-VSR (8) to the experimental data, we obtain the model parameters that we use to estimate \(\sigma\) from \(\sigma_{\rm th}\). For instance, let us consider an active Brownian particle (ABP) in an optical trap subject to a random time-correlated active force \(F_{t}^{I}\equiv f_{t}^{a}\) of amplitude \(\epsilon\), \(\overline{f_{t}^{a}}=0\), \(\overline{f_{t}^{a}f_{s}^{a}}=\epsilon^{2}e^{-|t-s|/\tau_{a}}\), with \(\tau_{a}\) the active correlation time (Fig. 3A, inset). Dynamics are described by the stochastic equation, \[\dot{x}_{t}=-k\mu x_{t}+\sqrt{2D}\eta_{t}+\mu f_{t}^{a} \tag{9}\] Figure 3: **Reduced-VSR.** (**A**) Test of (8) for the SST experimental data, equivalent to the active Brownian particle (ABP), in a harmonic trap (Eq.(9) and inset). Symbols are experimental values for \(\tilde{\mathcal{V}}_{T}(t)=\mathcal{V}_{\Delta x}(t)+\mu^{2}k^{2}\,\mathcal{V }_{\Sigma_{x}}(t)\), fitted to (8) for different \(\Delta\lambda\) (lines). Blue and red circles are the two contributions to \(\tilde{\mathcal{V}}_{T}(t)\) for \(\Delta\lambda=18nm\). **(B, C)** Fits of the reduced-VSR to \(\tilde{\mathcal{V}}_{T}(t)\) for the two-layer active model (Sec.S6, Supp. Info.). Panel B: healthy RBCs in OT-stretching experiments at three forces (Fig. 4A); Panel C: healthy and passive RBCs in OT-sensing experiments (Fig. 4B). Figure 2: **Stochastic switching trap (SST).** (**A**) Schematics of the experiment. (**B**) Traces of position and force for three \(\Delta\lambda\) values (see legend in C). (**C**) VSR (1) and total variance \(\mathcal{V}_{T}\): symbols are experimental data, and lines represent the theory with known parameters without fitting. The inset shows the different terms in the VSR. (**D**) Measurements of \(\sigma\) for \(w_{+}=w_{-}=10s^{-1}\) and \(\Delta\lambda=280nm\); we show different experimental realizations (squares), their average \(\sigma_{\rm exp}\) and the theoretical value \(\sigma_{\rm th}\) (7). (E) \(\sigma\) (red symbols) averaged over experimental realizations (orange circles) for \(\Delta\lambda=18nm,70nm,280nm\); black line is the analytical prediction (7). with \(k\) the trap stiffness, \(\mu\) the particle mobility, and \(D=k_{B}T\mu\) the diffusion constant. To test the reduced-VSR approach (8) for deriving \(\sigma\), we exploit the mapping of the ABP (9) to the SST model discussed previously (Fig. 2A). 
The mapping follows by identifying parameters \(\epsilon=k\Delta\lambda\sqrt{q(1-q)}\), \(\tau_{a}=1/w\), \(w_{r}=k\mu\) from which (7) follows (for \(q=1/2\), see also [44]). We have used (8) to analyze the data already used in the previous approach for the SST experiments (Fig. 2) with \(\tilde{\mathcal{S}}(t)=2\epsilon^{2}\mu^{2}\tau_{a}[t-\tau_{a}(1-e^{-t/\tau_{ a}})]\) (c.f. Eq.(S29)) where \(\epsilon\) and \(\tau_{a}\) are fitting parameters. Results are shown in Fig. 3A and residuals in Fig. S2A. Their values and \(\sigma\) agree with the expected ones (Table I and Fig. S3 in Sec.S9 A, Supp. Info.). Therefore, the reduced-VSR (8) permits us to infer NESS parameters and \(\sigma\) from \(x_{t}\) measurements only. Finally, we apply the reduced-VSR to the challenging case of human red blood cells (RBCs) [45]. RBCs metabolize glucose into ATP via the glycolytic pathway, producing the cell membrane's active flickering with a consequent entropy creation [46; 47; 16; 48]. The RBC membrane is dynamically attached to the spectrin cortex via multiprotein complexes, which actively bind and unbind in the phosphorylation step of the glycolytic pathway [49]. We have carried out experimental RBC measurements using three techniques (Fig. 4). Two of them use optical tweezers (OT) in different setups: _i) mechanical stretching_ of RBCs using beads non-specifically attached to the membrane (OT-stretching, Fig. 4A); _ii) mechanical sensing_ of a biotinylated RBC membrane using streptavidin functionalized beads [16] (OT-sensing, Fig. 4B). The third technique measures cell contour fluctuations by membrane flickering segmentation tracking of free-standing RBCs Figure 4: **Entropy production of RBCs.** (**A**) OT-stretching experiments. Video image of stretched RBC and schematics of contact area estimation (left); three selected bead position traces (right). (**B**) OT-sensing experiments. Experimental setup from Ref.[16] (left) and tracking bead position traces for a healthy (red) and passive (blue) RBC. (**C**) Ultrafast OM measurements: healthy RBC (upper images) and position traces (right) for three selected pixels (50nm\(\times\)50nm) along the cell contour with high (red), medium (yellow), and green (low) variance \(\mathcal{V}_{x}\); passive RBC (lower images) and cell contour traces for three selected pixels (blue, right). The right images also show a color variance map along the cell contour. The color bar denotes variance levels (red, highest; blue, lowest). (**D**) \(\sigma\) measurements for OT-stretching (A) at different forces for healthy RBCs (circles with error bars are averages with their error for each pulling force). (**E**) \(\sigma\) measurements for OT-sensing for healthy (red symbols) and passive (blue symbols) RBCs. (**F**) Colored \(\sigma\)-map for OM measurements along the equatorial cell contour, as in panel C, for a healthy RBC (circles) and a passive RBC (diamonds). The radial distance represents \(\sigma\) in a.u. The orange curve is the \(\sigma\)-smoothed profile. (**G**) Scatter plot of \(\sigma\) versus \(\mathcal{V}_{x}\) for the RBCs of panel F, showing they are partially anticorrelated. Orange circles are \(\sigma\) values averaged over windows of 50 nm\({}^{2}\) in \(\mathcal{V}_{x}\). (**H**) Spatial correlation functions for \(\sigma\) and position \(x\) are measured along the cell contour. (**I**) Values of \(\sigma_{\rm RBC}\) compared to calorimetry estimates. using ultrafast optical microscopy (OM) [48; 50] (Fig. 4C). 
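The fitting step described above can be sketched as follows (an illustration, not the authors' analysis pipeline): the lag times and the measured total variance are assumed to be available as arrays, \(\mu\) and \(D\) are fixed from a passive calibration, and only \(\epsilon\) and \(\tau_{a}\) of the single-exponential \(\tilde{\mathcal{S}}(t)\) are fitted; the starting values in `p0` are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def S_tilde(t, eps, tau_a, mu):
    """Single-layer active excess variance quoted above (c.f. Eq. (S29))."""
    return 2.0 * eps**2 * mu**2 * tau_a * (t - tau_a * (1.0 - np.exp(-t / tau_a)))

def fit_reduced_vsr(t, V_total, mu, D, p0=(1e-3, 0.1)):
    """Fit eps and tau_a so that 2*D*t + S_tilde(t) matches the measured
    V_dx(t) + mu^2 k^2 V_Sigma_x(t); mu and D are fixed from a passive calibration."""
    model = lambda tt, eps, tau_a: 2.0 * D * tt + S_tilde(tt, eps, tau_a, mu)
    (eps, tau_a), _ = curve_fit(model, t, V_total, p0=p0, maxfev=10000)
    return eps, tau_a
```

The fitted \((\epsilon,\tau_{a})\) can then be combined with the ABP-to-SST mapping and Eq. (7) to obtain an estimate of \(\sigma\).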
As a first observation, a single-layer active model (9) with its \(\bar{\mathcal{S}}(t)\) in (8) does not describe the experimental data. Instead, we consider a two-layer model with one hidden position variable for the active membrane-cortex interaction that is linearly coupled to the membrane outer layer \(x\) (probe) (Sec.S7, Supp. Info.). The two-layer active model leads to a reduced-VSR of the form (8) that fits the experimental data; the fitting procedure is described in Sec.S8, Supp. Info. Some fits of the reduced-VSR are shown in Fig. 3B,C, and residuals of the fits are shown in Fig. S2B-F. Figures 4D,E show \(\sigma\) values obtained from OT-stretching data in the range [0-30]pN and OT-sensing data for healthy and ATP-depleted (passivated) RBCs. For OT-stretching, \(\sigma\) is force independent, with \(\sigma=2.4(1)\cdot 10^{4}k_{B}T/s\) averaged over RBCs and forces. For OT-sensing, \(\sigma=4.2(5)\cdot 10^{3}k_{B}T/s\) for healthy RBC, and larger than for passive RBC. The measured \(\sigma\) is extensive with the bead-RBC contact area. Estimations from video images (Fig. 4A and Methods) yield contact areas of \(a=0.8(2)\mu m^{2}\) for both OT experiments giving the heat flux density \(j_{\sigma}=\sigma/a=3(1)\cdot 10^{4}k_{B}T/(s\cdot\mu m^{2})\) for OT-stretching and \(j_{\sigma}=5(1)\cdot 10^{3}k_{B}T/(s\cdot\mu m^{2})\) for OT-sensing. For the OM experiments, we show in Fig. 4C the color map of the position variance \(\mathcal{V}_{x}=\overline{x^{2}}-\overline{x^{2}}\) (healthy, top; passive, bottom) while in Fig. 4F we show the color map of \(\sigma\) (circles, healthy; diamonds, passive), measured over pixels of size 50nm along the RBC contour. Both \(\sigma\) and \(\mathcal{V}_{x}\) reveal a RBC heterogeneous activity with average values \(\sigma=2.6(1)\cdot 10^{4}k_{B}T/s\) and \(\mathcal{V}_{x}=400(10)nm^{2}\). Molecular maps of heterogeneous RBC deformability have been previously reported [51]. Remarkably, in the active regime, \(\sigma\) and \(\mathcal{V}_{x}\) are anti-correlated (Pearson coefficient \(\sim-0.4\)) with high variance regions showing lower \(\sigma\) (Fig. 4G). Results for other RBCs are shown in Fig. S5. This counterintuitive result is related to the active timescale \(\tau_{a}\), which determines the active contribution to the total variance, \(\mathcal{V}_{x}=\mathcal{V}_{x}^{\text{passive}}+\alpha(\tau_{a})\sigma(\tau_{ a})\) with \(\alpha(\tau_{a})\) positive and monotonically increasing with \(\tau_{a}\). It can be shown that in the high-activity limit \(\tau_{a}\to 0\), \(\sigma(\tau_{a})\) saturates to a finite value whereas \(\alpha(\tau_{a})\sim\tau_{a}\), decreasing \(\mathcal{V}_{x}\) (Fig. S6). The \(\sigma\)-map of a single RBC determines the finite correlation length \(\xi\) for the spatially varying \(\sigma\)-field, a main prediction of active field theories [52; 53; 54; 55], and stochastic hydrodynamics [56]. For healthy RBCs, \(\xi\) has been estimated from the spatial correlation function \(C_{\sigma\sigma}(d)\), and \(C_{xx}(d)\) of the traces at a curvilinear distance \(d\) along the RBC contour, Fig. 4H. Functions can be fitted to an exponential \(\sim\exp(-d/\xi)\) with \(\xi_{\sigma\sigma}\sim 0.35(5)\mu\)m and \(\xi_{xx}\sim 0.82(2)\mu\)m, giving the median \(\xi\sim 0.6(2)\mu\)m. This value is larger than the lateral resolution of the microscope (200nm). 
The average heat flux density can be estimated as \(j_{\sigma}=\sigma/\xi^{2}=10(5)\cdot 10^{4}k_{B}T/(s\cdot\mu m^{2})\) with \(\xi^{2}\) the typical area of an entropy-producing region. In summary, for a RBC of typical surface area \(A\sim 130\mu m^{2}\), we get: \(\sigma_{\rm RBC}=j_{\sigma}\cdot A=13(6)\cdot 10^{6}k_{B}T/s\) (OM); \(\sigma_{\rm RBC}=4(1)\cdot 10^{6}k_{B}T/s\) (OT-stretching); and \(\sigma_{\rm RBC}=7(2)\cdot 10^{5}k_{B}T/s\) (OT-sensing). These values are compatible with calorimetric bulk measurements of packed RBCs, \(\sigma_{\rm RBC}^{\rm bulk}=2(1)\cdot 10^{6}k_{B}T/s\)[57; 58], and are several orders of magnitude larger than indirect measures based on the breakdown of the fluctuation-dissipation theorem and effective temperatures [16; 59]. The significantly low \(\sigma\) values obtained for passive RBCs (blue data in Figs.4E,F,G,I) validate our approach. Our estimates \(\sigma_{\rm RBC}\sim 10^{6}k_{B}T/s\) also contest the typically low estimates obtained through information-theoretic measures based on the breakdown of detailed balance [17; 21; 24]. This discrepancy might be due to the energy balance intrinsic to the VSR Eqs.(1),(8), a missing feature in the thermodynamic uncertainty relation and coarse-graining models [60; 61; 62; 63; 64]. The agreement between mechanical and bulk calorimetric estimates of the RBC metabolic energy turnover suggests that the heat produced in the glycolytic pathway is tightly coupled with membrane flickering due to active kickers. Tight mechanochemical coupling is critical to an efficient free energy chemical transduction. It has been observed in processive enzymes (e.g., polymerases, transport motors, etc.) [65] and in allosteric coupling in ligand binding [66]. Interestingly, tightly coupled processes are related to emergent cycles in cellular metabolism, and chemical reaction networks, particularly for the relevant glycolytic cycle of RBCs [67]. Besides molecular motors and living cells, the VSR should apply to time-resolved photoacoustic calorimetry [68] and enzyme catalysis, where the effective diffusion constant of the enzyme increases linearly with the heat released [69], a consequence of (1). Finally, developing novel experimental probes to directly measure fluctuating active forces would be ground-breaking, providing direct measurements of \(\mathcal{S}\) and \(\sigma\). ###### Acknowledgements. MG and FR are supported by the Spanish Research Council Grant PID2019-111148GB-100. DH and FM are supported by the Spanish Research Council Grant PID2019-108391RB-100. TB is supported by the European Research Council (consolidator grant 771201). MB is supported by the research grant BAIE_BIRD2021_01 of the University of Padova. FR is supported by ICREA Academia 2018. Figures 1a, 2a, S1 and inset of 3a have been created with BioRender.com.
2305.03497
Training Natural Language Processing Models on Encrypted Text for Enhanced Privacy
With the increasing use of cloud-based services for training and deploying machine learning models, data privacy has become a major concern. This is particularly important for natural language processing (NLP) models, which often process sensitive information such as personal communications and confidential documents. In this study, we propose a method for training NLP models on encrypted text data to mitigate data privacy concerns while maintaining similar performance to models trained on non-encrypted data. We demonstrate our method using two different architectures, namely Doc2Vec+XGBoost and Doc2Vec+LSTM, and evaluate the models on the 20 Newsgroups dataset. Our results indicate that both encrypted and non-encrypted models achieve comparable performance, suggesting that our encryption method is effective in preserving data privacy without sacrificing model accuracy. In order to replicate our experiments, we have provided a Colab notebook at the following address: https://t.ly/lR-TP
Davut Emre Tasar, Ceren Ocal Tasar
2023-05-03T00:37:06Z
http://arxiv.org/abs/2305.03497v1
# Training Natural Language Processing Models on Encrypted Text for Enhanced Privacy ###### Abstract _With the increasing use of cloud-based services for training and deploying machine learning models, data privacy has become a major concern. This is particularly important for natural language processing (NLP) models, which often process sensitive information such as personal communications and confidential documents. In this study, we propose a method for training NLP models on encrypted text data to mitigate data privacy concerns while maintaining similar performance to models trained on non-encrypted data. We demonstrate our method using two different architectures, namely Doc2Vec+XGBoost and Doc2Vec+LSTM, and evaluate the models on the 20 Newsgroups dataset. Our results indicate that both encrypted and non-encrypted models achieve comparable performance, suggesting that our encryption method is effective in preserving data privacy without sacrificing model accuracy. In order to replicate our experiments, we have provided a Colab notebook at the following address: [https://t.ly/IR-TP_](https://t.ly/IR-TP_) **Keywords:**_Natural language processing, encrypted text, data privacy, cloud computing, Doc2Vec, XGBoost, LSTM ## 1 Introduction Natural language processing (NLP) has gained significant attention in recent years due to its potential to revolutionize numerous applications, such as machine translation, sentiment analysis, information retrieval, and question-answering systems, among others [1, 2]. In parallel with the rapid advancements in NLP, the volume of digital text data has grown exponentially, leading to an increasing need for more scalable and cost-effective solutions for training and deploying NLP models. Consequently, cloud-based services have emerged as a popular choice for handling these large-scale machine learning tasks [3, 4]. Despite the many advantages of cloud-based services, they also raise several security and privacy concerns, particularly regarding the potential exposure of sensitive information during data transmission, storage, and processing [5, 6]. As NLP models often handle sensitive text data such as personal communications, confidential documents, and proprietary information, ensuring data privacy in the cloud becomes a critical issue [7, 8]. A promising approach to mitigate data privacy concerns in NLP is to train models on encrypted data, ensuring that the sensitive information remains secure even if it is accessed by unauthorized parties [9, 10]. Homomorphic encryption has been proposed as a potential solution for this purpose, as it allows computations to be performed on encrypted data without requiring decryption [11, 12]. However, homomorphic encryption techniques are still computationally expensive and may not be suitable for large-scale NLP tasks [13, 14]. In this work, we explore an alternative method for training NLP models on encrypted text data using word-level encryption. Our approach involves preprocessing and cleaning the text data, encrypting the text using a symmetric encryption method, and then training Doc2Vec models on both the encrypted and non-encrypted text data. We subsequently train classification models, specifically XGBoost and LSTM, on the Doc2Vec embeddings to evaluate the performance of our method. We demonstrate the effectiveness of our approach using the 20 Newsgroups dataset, a widely used dataset for text classification tasks. 
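The pipeline just outlined can be sketched as follows (an illustration, not the released Colab notebook): since this excerpt does not specify the exact symmetric scheme, a keyed deterministic HMAC mapping per word is used as a stand-in for the encryption step, and all hyperparameter values are assumptions.

```python
import hmac, hashlib
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.datasets import fetch_20newsgroups
from xgboost import XGBClassifier

KEY = b"shared-secret-key"   # assumed symmetric key

def encrypt_word(w):
    # Deterministic keyed mapping: the same plaintext word always maps to the
    # same opaque token, so co-occurrence statistics survive encryption.
    return hmac.new(KEY, w.lower().encode(), hashlib.sha256).hexdigest()[:12]

def encrypt_doc(text):
    return [encrypt_word(w) for w in text.split()]

news = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
corpus = [TaggedDocument(encrypt_doc(t), [i]) for i, t in enumerate(news.data)]

d2v = Doc2Vec(corpus, vector_size=100, window=5, min_count=2, epochs=20, workers=4)
X = np.vstack([d2v.infer_vector(doc.words) for doc in corpus])

clf = XGBClassifier(n_estimators=300, max_depth=6)
clf.fit(X, news.target)
```

The deterministic per-word mapping is what lets Doc2Vec learn informative embeddings from ciphertext, since token co-occurrence statistics are preserved; a scheme that randomized each occurrence would destroy this signal.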
The main contributions of this study are as follows: We propose a novel method for training NLP models on encrypted text data using word-level encryption, addressing
2310.08774
PhyloGFN: Phylogenetic inference with generative flow networks
Phylogenetics is a branch of computational biology that studies the evolutionary relationships among biological entities. Its long history and numerous applications notwithstanding, inference of phylogenetic trees from sequence data remains challenging: the high complexity of tree space poses a significant obstacle for the current combinatorial and probabilistic techniques. In this paper, we adopt the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference. Because GFlowNets are well-suited for sampling complex combinatorial structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies and evolutionary distances. We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets. PhyloGFN is competitive with prior works in marginal likelihood estimation and achieves a closer fit to the target distribution than state-of-the-art variational inference methods. Our code is available at https://github.com/zmy1116/phylogfn.
Mingyang Zhou, Zichao Yan, Elliot Layne, Nikolay Malkin, Dinghuai Zhang, Moksh Jain, Mathieu Blanchette, Yoshua Bengio
2023-10-12T23:46:08Z
http://arxiv.org/abs/2310.08774v2
# PhyloGFN: Phylogenetic inference with generative flow networks ###### Abstract Phylogenetics is a branch of computational biology that studies the evolutionary relationships among biological entities. Its long history and numerous applications notwithstanding, inference of phylogenetic trees from sequence data remains challenging: the high complexity of tree space poses a significant obstacle for the current combinatorial and probabilistic techniques. In this paper, we adopt the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference. Because GFlowNets are well-suited for sampling complex combinatorial structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies and evolutionary distances. We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets. PhyloGFN is competitive with prior works in marginal likelihood estimation and achieves a closer fit to the target distribution than state-of-the-art variational inference methods. ## 1 Introduction Phylogenetic inference has long been a central problem in the field of computational biology. A broad set of methods has been developed to estimate the evolutionary history (phylogenetic tree) relating a set of biological entities. Accurate phylogenetic inference is critical for a number of important biological analyses, such as understanding the development of antibiotic resistance (Aminov and Mackie, 2007; Ranjbar et al., 2020; Layne et al., 2020), assessing the risk of invasive species (Hamelin et al., 2022), and characterizing tumor progression (Lahmemann et al., 2020). Accurate phylogenetic trees can also be used to improve downstream computational analyses, such as multiple genome alignment (Blanchette et al., 2004), ancestral sequence reconstruction (Ma et al., 2006), protein structure and function annotation (Celniker et al., 2013). Despite its strong medical relevance and wide applications in life science, phylogenetic inference has remained a standing challenge, in part due to the high complexity of tree space -- for \(n\) species, \((2n-5)!!\) unique unrooted bifurcating tree topologies exist. This poses a common obstacle to all branches of phylogenetic inference; both maximum-likelihood and maximum-parsimony tree reconstruction are NP-hard problems (Day, 1987; Chor and Tuller, 2005). Under the Bayesian formulation of phylogenetics, the inference problem is further compounded by the inclusion of continuous variables that capture the level of sequence divergence along each branch of the tree. One line of prior work considers Markov chain Monte Carlo (MCMC)-based approaches, such as MrBayes (Ronquist et al., 2012). These approaches have been successfully applied to Bayesian phylogenetic inference. However, a known limitation of MCMC is scalability to high-dimensional distributions with multiple separated modes (Tjelmeland and Hegstad, 2001), which arise in larger phylogenetic datasets. Recently, variational inference (VI)-based approaches have emerged, which offer computationally efficient and scalable alternatives to MCMC. Among these methods, some model only a limited portion of the space of tree topologies, while others are weaker in marginal likelihood estimation due to simplifying assumptions. 
In parsimony analysis, state-of-the-art methods such as PAUP* (Swofford, 1998) have extensively relied on heuristic search algorithms that are efficient but lack theoretical foundations and guarantees. Coming from the intersection of variational inference and reinforcement learning is the class of models known as generative flow networks (GFlowNets) (Bengio et al., 2021). The flexibility afforded by GFlowNets to learn sequential samplers for distributions over compositional objects makes them a promising candidate for performing inference over the posterior space of phylogenetic tree topologies and evolutionary distances. In this work, we propose **PhyloGFN**, the first adaptation of GFlowNets to the task of Bayesian and parsimony-based phylogenetic inference. Our contributions are as follows: 1. We design an acyclic Markov decision process (MDP) with fully customizable reward functions, by which our PhyloGFN can be trained to construct phylogenetic trees in a bottom-up fashion. 2. PhyloGFN leverages a novel tree representation inspired by Fitch and Felsenstein's algorithms to represent rooted trees without introducing additional learnable parameters. PhyloGFN is also coupled with simple yet effective training techniques such as using mixed on-policy and dithered-policy rollouts, replay buffers and cascading temperature-annealing. 3. PhyloGFN explores and samples from the entire phylogenetic tree space, achieving a balance between exploration in this vast space and high-fidelity modeling of the modes. While PhyloGFN performs on par with the state-of-the-art MCMC- and VI-based methods in the summary metric of marginal log-likelihood, it substantially outperforms these approaches in terms of its ability to estimate the posterior probability of suboptimal trees. ## 2 Related work Markov chain Monte Carlo (MCMC)-based algorithms are commonly employed for Bayesian phylogenetics, with notable examples including MrBayes and RevBayes (Ronquist et al., 2012; Hohna et al., 2016), which are considered state-of-the-art in the field. Amortized variational inference (VI) is an alternative approach that parametrically estimates the posterior distribution. VBPI-GNN (Zhang, 2023) employs subsplit Bayesian networks (SBN) (Zhang and Matsen IV, 2018) to model tree topology distributions and uses graph neural networks to learn tree topological embeddings (Zhang, 2023). While VBPI-GNN has obtained marginal log likelihood competitive with MrBayes in real datasets, it requires a pre-generated set of high-quality tree topologies to constrain its action space for tree construction, which ultimately limits its ability to model the entire tree space. There exist other VI approaches that do not limit the space of trees. VaiPhy (Koptagel et al., 2022) approximates the posterior distribution in the augmented space of tree topologies, edge lengths, and ancestral sequences. Combined with combinatorial sequential Monte Carlo (CSMC; Moretti et al., 2021), the proposed method enables faster estimation of marginal likelihood. GeoPhy (Mimori and Hamada, 2023) models the tree topology distribution in continuous space by mapping continuous-valued coordinates to tree topologies, using the same technique as VBPI-GNN to model tree topological embeddings. While both methods model the entire tree topology space, their performance on marginal likelihood estimation underperforms the state of the art. 
For the optimization problem underpinning maximum parsimony inference, PAUP* is one of the most commonly used programs (Swofford, 1998); it features several fast, greedy, and heuristic algorithms based on local branch-swapping operations such as tree bisection and reconnection. GFlowNets are a family of methods for sampling discrete objects from multimodal distributions, such as molecules (Bengio et al., 2021) and biological sequences (Jain et al., 2022), and are used to solve discrete optimization tasks such as NP-hard scheduling and optimization problems (Zhang et al., 2023; 20). With their theoretical foundations laid out in Bengio et al. (2023); Lahlou et al. (2023), and connections to variational inference established in Malkin et al. (2023); Zimmermann et al. (2023), GFlowNets have been successfully applied to tackle complex Bayesian inference problems, such as inferring latent causal structures in gene regulatory networks (Deleu et al., 2022; 2023), and, most similar to the problems we consider, parse trees in hierarchical grammars (Hu et al., 2023). ## 3 Background ### Phylogenetic inference Here we introduce the problems of Bayesian and parsimony-based phylogenetic inference. A weighted phylogenetic tree is denoted by \((z,b)\), where \(z\) represents the tree topology with its leaves labeled by observed sequences, and \(b\) represents the branch lengths. The tree topology can be either a rooted binary tree or a bifurcating unrooted tree. For a tree topology \(z\), let \(L(z)\) denote the labeled sequence set and \(E(z)\) the set of edges. For an edge \(e\in E(z)\), let \(b(e)\) denote its length. Let \(\mathbf{Y}=\{\mathbf{y}_{1},\mathbf{y}_{2}\ldots\mathbf{y}_{n}\}\in\Sigma^{n\times m}\) be a set of \(n\) observed sequences, each having \(m\) characters from alphabet \(\Sigma\), _e.g._, \(\{A,C,G,T\}\) for DNA sequences. We denote the \(i^{\text{th}}\) site of all sequences by \(\mathbf{Y}^{i}=\{\mathbf{y}_{1}[i],\mathbf{y}_{2}[i]\ldots\mathbf{y}_{n}[i]\}\). In this work, we make two assumptions that are common in the phylogenetic inference literature: (i) sites evolve independently; (ii) evolution follows a time-reversible substitution model. The latter implies that an unrooted tree has the same parsimony score or likelihood as its rooted versions, and thus the algorithms we introduce below (Fitch and Felsenstein) apply to unrooted trees as well. #### 3.1.1 Bayesian inference In Bayesian phylogenetic inference, we are interested in sampling from the posterior distribution over weighted phylogenetic trees \((z,b)\), formulated as: \[P(z,b\mid\mathbf{Y})=\frac{P(z,b)P(\mathbf{Y}\mid z,b)}{P(\mathbf{Y})}\] where \(P(\mathbf{Y}\mid z,b)\) is the likelihood, \(P(\mathbf{Y})\) is the intractable marginal, and \(P(z,b)\) is the prior density over tree topology and branch lengths. Under the site independence assumption, the likelihood can be factorized as: \(P(\mathbf{Y}\mid z,b)=\prod_{i}P(\mathbf{Y}^{i}\mid z,b)\), and each factor is obtained by marginalizing over all internal nodes \(a_{j}\) and their possible character assignment: \[P(\mathbf{y}_{1}[i]\ldots\mathbf{y}_{n}[i]\mid z,b)=\sum_{a_{n+1}^{i},\ldots,a_{2n-1}^ {i}}P(a_{2n-1})\prod_{j=n+1}^{2n-2}P(a_{j}^{i}|a_{\alpha(j)}^{i},b(e_{j}))\prod _{k=1}^{n}P(\mathbf{y}_{k}[i]|a_{\alpha(k)}^{i},b_{z}(e_{k}))\] where \(a_{n+1}^{i},\ldots a_{2n-2}^{i}\) represent the internal node characters assigned to site \(i\) and \(\alpha(i)\) represent the parent of node \(i\). 
\(P(a_{2n-1})\) is a distribution at the root node, which is usually assumed to be uniform over the vocabulary, while the conditional probability \(P(a_{j}^{i}|a_{\alpha(j)}^{i},b(e_{j}))\) is defined by the substitution model (where \(e_{j}\) is the edge linking \(a_{j}\) to \(\alpha(a_{j})\)). Felsenstein's algorithmThe likelihood of a given weighted phylogenetic tree can be calculated efficiently using Felsenstein's pruning algorithm (Felsenstein, 1973) in a bottom-up fashion through dynamic programming. Defining \(L_{u}^{i}\) as the leaf sequence characters at site \(i\) below the internal node \(u\), and given its two child nodes \(v\) and \(w\), the conditional probability \(P(L_{u}^{i}|a_{u}^{i})\) can be obtained from \(P(L_{v}^{i}|a_{v}^{i})\) and \(P(L_{w}^{i}|a_{w}^{i})\): \[P(L_{u}^{i}\mid a_{u}^{i})=\sum_{a_{v}^{i},a_{w}^{i}\in\Sigma}P(a_{v}^{i}\mid a _{u}^{i},b(e_{v}))P(L_{v}^{i}\mid a_{v}^{i})P(a_{w}^{i}\mid a_{u}^{i},b(e_{w }))P(L_{w}^{i}\mid a_{w}^{i}). \tag{1}\] This equation can be used to recursively compute \(P(L_{u}^{i}|a_{u}^{i})\) at all nodes of the tree and sites \(i\). The algorithm performs a post-order traversal of the tree, thus aggregating the conditional probabilities. Finally, the likelihood \(P(\mathbf{Y}^{i}|z,b)\) is calculated using the root-level conditional probability: \(P(\mathbf{Y}^{i}|z,b)=\sum_{a_{2n-1}^{i}\in\Sigma}P(a_{2n-1}^{i})P(\mathbf{Y}^{i}|a_{ 2n-1}^{i})\). The data structure used by Felsenstein's algorithm in the evaluation of the recurrence (1) is a representation of node \(u\) at site \(i\) by a real vector \(\mathbf{f}_{u}^{i}\in[0,1]^{|\Sigma|}\). For a leaf \(u\), \(\mathbf{f}_{u}^{i}\) is the one-hot vector encoding the symbol at the \(i^{\text{th}}\) site, and for non-leaves, \(\mathbf{f}_{u}^{i}\) encodes \(\mathbf{f}_{u}^{i}[c]=P(L_{u}^{i}|a_{u}^{i}=c)\) and is recursively computed by (1). This motivates our modeling choices in PhyloGFN (SS4.1). #### 3.1.2 Parsimony analysis The problem of finding the optimal tree topology under the maximum parsimony principle is commonly referred as the Large Parsimony problem, which is NP-hard. For a given tree topology \(z\), the parsimony score is the minimum number of character changes between sequences across branches obtained by optimally assigning sequences to internal nodes. Let \(M(z|\mathbf{Y})\) be the parsimony score of tree topology \(z\) with leaf labels \(\mathbf{Y}\). Due to site independence, \(M(z|\mathbf{Y})=\sum_{i}M(z|\mathbf{Y}^{i})\). The trees with the lowest parsimony score, or most parsimonious trees, are solutions to the Large Parsimony problem. Note that the Large Parsimony problem is a limiting case of the maximum likelihood tree inference problem, where branch lengths are constrained to be equal and infinitesimally short. Fitch algorithmGiven a rooted tree topology \(z\), the Fitch algorithm assigns optimal sequences to internal nodes and computes the parsimony score in linear time. At each node \(u\), the algorithm tracks the set of possible characters labeling for node \(u\) that can yield a most parsimonious solution for the subtree rooted at \(u\). This character set can be represented by a binary vector \(\mathbf{f}^{i}_{u}\in\{0,1\}^{|\mathcal{X}|}\) for site \(i\). As in Felsenstein's algorithm, this vector is a one-hot encoding of the sequences at the leaves and is computed recursively for non-leaves. 
Specifically, given a rooted tree with root \(u\) and two child trees with roots \(v\) and \(w\), the Fitch character set \(\mathbf{f}^{i}_{u}\) is calculated as: \[\mathbf{f}^{i}_{u}=\begin{cases}\mathbf{f}^{i}_{v}\wedge\mathbf{f}^{i}_{w}&\text{if }\mathbf{f}^{i}_{v}\cdot\mathbf{f}^{i}_{w}\neq 0\\ \mathbf{f}^{i}_{v}\lor\mathbf{f}^{i}_{w}&\text{otherwise}\end{cases},\] where \(\wedge\) and \(\lor\) are element-wise conjunctions and disjunctions. The algorithm traverses the tree two times, first in post-order (bottom-up) to calculate the character set at each node, then in pre-order (top-down) to assign optimal sequences. The total number of character changes between these optimal sequences along the tree's edges is counted as the parsimony score. ### GFlowNets Generative flow networks (GFlowNets) are algorithms for learning generative models of complex distributions given by unnormalized density functions over structured spaces. Here, we give a concise summary of the the GFlowNet framework. SettingA GFlowNet treats generation of objects \(x\) lying in a sample space \(\mathcal{X}\) as a sequential decision-making problem on an acyclic deterministic MDP with set of states \(\mathcal{S}\supset\mathcal{X}\) and set of actions \(\mathcal{A}\subseteq\mathcal{S}\times\mathcal{S}\). The MDP has a designated _initial state_\(s_{0}\), which has no incoming actions, and a set of _terminal states_ (those with no outgoing actions) that coincides with \(\mathcal{X}\). Any \(x\in\mathcal{X}\) can be reached from \(s_{0}\) by a sequence of actions \(s_{0}\to s_{1}\rightarrow\cdots\to s_{n}=x\) (with each \((s_{i},s_{i+1})\in\mathcal{A}\)). Such sequences are called _complete trajectories_, and the set of all complete trajectories is denoted \(\mathcal{T}\). A _(forward) policy_\(P_{F}\) is a collection of distributions \(P_{F}(s^{\prime}\mid s)\) over the children of each nonterminal state \(s\in\mathcal{S}\setminus\mathcal{X}\). A policy induces a distribution over \(\mathcal{T}\): \[P_{F}(\tau=(s_{0}\to s_{1}\rightarrow\cdots\to s_{n}))=\prod_{i=0}^{n-1}P_{ F}(s_{i+1}\mid s_{i}).\] A policy gives a way to sample objects in \(\mathcal{X}\), by sampling a complete trajectory \(\tau\sim P_{F}\) and returning its final state, inducing a marginal distribution \(P_{F}^{\top}\) over \(\mathcal{X}\); \(P_{F}^{\top}(x)\) is the sum of \(P_{F}(\tau)\) over all complete trajectories \(\tau\) that end in \(x\) (a possibly intractable sum). The goal of GFlowNet training is to estimate a parametric policy \(P_{F}(\cdot\mid\cdot;\theta)\) such that the induced \(P_{F}^{\top}\) is proportional to a given _reward function_\(R:\mathcal{X}\rightarrow\mathbb{R}_{>0}\), _i.e._, \[P_{F}^{\top}(x)=\frac{1}{Z}R(x)\quad\forall x\in\mathcal{X}, \tag{2}\] where \(Z=\sum_{x\in\mathcal{X}}R(x)\) is the unknown normalization constant (partition function). Trajectory balance objectiveThe direct optimization of \(P_{F}\)'s parameters \(\theta\) is impossible since it involves an intractable sum over all complete trajectories. Instead, we leverage the trajectory balance (TB) training objective (Malkin et al., 2022), which introduces two auxiliary objects: an estimate \(Z_{\theta}\) of the partition function and a _backward policy_. 
In our experiments, we fix the backward policy to uniform, which results in a simplified objective: \[\mathcal{L}_{\text{TB}}(\tau)=\left(\log\frac{Z_{\theta}\prod_{i=0}^{n-1}P_{ F}(s_{i+1}\mid s_{i};\theta)}{R(x)P_{B}(\tau\mid x)}\right)^{2},\quad P_{B}( \tau\mid x):=\prod_{i=1}^{n}\frac{1}{|\text{Pa}(s_{i})|}, \tag{3}\] where \(\text{Pa}(s)\) denotes the set of parents of \(s\). By the results of Malkin et al. (2022), there exists a unique policy \(P_{F}\) and scalar \(Z_{\theta}\) that simultaneously make \(\mathcal{L}_{\text{TB}}(\tau)=0\) for all \(\tau\in\mathcal{T}\), and at this optimum, the policy \(P_{F}\) satisfies (2) and \(Z_{\theta}\) equals the true partition function \(Z\). In practice, the policy \(P_{F}(\cdot\mid s;\theta)\) is parameterized as a neural network that outputs the logits of actions \(s\to s^{\prime}\) given a representation of the state \(s\) as input, while \(Z_{\theta}\) is parameterized in the log domain. \(\mathcal{L}_{\text{TB}}(\tau)\) is minimized by gradient descent on trajectories \(\tau\) chosen from a behaviour policy that can encourage exploration to accelerate mode discovery (see our behaviour policy choices in SS4.3). ## 4 Phylogenetic inference with GFlowNets ### GFlowNets for Bayesian phylogenetic inference This section introduces PhyloGFN-Bayesian, our GFlowNet-based method for Bayesian phylogenetic inference. Given a set of observed sequence \(\mathbf{Y}\), PhyloGFN-Bayesian learns a sampler over the joint space of tree topologies and edge lengths \(X=\{(z,b)|L(z)=\mathbf{Y}\}\) such that the sampling probability of \((z,b)\in\mathcal{X}\) approximates its posterior \(P_{r}^{T}(z,b)=P(z,b|\mathbf{Y})\). We follow the same setup as (Koptagel et al., 2022; Zhang, 2023; Mimori & Hamada, 2023): (i) uniform prior over tree topologies; (ii) decomposed prior \(P(z,b)=P(z)P(b)\); (iii) exponential (\(\lambda=10\)) prior over branch lengths; (iv) Jukes-Cantor substitution model. State representationTo represent a rooted tree in a non-terminal state, we compute features for each site independently by taking advantage of the Felsenstein features (SS3.1.1). Let \((z,b)\) be a weighted tree with root \(u\) which has two children \(v\) and \(w\). Let \(\mathbf{f}_{u}^{i},\mathbf{f}_{v}^{i},\mathbf{f}_{w}^{i}\in[0,1]^{|\Sigma|}\) be the Felsenstein feature on nodes \(u\), \(v\), \(w\) at site \(i\). The feature \(\rho_{u}^{i}\) for site \(i\) is computed as following: \[\rho_{u}^{i}=\mathbf{f}_{u}^{i}\prod_{e\in E(z)}P(b_{z}(e))^{\frac{1}{m}} \tag{4}\] where \(P(b_{z}(e))=\prod_{e\in b_{z}}P(b(e))\) is the edge length prior. The tree-level feature is the concatenation of site-level features \(\rho=[\rho^{1}\dots\rho^{m}]\). A state \(s=(z_{1}\dots z_{l})\), which is a collection of rooted trees, is represented by the set \(\{\rho_{1},\dots,\rho_{l}\}\). Representation powerAlthough the proposed feature representation \(\rho\) does not capture all the information of tree structure and leaf sequences, we show that \(\rho\) indeed contains sufficient information to express the optimal policy. The proposition below shows that given an optimal GFlowNet with uniform \(P_{B}\), two states with identical features set share the same transition probabilities. **Proposition 1**.: _Let \(s_{1}=\{(z_{1},b_{1}),(z_{2},b_{2})\dots(z_{l},b_{l})\}\) and \(s_{2}=\{(z_{1}^{\prime},b_{1}^{\prime}),(z_{2}^{\prime},b_{2})\dots(z_{l}^{ \prime},b_{l})\}\) be two non-terminal states such that \(s_{1}\neq s_{2}\) but sharing the same features \(\rho_{i}=\rho_{i}^{\prime}\). 
Let \(\mathbf{a}\) be any sequence of actions, which, applied to \(s_{1}\) and \(s_{2}\), respectively, results in full weighted trees \(x=(z,b_{z}),x^{\prime}=(z^{\prime},b^{\prime})\), with two partial trajectories \(\tau=(s_{1}\to\dots\to x),\tau^{\prime}=(s_{2}\to\dots\to x^{\prime})\). If \(P_{F}\) is the policy of an optimal GFlowNet with uniform \(P_{B}\), then \(P_{F}(\tau)=P_{F}(\tau^{\prime})\)._ All proofs are in Appendix A. The proposition shows that our proposed features have sufficient representation power for the PhyloGFN-Bayesian policy. Furthermore, the Felsenstein features and edge-length priors are already used in calculating the reward via Felsenstein's algorithm. Therefore, computing these features does not introduce any additional variables, and the computational overhead is minimized. ### GFlowNets for parsimony analysis This section introduces PhyloGFN-Parsimony, our GFlowNet-based method for parsimony analysis. We treat large parsimony analysis, a discrete optimization problem, as a sampling problem from the energy distribution \(\exp\left(\frac{-M(z|\mathbf{Y})}{T}\right)\) defined over tree topologies. Here, \(M(z|\mathbf{Y})\) is the parsimony score of \(z\) and \(T\) is a pre-defined temperature term that controls the smoothness of the distribution. With sufficiently small \(T\), the most parsimonious trees dominate the energy distribution. To state our goals formally, given observed sequences \(\mathbf{Y}\), PhyloGFN-Parsimony learns a sampling policy \(P_{F}\) over the space of tree topologies \(\{z\mid L(z)=\mathbf{Y}\}\) such that \(P_{F}^{\top}(z)\propto e^{-\frac{M(z|\mathbf{Y})}{T}}\). As \(T\to 0\), this target distribution approaches a uniform distribution over the set of tree topologies with minimum parsimony scores. PhyloGFN-Parsimony can be seen as a reduced version of PhyloGFN-Bayesian. The tree shape generation procedure is the same as before, but we no longer generate branch lengths. The reward is defined as \(R(z)=\exp\left(\frac{C-M(z|\mathbf{Y})}{T}\right)\), where \(C\) is an extra hyperparameter introduced for stability to offset the typically large \(M(z|\mathbf{Y})\) values. Note that \(C\) can be absorbed into the partition function and has no influence on the reward distribution. Similar to PhyloGFN-Bayesian, a state (a collection of rooted trees) is represented by the set of tree features, with each tree represented by concatenating its site-level features. With \(z\) the rooted tree topology with root \(u\), we represent the tree at site \(i\) by its root-level Fitch feature \(\mathbf{f}_{u}^{i}\). The proposition below, analogous to Proposition 1, shows the representation power of the proposed features. **Proposition 2**.: _Let \(s_{1}=\{z_{1},z_{2},\dots z_{l}\}\) and \(s_{2}=\{z_{1}^{\prime},z_{2}^{\prime},\dots z_{l}^{\prime}\}\) be two non-terminal states such that \(s_{1}\neq s_{2}\) but sharing the same Fitch features \(\mathbf{f}_{z_{i}}=\mathbf{f}_{z_{i}^{\prime}}\ \forall i\). Let \(\mathbf{a}\) be any sequence of actions, which, applied to \(s_{1}\) and \(s_{2}\), respectively, results in tree topologies \(x,x^{\prime}\in\mathcal{Z}\), with two partial trajectories \(\tau=(s_{1}\to\dots\to x),\tau^{\prime}=(s_{2}\to\dots\to x^{\prime})\). If \(P_{F}\) is the policy of an optimal GFlowNet with uniform \(P_{B}\), then \(P_{F}(\tau)=P_{F}(\tau^{\prime})\)._ This shows that the Fitch features contain sufficient information for PhyloGFN-Parsimony.
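To make the Fitch features concrete, the following is a minimal illustrative sketch (not the PhyloGFN implementation; the tree encoding and helper names are our own) of the bottom-up Fitch pass that produces the root-level character sets \(\mathbf{f}_{u}^{i}\) and the parsimony score entering the reward:

```python
# Minimal sketch (hypothetical encoding, not the authors' code): bottom-up Fitch
# pass over one rooted binary tree. Character sets are boolean vectors over the
# alphabet, following the conjunction/disjunction recursion shown above; each
# union event contributes one character change to the parsimony score.
import numpy as np

ALPHABET = "ACGT"

def leaf_sets(seq):
    """One one-hot boolean vector per site for a leaf sequence."""
    return [np.array([c == a for a in ALPHABET]) for c in seq]

def fitch(node):
    """node is either a leaf sequence (str) or a pair (left_subtree, right_subtree).
    Returns (per-site character sets at the subtree root, parsimony score)."""
    if isinstance(node, str):
        return leaf_sets(node), 0
    (fv, cost_v), (fw, cost_w) = fitch(node[0]), fitch(node[1])
    fu, cost = [], cost_v + cost_w
    for sv, sw in zip(fv, fw):
        inter = sv & sw
        if inter.any():          # non-empty intersection: no change needed
            fu.append(inter)
        else:                    # empty intersection: take the union, one change
            fu.append(sv | sw)
            cost += 1
    return fu, cost

# toy example with three aligned sequences and topology ((ACG, ACT), AGT)
root_sets, score = fitch((("ACG", "ACT"), "AGT"))
print(score)  # parsimony score of this rooted topology (2 for the toy data)
```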
Furthermore, the Fitch features are used in the computation of the reward by Fitch's algorithm, so their use in the policy model does not introduce additional variables or extra computation. **Temperature-conditioned PhyloGFN.** The temperature \(T\) controls the trade-off between sample diversity and parsimony scores. Following Zhang et al. (2023), we extend PhyloGFN-Parsimony by conditioning the policy on \(T\), with reward \(R(z;T)=\exp\left(\frac{C-M(z|\mathbf{Y})}{T}\right)\), and we learn a sampler such that \(P^{\top}(z;T)\propto R(z;T)\). See Appendix E for more details. ### Model architecture and training **Parameterization of forward transitions.** We parameterize the forward transitions of tree topology construction using a Transformer-based neural network, whose architecture is shown in Fig. 1. We select a Transformer because the input is a set and the model needs to be order-equivariant. For a state consisting of \(n\) trees, after \(n\) embeddings are generated from the Transformer encoder, \(\binom{n}{2}\) pairwise features are created for all possible pairs of trees, and a common MLP generates probability logits for joining every tree pair. PhyloGFN-Bayesian additionally requires generating edge lengths. Once the pair of trees to join is selected, another MLP is applied to the corresponding pair feature to generate probability logits for sampling the edge lengths. More details are in Appendix D. **Off-policy training.** The action model \(P_{F}(\cdot\mid s;\theta)\) is trained with the trajectory balance objective. Training data are generated from two sources: (i) a set of trajectories constructed from the currently trained GFlowNet, with actions sampled from the policy with probability \(1-\epsilon\) and uniformly at random with probability \(\epsilon\); the \(\epsilon\) rate drops from a pre-defined \(\epsilon_{\rm start}\) to near \(0\) during the course of training; (ii) trajectories corresponding to the best trees seen to date (replay buffer), which are sampled backward from these high-reward trees with the uniform backward policy. **Temperature annealing.** For PhyloGFN-Parsimony, it is crucial to choose the appropriate temperature \(T\). Large \(T\) defines a flat target distribution, while small \(T\) makes the reward landscape less smooth and leads to training difficulties. We cannot predetermine the ideal choice of \(T\) before training, as we do not know _a priori_ the parsimony score for the dataset. Therefore, we initialize the training with large \(T\) and reduce \(T\) periodically during training. See Appendix E for details. ### Marginal log-likelihood estimation To assess how well the GFlowNet sampler approximates the true posterior distribution, we use the following importance-weighted variational lower bound on the marginal log-likelihood (MLL): \[\log P(\mathbf{Y})\geq\mathbb{E}_{\tau_{1},\dots,\tau_{K}\sim P_{F}}\log\left(P(z)\frac{1}{K}\sum_{\begin{subarray}{c}i=1\\ \tau_{i}:\,s_{0}\to\dots\to(z_{i},b_{i})\end{subarray}}^{K}\frac{P_{B}(\tau_{i}\mid z_{i},b_{i})R(z_{i},b_{i})}{P_{F}(\tau_{i})}\right). \tag{5}\] Our estimator is computed by sampling a batch of \(K\) trajectories and averaging \(\frac{P(z)P_{B}(\tau_{i}\mid z_{i},b_{i})R(z_{i},b_{i})}{P_{F}(\tau_{i})}\) over the batch. The expectation of this estimate is guaranteed to be a lower bound on \(\log P(\mathbf{Y})\) and its bias decreases as \(K\) grows (Burda et al., 2016). PhyloGFN-Bayesian models branch lengths with discrete multinomial distributions, while in reality these are continuous variables.
To properly estimate the MLL and compare with other methods defined over continuous space, we augment our model to a continuous sampler by performing random perturbations over edges of trees sampled from PhyloGFN-Bayesian. The perturbation follows the uniform distribution \(\mathcal{U}_{[-0.5\omega,\,0.5\omega]}\), where \(\omega\) is the fixed bin size for edge modeling in PhyloGFN-Bayesian. The resulting model over branch lengths is then a piecewise-constant continuous distribution. We discuss the computation details as well as the derivation of (5) in Appendix B. ## 5 Experiments We evaluate PhyloGFN on a suite of 8 real-world benchmark datasets (Table S1 in Appendix C) that is standard in the literature. These datasets feature pre-aligned sequences and vary in difficulty (27 to 64 sequences; 378 to 2520 sites). In the following sections we present our results and analysis on Bayesian and parsimony-based phylogenetic inference. ### Bayesian phylogenetic inference PhyloGFN is compared with a variety of baselines in terms of sampling-based estimates of the marginal log-likelihood (MLL; see details in §4.4). The baselines we compare to are the MCMC-based MrBayes combined with the stepping-stone sampling technique (Ronquist et al., 2012), and three variational inference methods: VBPI-GNN (Zhang, 2023), \(\phi\)-CSMC proposed in VaiPhy (Koptagel et al., 2022), and GeoPhy (Mimori & Hamada, 2023). The sampling setup for MrBayes follows Zhang & Matsen IV (2018b); for the other baselines, we show the highest MLL reported in the respective papers. The results are summarized in Table 1. Our PhyloGFN is markedly superior to \(\phi\)-CSMC across all datasets and outperforms GeoPhy on most, with the exception of DS2 and DS3, where the two perform similarly, and DS7, where GeoPhy obtains a better result. VBPI-GNN is the only machine-learning-based method that performs on par with MrBayes, the current gold standard in Bayesian phylogenetic inference. However, it should be emphasized that VBPI-GNN requires a set of pre-defined tree topologies that are likely to achieve high likelihood, and as a consequence, its training and inference are both constrained to a small space of tree topologies, thus severely undermining its applicability to, for example, postulating alternative phylogenetic theories. On the other hand, PhyloGFN operates on the full space of tree topologies and, in fact, achieves a closer fit to the true posterior distribution. To show this, for each dataset, we created three sets of phylogenetic trees with high/medium/low posterior probabilities and obtained the corresponding sampling probabilities from PhyloGFN and VBPI-GNN. The three classes of trees are generated from VBPI-GNN by randomly inserting uniformly sampled actions into its sequential tree construction process with 0%, 30%, or 50% probability, respectively, which circumvents VBPI-GNN's limitation of being confined to a small tree topology space. Table 2 and Fig. 2 show that PhyloGFN achieves higher Pearson correlation between the sampling log-probability and the unnormalized ground-truth posterior log-density for the majority of datasets and classes of trees. In particular, while VBPI-GNN performs better on high-probability trees, its correlation drops significantly on lower-probability trees. On the other hand, PhyloGFN maintains a high correlation for all three classes of trees across all datasets, the only exception being the high-probability trees in DS7. See Appendix F for details and extended results.
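The MLL values reported above come from the bound in Eq. (5). As a minimal sketch of how such an estimate is formed from a batch of sampled trajectories (illustrative only; the function, array inputs, and the numbers below are our own placeholders, not released code or results):

```python
# Minimal sketch (ours, not the released code) of the importance-weighted MLL
# lower bound of Eq. (5): draw K complete trajectories, form per-trajectory
# log-importance weights, and take a numerically stable log-mean-exp.
import numpy as np

def mll_lower_bound(log_prior_z, log_pb, log_reward, log_pf):
    """Each argument is a length-K array of per-trajectory log terms:
    log P(z_i), log P_B(tau_i | z_i, b_i), log R(z_i, b_i), log P_F(tau_i)."""
    log_w = log_prior_z + log_pb + log_reward - log_pf
    m = log_w.max()                              # stabilized log-mean-exp
    return m + np.log(np.exp(log_w - m).mean())

# toy placeholder numbers for K = 4 trajectories (not real outputs)
K = 4
est = mll_lower_bound(np.full(K, -2.0),
                      np.array([-5.1, -4.8, -5.3, -5.0]),
                      np.array([-12.3, -11.8, -13.1, -12.0]),
                      np.array([-16.9, -16.2, -17.4, -16.5]))
print(est)
```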
### Parsimony-based phylogenetic inference As a special case of Bayesian phylogenetic inference, the parsimony problem is concerned with finding the most-parsimonious trees - a task which is also amenable to PhyloGFN. Here, we compare to the state-of-the-art parsimony analysis software PAUP* (Swofford, 1998). For all datasets, our PhyloGFN and PAUP* are able to identify the same set of optimal solutions to the Large Parsimony problem, ranging from a single optimal tree for DS1 to 21 optimal trees for DS8. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{MCMC} & \multicolumn{3}{c}{ML-based / amortized, full tree space} \\ \cline{2-6} Dataset & MrBayes & VBPI-GNN* & \(\phi\)-CSMC & GeoPhy & PhyloGFN \\ \hline DS1 & \(-7108.42\pm 0.18\) & \(-7108.41\pm 0.14\) & \(-7290.36\pm 7.23\) & \(-7111.55\pm 0.07\) & \(-7108.95\pm 0.06\) \\ DS2 & \(-26367.57\pm 0.48\) & \(-26367.73\pm 0.07\) & \(-30568.49\pm 131.34\) & \(-26368.44\pm 0.13\) & \(-26368.90\pm 0.28\) \\ DS3 & \(-33735.44\pm 0.5\) & \(-33735.12\pm 0.09\) & \(-33798.06\pm 6.62\) & \(-33735.85\pm 0.12\) & \(-33735.6\pm 0.35\) \\ DS4 & \(-13330.06\pm 0.54\) & \(-13329.94\pm 0.19\) & \(-13582.24\pm 35.08\) & \(-13337.42\pm 1.32\) & \(-13331.83\pm 0.19\) \\ DS5 & \(-8214.51\pm 0.28\) & \(-8214.64\pm 0.38\) & \(-8367.51\pm 8.87\) & \(-8233.89\pm 6.63\) & \(-8215.15\pm 0.2\) \\ DS6 & \(-6724.07\pm 0.86\) & \(-6724.37\pm 0.4\) & \(-7013.83\pm 16.59\) & \(-6733.91\pm 0.57\) & \(-6730.68\pm 0.54\) \\ DS7 & \(-37332.76\pm 2.42\) & \(-73733.04\pm 0.12\) & & \(-37350.77\pm 11.74\) & \(-37359.96\pm 1.14\) \\ DS8 & \(-8649.88\pm 1.75\) & \(-8650.65\pm 0.45\) & \(-9209.18\pm 18.03\) & \(-8660.48\pm 0.78\) & \(-8654.76\pm 0.19\) \\ \hline \hline \end{tabular} \end{table} Table 1: Marginal log-likelihood estimation with different methods on real datasets DS1-DS8. PhyloGFN outperforms \(\phi\)-CSMC across all datasets and GeoPhy on most. *VBPI-GNN uses predefined tree topologies in training and is not directly comparable. Figure 2: Model sampling log-density vs. unnormalized posterior log-density for high/medium/low-probability trees on DS1. We highlight that PhyloGFN-Bayesian performs significantly better on medium- and low-probability trees, highlighting its superiority in modeling the entire data space. Although the results are similar between PhyloGFN and PAUP*, once again we emphasize that PhyloGFN is based on a rigorous mathematical framework of fitting and sampling from well-defined posterior distributions over tree topologies. whereas PAUP*'s relies on heuristic algorithms. To put it more concretely, we show in Fig. S2 that PhyloGFN is able to (i) learn a smooth echelon of sampling probabilities that distinguish the optimal trees from suboptimal ones; (ii) learn similar sampling probabilities for trees within each group of equally-parsimonious trees; and (iii) fit all \(2n-3\) rooted trees that belong to the same unrooted tree equally well. Finally, Fig. 3 shows that a single temperature-conditioned PhyloGFN can sample phylogenetic trees of varying ranges of parsimony scores by providing suitable temperatures \(T\) as input to the model. Also, PhyloGFN is able to sample proportionately from the Boltzmann distribution defined at different temperatures and achieves high correlation between sampling log-probability and log-reward. Although the temperature-conditioned PhyloGFN has only been trained on a small range of temperatures between 4.0 and 1.0, Fig. 
3 shows it can also approximate the distribution defined at temperature 8.0. Further results are presented in Appendix G. ## 6 Discussion and future work In this paper, we propose PhyloGFN, a GFlowNet-based generative modeling algorithm, to solve parsimony-based and Bayesian phylogenetic inference. We design an intuitive yet effective tree construction procedure to efficiently model the entire tree topology space. We propose a novel tree representation based on Fitch's and Felsenstein's algorithms to represent rooted trees without introducing additional learnable parameters, and we show the sufficiency of our features for the purpose of phylogenetic tree generation. We apply our algorithm on eight real datasets, demonstrating that \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{No random} & \multicolumn{2}{c}{30\% random} & \multicolumn{2}{c}{50\% random} \\ \cline{2-7} Dataset & PhyloGFN & VBPI-GNN & PhyloGFN & VBPI-GNN & PhyloGFN & VBPI-GNN \\ \hline DS1 & **0.994** & 0.955 & **0.961** & 0.589 & **0.955** & 0.512 \\ DS2 & 0.930 & **0.952** & **0.948** & 0.580 & **0.919** & 0.538 \\ DS3 & 0.917 & **0.968** & **0.963** & 0.543 & **0.950** & 0.499 \\ DS4 & 0.942 & **0.960** & **0.945** & 0.770 & **0.966** & 0.76 \\ DS5 & **0.969** & 0.965 & **0.937** & 0.786 & **0.939** & 0.773 \\ DS6 & **0.993** & 0.887 & **0.973** & 0.816 & **0.934** & 0.702 \\ DS7 & 0.624 & **0.955** & **0.787** & 0.682 & **0.764** & 0.678 \\ DS8 & **0.978** & 0.955 & **0.913** & 0.604 & **0.901** & 0.463 \\ \hline \hline \end{tabular} \end{table} Table 2: Pearson correlation of model sampling log-density and ground truth unnormalized posterior log-density for each dataset on high/medium/low posterior density trees generated by VBPI-GNN. PhyloGFN achieves a good fit on both high and low posterior density regions. Figure 3: A temperature-conditioned PhyloGFN is trained on DS1 using temperatures sampled between 4.0 and 1.0. **(A)** The distribution of parsimony scores can be controlled by varying the statistical temperature – an input variable to the PhyloGFN policy – from 8.0 to 1.0. 10,000 trees are randomly sampled at each temperature. **(B)** PhyloGFN achieves high Pearson correlation for trees sampled within each temperature range. PhyloGFN is competitive with or superior to prior works in terms of marginal likelihood estimation, while achieving a closer fit to the target distribution compared to state-of-the-art variational inference methods. Future work should consider continuous edge length sampling in place of the current binning procedure, _e.g._, using a continuous GFlowNet policy (Lahlou et al., 2023). In addition, we plan to explore the use of conditional GFlowNets to amortize the dependence on the sequence dataset itself. This would allow a single trained GFlowNet to sample phylogenetic trees for sets of sequences that were not seen during training. ## Acknowledgments We thank Oskar Kviman and Jens Lagergren for insightful discussions at the inception of this project.
2306.04870
KMT-2022-BLG-2397: Brown Dwarf at the Upper Shore of the Einstein Desert
We measure the Einstein radius of the single-lens microlensing event KMT-2022-BLG-2397 to be theta_E=24.8 +- 3.6 uas, placing it at the upper shore of the Einstein Desert, 9 < theta_E / uas < 25, between free-floating planets (FFPs) and bulge brown dwarfs (BDs). In contrast to the six BD (25 < theta_E < 50) events presented by Gould+22, which all had giant-star source stars, KMT-2022-BLG-2397 has a dwarf-star source, with angular radius theta_* ~ 0.9 uas. This prompts us to study the relative utility of dwarf and giant sources for characterizing FFPs and BDs from finite-source point-lens (FSPL) microlensing events. We find `dwarfs' (including main-sequence stars and subgiants) are likely to yield twice as many theta_E measurements for BDs and a comparable (but more difficult to quantify) improvement for FFPs. We show that neither current nor planned experiments will yield complete mass measurements of isolated bulge BDs, nor will any other planned experiment yield as many theta_E measurements for these objects as KMT. Thus, the currently anticipated 10-year KMT survey will remain the best way to study bulge BDs for several decades to come.
Andrew Gould, Yoon-Hyun Ryu, Jennifer C. Yee, Michael D. Albrow, Sun-Ju Chung, Cheongho Han, Kyu-Ha Hwang, Youn Kil Jung, In-Gu Shin, Yossi Shvartzvald, Hongjing Yang, Weicheng Zang, Sang-Mok Cha, Dong-Jin Kim, Seung-Lee Kim, Chung-Uk Lee, Dong-Joo Lee, Yongseok Lee, Byeong-Gon Park, Richard W. Pogge
2023-06-08T01:56:20Z
http://arxiv.org/abs/2306.04870v1
# KMT-2022-BLG-2397: Brown Dwarf at the Upper Shore of the Einstein Desert ###### Abstract We measure the Einstein radius of the single-lens microlensing event KMT-2022-BLG-2397 to be \(\theta_{\rm E}=24.8\pm 3.6\,\mu\)as, placing it at the upper shore of the Einstein Desert, \(9\lesssim\theta_{\rm E}/\mu\)as \(\lesssim 25\), between free-floating planets (FFPs) and bulge brown dwarfs (BDs). In contrast to the six BD (\(25\lesssim\theta_{\rm E}\lesssim 50\)) events presented by Gould et al. (2022), which all had giant-star source stars, KMT-2022-BLG-2397 has a dwarf-star source, with angular radius \(\theta_{*}\sim 0.9\,\mu\)as. This prompts us to study the relative utility of dwarf and giant sources for characterizing FFPs and BDs from finite-source point-lens (FSPL) microlensing events. We find "dwarfs" (including main-sequence stars and subgiants) are likely to yield twice as many \(\theta_{\rm E}\) measurements for BDs and a comparable (but more difficult to quantify) improvement for FFPs. We show that neither current nor planned experiments will yield complete mass measurements of isolated bulge BDs, nor will any other planned experiment yield as many \(\theta_{\rm E}\) measurements for these objects as KMT. Thus, the currently anticipated 10-year KMT survey will remain the best way to study bulge BDs for several decades to come. gravitational lensing: micro ## 1 Introduction Isolated dark1 objects can only be studied with gravitational microlensing. In principle, their masses, distances, and transverse velocities can be determined by measuring seven parameters, which can be summarized as five quantities (two of which are vectors). These are the Einstein timescale (\(t_{\rm E}\)), the angular Einstein radius (\(\theta_{\rm E}\)), the microlens parallax vector (\(\mathbf{\pi}_{\rm E}\)), and the source parallax and proper motion, (\(\pi_{S}\) and \(\mathbf{\mu}_{S}\)). Here, Footnote 1: Nothing in the universe is truly “dark”. All objects with a surface reflect ambient light, and even black holes emit Hawking (1975) radiation. By “dark”, we mean specifically: “undetectable with current or planned instruments”. \[t_{\rm E}=\frac{\theta_{\rm E}}{\mu_{\rm rel}};\qquad\theta_{\rm E}=\sqrt{ \kappa M\pi_{\rm rel}};\qquad\kappa\equiv\frac{4\,G}{c^{2}\,{\rm au}}\simeq 8.14\,\frac{{\rm mas}}{M_{\odot}}, \tag{1}\] and \[\mathbf{\pi}_{\rm E}=\pi_{\rm E}\frac{\mathbf{\mu}_{\rm rel }}{\mu_{\rm rel}};\qquad\pi_{\rm E}\equiv\frac{\pi_{\rm rel}}{\theta_{\rm E}}, \tag{2}\] where \(M\) is the lens mass, (\(\pi_{\rm rel},\mathbf{\mu}_{\rm rel}\)) are the lens-source relative (parallax, proper-motion), and \(\mu_{\rm rel}\equiv|\mathbf{\mu}_{\rm rel}|\). Then, the lens mass, distance \(D_{L}\), and transverse velocity \({\bf v}_{\perp}\) are given by \[M=\frac{\theta_{\rm E}}{\kappa\pi_{\rm E}};\qquad D_{L}=\frac{{\rm au}}{\pi_{ \rm rel}+\pi_{S}};\qquad{\bf v}_{\perp}=D_{L}(\mathbf{\mu}_{S}+\mbox {\boldmath$\mu$}_{\rm rel,hel}), \tag{3}\] where \[\pi_{\rm rel}=\theta_{\rm E}\pi_{\rm E};\qquad\mathbf{\mu}_{\rm rel, hel}=\mathbf{\mu}_{\rm rel}+\frac{\pi_{\rm rel}}{{\rm au}}{\bf v}_{ \oplus,\perp};\qquad\mathbf{\mu}_{\rm rel}=\frac{\mathbf{\pi}_ {\rm E}}{\pi_{\rm E}}\frac{\theta_{\rm E}}{t_{\rm E}} \tag{4}\] and where \({\bf v}_{\oplus,\perp}\) is Earth's orbital velocity at the peak of the event projected on the sky. 
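To make these relations concrete, the following short script (ours, for illustration only; the numerical inputs are placeholders rather than measurements) evaluates the mass and distance relations of Equation (3), using the definitions in Equations (1) and (2), for an assumed set of observables:

```python
# Illustrative only (not from the paper): convert measured microlensing
# quantities into lens mass and distance via Equations (1)-(3).
KAPPA = 8.14  # mas / M_sun

def lens_mass_distance(theta_E_mas, pi_E, pi_S_mas):
    """theta_E in mas, pi_E dimensionless, source parallax pi_S in mas.
    Returns (lens mass in M_sun, lens distance in kpc)."""
    pi_rel = theta_E_mas * pi_E                   # mas
    mass = theta_E_mas**2 / (KAPPA * pi_rel)      # equivalently theta_E / (kappa * pi_E)
    d_lens = 1.0 / (pi_rel + pi_S_mas)            # kpc, since D[kpc] = 1 / pi[mas]
    return mass, d_lens

# placeholder values loosely resembling a bulge brown-dwarf lens:
# theta_E = 25 uas = 0.025 mas, pi_E = 0.4, pi_S = 0.12 mas (D_S ~ 8.3 kpc)
print(lens_mass_distance(0.025, 0.4, 0.12))  # ~0.008 M_sun (~8 M_jup) at ~7.7 kpc
```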
After 30 years of dedicated surveys that have cataloged more than 30,000 microlensing events, such complete characterizations have been carried out for exactly one isolated dark object: OGLE-2011-BLG-0462/MOA-2011-BLG-191, which is a black hole (BH, Sahu et al., 2022; Lam et al., 2022) with mass \(M=7.88\pm 0.82\,M_{\odot}\), distance2\(D_{L}=1.62\pm 0.15\,\)kpc, and \(v_{\perp}=43.4\pm 3.8\,\)km s\({}^{-1}\), with a direction (Galactic North through East) of \(\phi=-17^{\circ}\)(Mroz et al., 2022). Footnote 2: In their abstract, Mroz et al. (2022) quote a shorter distance based on an incorrect estimate of \(\pi_{S}\) that was adopted from Sahu et al. (2022), but they correct this in the penultimate paragraph of the main body of their paper. If \(\mathbf{\mu}_{S}\) cannot be measured or if the direction of \(\mathbf{\mu}_{\rm rel}\) is ambiguous (and if \(\pi_{S}\) can be measured or adequately estimated), then one loses the measurement of \({\bf v}_{\perp}\) but still retains those of \(M\) and \(D_{L}\). There are three other such mass measurements of isolated dark objects, all brown dwarfs (BDs). These are OGLE-2007-BLG-224, with \(M=58\pm 4\,M_{\rm jup}\), \(D_{L}=0.52\pm 0.04\,\)kpc (Gould et al., 2009) and OGLE-2015-BLG-1268, with \(M=47\pm 7\,M_{\rm jup}\) and \(D_{L}=5.9\pm 1.0\,\)kpc (Zhu et al., 2016), and OGLE-2017-BLG-0896 (Shvartzvald et al., 2019), with \(M=19\pm 2\,M_{\rm Jup}\), and \(D_{L}=4.0\pm 0.2\,\)kpc. For OGLE-2007-BLG-224, the lens is so much closer to the Sun than the source and has such high \(\mu_{\rm rel,hel}=43\,\)mas yr\({}^{-1}\), that \(\mathbf{\mu}_{S}\) plays very little role in assessing the kinematics of the lens. Moreover, its source star is in the _Gaia_(Gaia Collaboration et al., 2016, 2018) DR3 catalog. Although it does not have a proper-motion measurement in DR3, its proper motion could be measured in future _Gaia_ data releases. However, the other two events both have ambiguous directions for \(\mathbf{\mu}_{\rm rel}\), so the transverse velocities will remain unknown even if \(\mathbf{\mu}_{S}\) is later measured. These four mass-distance determinations of isolated dark objects nicely illustrate the three methods that have been used to date to measure \(\mathbf{\pi}_{\rm E}\), as well as two of the three methods that have been used to measure \(\theta_{\rm E}\). For OGLE-2011-BLG-0462, \(\mathbf{\pi}_{\rm E}\) was measured by annual parallax (Gould, 1992), while \(\theta_{\rm E}\) was measured using astrometric microlensing (Walker, 1995; Hog et al., 1995; Miyamoto & Yoshii, 1995). For OGLE-2007-BLG-224, \(\mathbf{\pi}_{\rm E}\) was measured by terrestrial parallax (Holz & Wald, 1996; Gould, 1997), while \(\theta_{\rm E}\) was measured from finite-source effects as the lens transited the source (Gould, 1994a; Witt & Mao, 1994; Nemiroff & Wickramasinghe, 1994). For OGLE-2015-BLG-1268 and OGLE-2017-BLG-0896, \(\mathbf{\pi}_{\rm E}\) was measured by satellite parallax (Refsdal, 1966; Gould, 1994b), while \(\theta_{\rm E}\) was again measured from finite-source effects. The other method that has been used to measure \(\theta_{\rm E}\) (but not yet applied to isolated dark objects) is interferometric resolution of the microlensed images (Delplancke et al., 2001; Dong et al., 2019; Cassan et al., 2021). For completeness, we note that there are two other events with mass measurements whose BD nature could be confirmed (or contradicted) by future adaptive-optics (AO) observations on extremely large telescopes (ELTs). 
One of these is OGLE-2015-BLG-1482 (Chung et al., 2017), which has two solutions, one with a BD lens (\(M=57\,M_{\rm jup}\), \(D_{L}=7.5\,{\rm kpc}\)) and the other with a stellar lens (\(M=100\,M_{\rm jup}\), \(D_{L}=7.2\,{\rm kpc}\)). If the latter is correct, then \(\mu_{\rm rel}\simeq 9\,{\rm mas\,yr^{-1}}\), so that at first AO light on ELTs, plausibly 2030, the lens and the clump-giant source will be separated by about 135 mas, which would permit the lens to be detected for the stellar-mass solution. Hence, if the lens is not detected, then the BD solution will be correct. The other is OGLE-2016-BLG-1045 (Shin et al., 2018), which has a unique solution at the star-BD boundary, (\(M=0.08\pm 0.01\,M_{\odot}\), \(D_{L}=5.02\pm 0.14\,{\rm kpc}\)). If the lens is a star (and so is luminous), then it can be detected in AO observations after it is sufficiently separated from the source. However, because the source is a giant, the proper motion is only \(\mu_{\rm rel}\sim 7\,{\rm mas\,yr^{-1}}\), and a stellar lens must be detectable down to very faint magnitudes in order to confirm a possible BD, this may not be feasible immediately after first AO light on ELTs, depending on the performance of these instruments. As we will discuss in Section 7, there are good long-term prospects for obtaining complete solutions for isolated dark objects in both regimes, i.e., both dark massive remnants and dark substellar objects. However, while the new instruments required to make progress in the high-mass regime are already coming on line, those required for the low-mass regime are several years to several decades away. Hence, for the present, techniques that yield only partial information are still needed to probe substellar-object populations. Two such methods have been developed to date: analysis of the \(t_{\rm E}\) distribution of detected microlensing events (Sumi et al., 2011; Mroz et al., 2017), and analysis of the \(\theta_{\rm E}\) (and \(\mu_{\rm rel}=\theta_{\rm E}/t_{\rm E}\)) distributions of the subset of single-lens events that have such measurements, so-called finite-source point-lens (FSPL) events (Kim et al., 2021; Gould et al., 2022). Each approach has its advantages and disadvantages. Before comparing these, it is important to note that both approaches must rely on Galactic models to interpret their results because the masses and distances of individual objects cannot be determined from either \(t_{\rm E}\) or \((\theta_{\rm E},\mu_{\rm rel})\) measurements alone. The great advantage of the \(t_{\rm E}\) approach is that very large samples of events are available, even accounting for the fact that only fields subjected to high-cadence observations can contribute substantially to the detection of low-mass objects. In particular, Mroz et al. (2017) leveraged this advantage to make the first detection of six members of a population of short, \(t_{\rm E}\sim 0.5\,{\rm day}\), events, which was separated by a gap from the main distribution of events, and which they suggested might be due to a very numerous class of free-floating planets (FFPs). However, while this new population is clearly distinct in \(t_{\rm E}\) space, it is difficult to constrain its properties based on \(t_{\rm E}\) measurements alone. In particular, there is no reason, a priori, to assume that its density distribution follows that of the luminous stars that define the Galactic model. 
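For orientation, the scales involved follow directly from Equation (1); the sketch below (our own illustrative numbers, not values taken from the surveys discussed above) shows why a Jupiter-mass bulge lens produces a \(t_{\rm E}\sim 0.5\,\)day event while a bulge BD produces an event of a few days, on the other side of the gap:

```python
# Illustrative scales only: Einstein radius and timescale from Equation (1),
# theta_E = sqrt(kappa * M * pi_rel), t_E = theta_E / mu_rel, for a bulge lens.
import math

KAPPA = 8.14          # mas / M_sun
M_JUP = 9.54e-4       # M_sun
PI_REL = 0.010        # mas, assumed bulge-bulge relative parallax
MU_REL = 6.0          # mas/yr, assumed bulge-bulge relative proper motion

for label, mass in [("~Jupiter-mass FFP", 1 * M_JUP), ("~30 M_jup BD", 30 * M_JUP)]:
    theta_E = math.sqrt(KAPPA * mass * PI_REL)    # mas
    t_E = theta_E / MU_REL * 365.25               # days
    print(f"{label}: theta_E ~ {theta_E * 1e3:.0f} uas, t_E ~ {t_E:.1f} d")
# -> roughly 9 uas and 0.5 d for the FFP, versus ~48 uas and ~3 d for the BD
```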
The events just on the larger-duration side of the \(t_{\rm E}\) gap are almost certainly dominated by the lowest-mass part of the stellar-BD population. Because the luminous-star component of this distribution can be studied by other techniques, models of the luminous component can provide powerful constraints that facilitate disentangling the BD signal within the \(t_{\rm E}\) distribution, which is necessarily a convolution of BD and stellar components. Nevertheless, because the BD component may differ substantially in the Galactic bulge and/or the distant disk relative to the local one that can be directly studied, it may be difficult to disentangle the different populations, given that there is little information on the mass and distance of each lens and that these are convolved with the kinematics. See Equation (1). The \(\theta_{\rm E}\) approach, by contrast, has several orders of magnitude fewer events. For example, Gould et al. (2022) recovered just 30 giant-source FSPL events from 4 years of Korean Microlensing Telescope Network (KMTNet, Kim et al., 2016) data compared to an underlying sample of about 12,000 events. Nevertheless, this approach has several major advantages, particularly in the study of low-mass objects. The most important is that for a source of fixed angular radius, \(\theta_{*}\), the rate of FSPL events scales as \(\theta_{*}\)(i.e., is independent of lens mass, Gould & Yee, 2013), whereas the rate of microlensing events in general scales \(\propto\theta_{\rm E}\sim\sqrt{M}\). Thus, among the 30 FSPL events found by Gould et al. (2022), four were from the same FFP population from which Mroz et al. (2017) identified six members from a much larger sample. As in the Mroz et al. (2017) \(t_{\rm E}\) sample, these were separated by a gap from the main body of detections, which spans a factor of three in \(\theta_{\rm E}\) and which Kim et al. (2021) dubbed the "Einstein Desert". See Figure 4 from Gould et al. (2022). A second major advantage, which follows from the same \(\propto\theta_{*}\) scaling, but is perhaps less obvious, is that FSPL events are heavily weighted toward bulge lenses (at least for those above the Einstein Desert). This is because, just as \(\theta_{\rm E}\propto\sqrt{M}\), it is equally the case that \(\theta_{\rm E}\propto\sqrt{\pi_{\rm rel}}\). This effect can be seen in Figure 9 of Gould et al. (2022). Hence, in an FSPL sample (above the Einstein Desert), one is primarily studying bulge lenses, which renders the interpretation cleaner. The combination of these two effects implies that the six events found by Gould et al. (2022) in the range \(25<\theta_{\rm E}/\mu{\rm as}<50\) are likely to be overwhelmingly bulge BDs, with possibly some contamination by very low-mass stars. That is, scaling to a characteristic bulge-lens/bulge-source relative parallax3\(\pi_{\rm rel}\sim 10\,\mu\)as (corresponding to a distance along the line of sight \(D_{LS}\equiv D_{S}-D_{L}\sim 650\,\)pc), Footnote 3: Note that for the parent population of microlensing events, the characteristic separation is closer to \(D_{LS}\sim 1\,\)kpc, corresponding to \(\pi_{\rm rel}\sim 16\,\mu\)as because larger separations are favored by the \(\theta_{\rm E}\propto\sqrt{\pi_{\rm rel}}\) cross section. However, FSPL events do not have this cross-section factor. \[M=\frac{\theta_{\rm E}^{2}}{\kappa\pi_{\rm rel}}=32\,M_{\rm jup}\Big{(}\frac{ \theta_{\rm E}}{50\,\mu{\rm as}}\Big{)}^{2}\Big{(}\frac{\pi_{\rm rel}}{10\,\mu{ \rm as}}\Big{)}^{-1}. 
\tag{5}\] This brings us to the next advantage, which is that it is relatively straightforward to vet an FSPL sample of BD candidates for stellar "contamination". That is, in contrast to the underlying sample of events (with only \(t_{\rm E}\) measurements), \(\mu_{\rm rel}\) is known for each FSPL event. Hence, it is possible to know in advance both how long one must wait for AO observations that could potentially see the lens (assuming that it is luminous) and what is the annulus around the source that must be investigated in the resulting image. Such AO followup observations would also reveal whether the BD was truly isolated. In fact, Shan et al. (2021) carried out this test for OGLE-2007-BLG-224 and ruled out the possibility of a main sequence lens. Moreover, because there are relatively few BD/stellar FSPL objects in the Gould et al. (2022) sample, which likely includes roughly 8 BDs, one could afford to probe a substantially larger fraction of this sample than just the events that are most likely to be BDs. The lenses that were thereby revealed to be luminous would in themselves be useful because they could confirm (or contradict) the predictions of the Galactic model. That said, this third advantage is somewhat compromised for the giant-source FSPL sample of Gould et al. (2022) because even the 10-15 year interval between the events and ELT AO first light may not be enough for the source and lens to separate sufficiently to probe to the hydrogen burning limit. Given the source-magnitude distribution shown in Figure 1 of Gould et al. (2022), contrast ratios of \(\gtrsim 10^{4}\) in the \(K\) band would be required. While we cannot be too precise about instruments that have not yet been built, scaling from AO on Keck, this will probably require separations of \(\gtrsim 100\,\)mas even on the 39m European ELT. From Figure 5 of Gould et al. (2022), one sees that this will not occur until well after 2030 for many of the BD candidates. Here, we present KMT-2022-BLG-2397, the second4 FSPL bulge-BD candidate with a dwarf-star source. Because this source is about 4 mag fainter than the clump, the required contrast ratio will be about 250 rather than \(\gtrsim 10^{4}\). Based on its measured proper motion, \(\mu_{\rm rel}=6.7\pm 1.0\,{\rm mas\,yr}^{-1}\), the source and lens will be separated by about 50 mas in 2030 and 85 mas in 2035. The first will be sufficient to probe for typical luminous companions, while the second will be good enough to probe to the hydrogen-burning limit. After presenting the analysis of this event, we discuss the prospects for identifying a larger sample of bulge-BD candidates with dwarf-star sources in context of the ongoing development of other possible competing methods to probe bulge BDs. Relatedly, after the initial draft of this paper was essentially complete, Koshimoto et al. (2023) and Sumi et al. (2023) posted two papers on an FSPL search carried out based on 9 years of MOA data and combining measurements of the \(t_{\rm E}\) and \(\theta_{\rm E}\) distributions. This search is highly relevant but does not materially impact our investigations. Hence, we discuss this work in detail in Section 7.4, but do not otherwise modify the main body of the paper (except for adding footnotes 4 and 6). ## 2 Observations KMT-2022-BLG-2397 lies at (RA, Decl)\({}_{J2000}=\)(18:02:19.73,\(-\)26:53:28.10), corresponding to \((l,b)=(+3.64,-2.15)\). 
It was discovered using the KMT EventFinder (Kim et al., 2018) system, which is applied post-season to the data taken with KMTNet's three identical telescopes in Australia (KMTA), Chile (KMTC), and South Africa (KMTS). These telescopes have 1.6m mirrors and are equipped with \(4\,{\rm deg}^{2}\) cameras. The observations are primarily in the \(I\) band, but after every tenth such exposure, there is one in the \(V\) band for the purpose of measuring the source color. The event lies in the overlap of KMT fields BLG03 and BLG35. For KMTC, these are observed at nominal cadences of \(\Gamma=2\,{\rm hr}^{-1}\) and \(\Gamma=0.4\,{\rm hr}^{-1}\), while for KMTS and KMTA, \(\Gamma=3\,{\rm hr}^{-1}\) and \(\Gamma=0.3\,{\rm hr}^{-1}\) for the respective fields. The data were initially reduced using the KMT pySIS (Albrow et al., 2009) pipeline, which is a specific realization of difference image analysis (DIA, Tomaney & Crotts, 1996; Alard & Lupton, 1998) tailored to the KMT data. They were re-reduced for publication using a tender loving care (TLC) version of pySIS (Yang et al. 2023, in prep). Note that KMT-2022-BLG-2397 was not discovered by the in-season KMT AlertFinder (Kim et al., 2018) system, which is operated once per weekday during most of the observing season and which is responsible for a substantial majority of all KMT event detections. As the purpose of this system is to identify events that are suitable for follow-up observations, it is tuned to rising events. The KMT-2022-BLG-2397 microlensing event was undetectable in the data until about one hour before peak. Hence, by the time it would normally have been subjected to AlertFinder analysis 6 hours later, it was already falling. Moreover, this was on a Sunday, so there was no actual analysis until Monday, when the event was essentially at baseline. The KMTA data from BLG35 are of very poor quality and so are not included in the analysis. This exclusion has almost no effect for three reasons. First, the cadence for BLG03 is 10 times higher. Second, the KMTA data fall on an unconstraining part of the light curve. Third, conditions were poor on the one night when KMTA data would be relevant at all. KMT-2022-BLG-2397 was recognized as a potentially interesting event during the initial review of the roughly 500 new EventFinder events from 2022. ## 3 Light Curve Analysis Figure 1 shows a very short, classic FSPL event, in which the rising light curve first steepens as the lens starts to transit the source, and then flattens over peak, followed by a symmetric decline. The residuals show no systematic deviations over the peak. The peak is captured entirely by KMTS data, with 19 points from BLG03 and 2 from BLG35, over a total of 5.6 hours. The inset shows a point-source point-lens (PSPL) fit in black for comparison. It clearly cannot match the sharp rise and fall on the wing, nor the flattening over the peak, of the data (and the FSPL model). Formally it is rejected by \(\Delta\chi^{2}=128\). In addition, the source magnitude according to the PSPL model (when converted to the calibrated OGLE-III system, see Section 4) is substantially brighter than the baseline object from the OGLE-III catalog. This is further evidence against the PSPL model, but a full explanation would involve additional complications, and as it is not necessary for the rejection of the PSPL model, we do not pursue it. Table 1 shows the five fit parameters of the model \((t_{0},u_{0},t_{\rm E},\rho,I_{S})\). 
Here \(t_{0}\) is the time of the peak, \(u_{0}\) is the impact parameter normalized to \(\theta_{\rm E}\), \(\rho=\theta_{*}/\theta_{\rm E}\), and \(I_{S}\) is the source magnitude in the KMTC03 system. In addition, we show the four "invariant" parameters, \(t_{\rm eff}\equiv u_{0}t_{\rm E}\), \(t_{*}\equiv\rho t_{\rm E}\), \(z_{0}\equiv u_{0}/\rho\), and \(f_{S}t_{\rm E}\), where \(f_{S}\equiv 10^{-0.4(I_{S}-18)}\). In fact, in order to emphasize several key points, we have put the nonstandard parametrization \((t_{0},t_{\rm E},t_{\rm eff},t_{*},f_{S}t_{\rm E})\) in the top rows of Table 1. The first point is that the unit of all five of these parameters is time. Second, the last three of these parameters each have errors of \(\lesssim 2.5\%\) compared to the \(\sim 9\%\) errors of the corresponding standard parameters \((u_{0},\rho,f_{S})\) from which they are derived. Thus, these larger errors are rooted in their correlation with \(t_{\rm E}\), whose error is \(8\%\). Finally, while the error in \(t_{0}\) appears to be impressively small (just 30 seconds), in fact this does not enable any precision physical measurements. For example, during this seemingly short interval, the lens sweeps across the observer plane by a distance \(({\rm au}/\pi_{\rm rel})\mu_{\rm rel}\sigma(t_{0})\sim 0.15\,R_{\oplus}(\pi_{\rm rel}/{\rm mas})^{-1}\). Hence, for the great majority of lenses, which have \(\pi_{\rm rel}\lesssim 0.1\,\)mas, there could not be even a \(1\,\sigma\) terrestrial parallax measurement (even assuming that the event had been observed over peak at another Earth location). We also note that while the error in the impact parameter \(u_{0}\) is 9%, its value relative to the source size, i.e., \(z_{0}\equiv u_{0}/\rho\), is measured to 2.4%. ## 4 Source Properties Because \(\rho\) was measured, we can, in principle, use standard techniques (Yoo et al., 2004) to determine the angular source radius, \(\theta_{*}\), and so infer \(\theta_{\rm E}\) and \(\mu_{\rm rel}\): \[\theta_{\rm E}=\frac{\theta_{*}}{\rho};\qquad\mu_{\rm rel}=\frac{\theta_{\rm E}}{t_{\rm E}}. \tag{6}\] In this approach, one measures the offset \(\Delta[(V-I),I]\) of the source star relative to the clump and adds in the known dereddened color and magnitude of the clump \([(V-I),I]_{\rm cl,0}=(1.06,14.34)\) (Bensby et al., 2013; Nataf et al., 2013) to obtain the dereddened position of the source \([(V-I),I]_{\rm s,0}=[(V-I),I]_{\rm cl,0}+\Delta[(V-I),I]_{S,0}\). Then one applies a color/surface-brightness relation to determine \(\theta_{*}\). In our case, we use the \(V/K\) relations of Kervella et al. (2004) by first applying the \(VIK\) color-color relations of Bessell & Brett (1988) to transform from \(V/I\) to \(V/K\). However, the practical implementation of this approach poses greater challenges than is usually the case. The first challenge is that there is only one well-magnified \(V\)-band measurement to be used to measure \((V-I)_{S}\). This is partly a consequence of the fact that the event is extremely short (\(t_{\rm E}\sim 1.3\,\)day), that the source is faint (\(I_{S}\sim 21\)), that it is substantially magnified for only a few hours, and that only 9% of the observations are in the \(V\)-band. However, in this case, these problems were exacerbated by what is essentially a bug in the observation-sequence script.
The script alternates between a series (roughly one per night) of 176 observations that contain no BLG02 or BLG03 \(V\)-band observations and another series that contains one \(V\)-band observation per five \(I\)-band observations. In the great majority of cases, this makes essentially no difference because events in these fields also lie in BLG42 or BLG43, for which the pattern is reversed. And the great majority of the events that do not have such overlap remain near their peak magnification for many days. However, neither of these "failsafes" applied to KMT-2022-BLG-2397. Fortunately, one of the two BLG35 \(I\)-band observations was complemented by a \(V\)-band observation, and it happened to be right at the peak of the event. See Figure 1. This permits a measurement of the \((V-I)\) color, but without a second observation as a check. While there is an additional \(V\)-band observation from KMTC03 on the falling wing, the source was by that time too faint to permit a useful measurement. A second major issue is that there is an unusually large amount of differential reddening in the neighborhood of the source. Figure 2 shows the clump region of the OGLE-III (Szymanski et al., 2011) color-magnitude diagram (CMD), centered on the event and with radii of 200\({}^{\prime\prime}\), 100\({}^{\prime\prime}\), 60\({}^{\prime\prime}\), and 30\({}^{\prime\prime}\). The red circle indicates our ultimately-adopted position of the clump centroid (see below), while the magenta line in the upper-left panel gives our estimate of the reddening track. It has a slope of \(R_{VI}\equiv\Delta A_{I}/\Delta E(V-I)=1.52\). Within the 200\({}^{\prime\prime}\) circle, the clump is quite extended and its centroid is substantially brighter and bluer than our ultimately-adopted position. For the 100\({}^{\prime\prime}\) circle, the clump remains extended, but its centroid is closer to our adopted position. For the 60\({}^{\prime\prime}\) circle, the clump centroid is close to our adopted position, although it is already very thinly populated. The 30\({}^{\prime\prime}\) circle confirms that the clump centroid is well localized near our adopted position, although one would not try to measure the clump position based on this panel alone. Figure 3 shows the full CMD within the 60\({}^{\prime\prime}\) circle. The blue and green points represent the position of the source as determined from the KMTS35 and KMTC03 fields respectively. The latter only qualitatively confirms the source color, but it does give an independent measurement of the source magnitude. We determined the source CMD parameters as follows. First, we reduced these two data sets using pyDIA (Albrow, 2017), which yields light-curve and field-star photometry on the same system. Next, we evaluated \(V_{\rm S,KMT}\) and \(I_{\rm S,KMT}\) by regression on the best-fit model from Section 3. Note that simple regression of the two light curves on each other should not be used to determine \((V-I)_{S}\) because the two bands are affected by different limb darkening when the lens is transiting the source, which is true of the one point that completely dominates the signal. We specify that we adopted linear limb-darkening coefficients \(\Gamma_{I}=0.440\) and \(\Gamma_{V}=0.621\), corresponding to a \(T=5500\)K star. However, we also note that the difference relative to the regression method is small compared to the statistical errors. 
We then determine the transformation from each of the two KMT instrumental systems to the OGLE-III calibrated system by matching their respective field stars. We find, in the OGLE-III calibrated system, \([(V-I),I]_{\rm cl}=(3.40,17.03)\pm(0.03,0.04)\) (where we have not yet included the effects of differential reddening), \([(V-I),I]^{\rm calib}_{\rm S,KMTS35}=(3.19,20.64)\pm(0.08,0.09)\), and \([(V-I),I]^{\rm calib}_{\rm S,KMTC03}=(3.14,20.72)\pm(0.36,0.09)\). We adopt \([(V-I),I]_{S}=(3.19,20.68)\pm(0.08,0.09)\), and so \([(V-I),I]_{S,0}=(0.85,17.99)\pm(0.09,0.10)\). Following the above-mentioned procedures of Yoo et al. (2004), this yields, \(\theta_{*}=0.92\pm 0.13\,\mu\)as, where we have added 5% in quadrature to account for systematic errors in the method. We estimate a possible additional uncertainty due to differential reddening as follows. If there is more (or less) extinction than we have estimated, \(\Delta A_{I}\), then the inferred dereddened CMD position of the source will be brighter and bluer (or fainter and redder) than we have estimated. The combined effect is that our estimate of \(\theta_{*}\) would then be displaced by \[\frac{d\ln\theta_{*}}{dA_{I}}=\frac{\ln 10}{5}-\frac{d\ln\theta_{*}/d(V-I)_{0}}{ R_{VI}}\rightarrow-0.46, \tag{7}\] where we have evaluated \(d\ln\theta_{*}/d(V-I)_{0}=1.4\) using the same procedures as above. Based on Figure 2, we estimate \(\sigma(A_{I})=0.1\) and therefore a contribution to \(\sigma(\ln\theta_{*})\) of 4.6%. Adding this in quadrature, we finally adopt \(\theta_{*}=0.92\pm 0.14\,\mu\)as, and hence \[\theta_{\rm E}=24.8\pm 3.6\,\mu{\rm as};\qquad\mu_{\rm rel}=6.69\pm 0.96\,{\rm mas \,yr^{-1}}. \tag{8}\] Next, we compare the source star to the baseline object as given by the OGLE-III catalog, in terms of both flux and astrometric position. First, the magnitude of the source on the OGLE-III system is \(I_{S}=20.68\pm 0.09\), while the baseline object from the OGLE-III catalog has \(I_{\rm base}=20.58\). That is, there is no evidence for blended light. In contrast to the situation for the source, the error on the baseline flux is driven primarily by surface brightness fluctuation due to undetected faint stars. Hence, blended light of \(f_{B}\lesssim 0.3\,f_{S}\) cannot be ruled out. If the putative blend were in the bulge, then this limit would still permit all main-sequence stars with masses \(M_{B}\lesssim 0.9\,M_{\odot}\). Therefore, this limit is only mildly constraining. We transform the source position (derived from centroiding the difference images of the magnified source) to the OGLE-III system for each of the KMTC03 and KMTS35 reductions. These differ by \(\Delta\mathbf{\theta}(E,N)=(24,27)\,{\rm mas}\), which leads to a rough estimate of the combined error of the pyDIA measurements and transformation of \(\sigma\simeq 18\,{\rm mas}\) for each measurement. Then comparing the average of these two determinations with the position of the baseline object, we find \(\mathbf{\theta}_{\rm base}-\mathbf{\theta}_{S}=(76,50)\,{ \rm mas}\). This difference is much larger than the 13 mas standard error of the mean of the two source-position measurements. In principle, the difference could be due to astrometric errors in the OGLE-III measurement of this faint star, which is also affected by surface brightness fluctuations. However, it is also possible that the baseline object has moved by \(\sim 90\,{\rm mas}\) relative to the bulge frame during the 16 years between the epoch of the OGLE-III catalog and the time of the event. 
In brief, all available information is consistent with the baseline object being dominated by light from the source. ## 5 Nature of the Lens In principle, the lens could lie anywhere along the line of sight, i.e., at any lens-source relative parallax, \(\pi_{\rm rel}\). Applying the scaling relation of Equation (5) to the result from Equation (8) yields \[M=\frac{\theta_{\rm E}^{2}}{\kappa\pi_{\rm rel}}=8\,M_{\rm jup}\Big{(}\frac{\pi_{\rm rel}}{10\,\mu{\rm as}}\Big{)}^{-1}. \tag{9}\] Thus, if the lens is at the characteristic \(\pi_{\rm rel}=10\,\mu\)as of bulge lenses (for FSPL events), then it is formally a "planet" in the sense that it lies below the deuterium-burning limit. However, as the expected distribution of FSPL bulge-bulge microlensing events is roughly uniform in \(\pi_{\rm rel}\), it could also be more massive than this limit (so, formally, a "BD"). On the other hand, if the relative parallax had a value more typical of disk lenses, \(\pi_{\rm rel}\gtrsim 50\,\mu\)as, then the lens mass would be \(M\lesssim 1.6\,M_{\rm jup}\), i.e., clearly planetary. Nevertheless, the main interest of this object is how it relates to the Gould et al. (2022) statistical sample of FSPL events. The fact that KMT-2022-BLG-2397 is right at the upper shore of the Einstein Desert suggests that it is part of the dense population of objects lying just above this shore. As discussed in Section 1 with respect to Equation (5), these are likely to be primarily (or entirely) bulge BDs. The fact that their distribution is suddenly cut off implies a steeply rising mass function. Regardless of whether the threshold for this rise is above or below the deuterium-burning limit, its existence points to a formation mechanism that is distinct from planets, including FFPs. Therefore, the main question regarding KMT-2022-BLG-2397 is not its exact nature, but rather whether and how the ensemble of objects like it, i.e., low-\(\theta_{\rm E}\) FSPL events with dwarf-star sources, can contribute to our understanding of the BDs and FFPs that lie concentrated, respectively, above and below the Einstein Desert. ## 6 Limits on Hosts If the BD candidate had a host that was sufficiently close, it could leave trace features on the light curve, either a long-term "bump" directly due to the host or subtle distortions to the FSPL profile. We search for evidence of such features by a grid search over the three additional parameters required to describe binary-lens systems, \((s,q,\alpha)\). Here, \(q\) is the mass ratio of the two components, \(s\) is their projected separation in units of their combined Einstein radius (which is \(\sqrt{q+1}\) times larger than the Einstein radius associated with the FSPL event), and \(\alpha\) is the angle between the lens-source relative motion and the binary axis. We find that all such hosts with \(s<6.3\) are excluded. In fact, many hosts with \(s\lesssim 10\) are excluded, but there is a small "island" near \((s,q)\sim(6.6,20)\) that cannot be excluded (see Figure 4), and that is nominally favored (after a full parameter search seeded at the grid-point values) by \(\Delta\chi^{2}\sim-2\) for 3 degrees of freedom. Because this is less than the improvement expected from pure Gaussian noise, it cannot be regarded as evidence in favor of a host. The projected separation of this putative host would be approximately \(\Delta\theta\simeq s\sqrt{q+1}\theta_{\rm E,FSPL}=0.75\,\)mas.
Hence, if this putative host were detected in future high-resolution imaging (after the source and lens have separated on the sky), it would probably not be possible to distinguish its position from that of the FSPL object. Hence, it would not be possible to rule out that the detected star was the FSPL object rather than its host. For example, if the detected star were a late M dwarf in the bulge, \(M_{\rm star}=0.15\,M_{\odot}\), it could be either the host of the FSPL object, in which case the latter would have mass \(M_{\rm FSPL}\sim 8\,M_{\rm Jup}\) and with a very typical \(D_{LS}\sim 0.7\,\)kpc, or it could be the FSPL object itself, in which case it would have a very atypical \(D_{LS}\sim 30\,\)pc. Here we merely mention these possibilities in order to alert future observers to their existence. Unless and until there is a detection of stellar light that is close to the position of the FSPL object, it is premature to speculate on its interpretation. ## 7 Discussion KMT-2022-BLG-2397 was discovered serendipitously, i.e., in the course of by-eye perusal of the 2022 EventFinder sample. In contrast to the 10 FSPL giant-source events with \(\theta_{\rm E}<50\,\mu\)as summarized by Gould et al. (2022), it is not part of a systematic sample and therefore cannot be used to make statistical statements about the underlying populations of dark isolated objects. As conducting such systematic searches requires vastly greater effort than finding and analyzing some interesting events, it is appropriate to ask whether such a sample is likely to be worth the effort. This question can be broken down into three parts: * would such a survey likely contribute substantially to the total number of such small-\(\theta_{\rm E}\) events? (Section 7.1); * would they contribute qualitatively different information relative to the existing giant-source sample? (Section 7.2); * is it likely that the enhanced numbers and/or improved quality could be obtained before better experiments come on line to attack the same underlying scientific issues? (Section 7.3) Before addressing these questions, we note that of the 10 small-\(\theta_{\rm E}\) giant-source FSPL events from Gould et al. (2022), five were published either before or independent of the decision by Kim et al. (2021) and Gould et al. (2022) to obtain a complete sample of giant-source FSPL events. These included two of the four FFP candidates (OGLE-2016-BLG-1540 and OGLE-2019-BLG-0551, Mroz et al., 2018, 2020a) and three of the six BD candidates (MOA-2017-BLG-147, MOA-2017-BLG-241, Han et al., 2020, and OGLE-2017-BLG-0560, Mroz et al., 2019). Thus, serendipitous detections can play an important role in motivating systematic searches. ### Relative Detectability of FSPL Events From Dwarf and Giant Sources We begin by assessing the relative contributions of microlensing events with giant sources to those whose sources are main-sequence or subgiants (hereafter collectively referred to as "dwarfs"). We must start with events that meet three conditions: * they are actually detected by KMT (Section 7.1.2), * they objectively have the property that the lens transits the source (independent of whether there are any data taken during this interval; Section 7.1.3), and * \(\rho\) is measurable in the data (Section 7.1.1). These three conditions interact in somewhat subtle ways, so their investigation overlaps different sections and the divisions indicated above are only approximate. They are combined in Section 7.1.4. 
For the moment, we simply report the result that, with respect to BD FSPL events, dwarfs are favored over giants by a factor \(\sim 2.7\), while this factor is somewhat less for FFP FSPL events. #### 7.1.1 Is \(\rho\) Measurable? The first consideration is whether or not the data stream contains adequate data points to measure \(\rho\). For dwarfs, the chance that the data stream will contain points that are close enough to the peak to permit a \(\rho\) measurement is substantially smaller, simply because the duration of the peak is shorter by a factor \(t_{*,{\rm d}}/t_{*,{\rm g}}=\theta_{*,{\rm d}}/\theta_{*,{\rm g}}\sim 1/10\). For example, 11 of KMT's 24 fields have cadence \(\Gamma=0.4\,{\rm hr}^{-1}\) and 3 have cadence \(\Gamma=0.2\,{\rm hr}^{-1}\), but for a \(2t_{*}\sim 20\,{\rm hr}\) peak, these cadences can be quite adequate to measure \(\rho\). Indeed, five of the 30 giant-source FSPL events of Gould et al. (2022) came from these low-cadence fields, despite their dramatically lower overall event rate. See their Figure 2. More strikingly, 15 of the 30 came from the seven fields with \(\Gamma=1\,{\rm hr}^{-1}\). These fields would also have adequate coverage for dwarf-source events, provided that there were no gaps in the data due to weather or shorter observing windows in the wings of the season. To account for this, we estimate that one-third of these \(\Gamma=1\,{\rm hr}^{-1}\) events would be lost due to gaps. Dwarf-star sources would be most robustly detected in the three prime fields, which have cadences \(\Gamma=2\)-\(4\,{\rm hr}^{-1}\), but these fields accounted for only 10 of the 30 giant-source FSPL events. Thus, one can expect that about two-thirds as many dwarf-source events would have adequate coverage compared to giant sources (relative to the numbers of actually detected events that have the property that the lens transits the source, as discussed in the first paragraph). #### 7.1.2 Signal-to-noise Ratio The second consideration is that in most cases (with one important exception), the overall signal-to-noise ratio (S/N) is lower for a dwarf source compared to a giant source for the "same" event, i.e., same parameters \((t_{0},t_{\rm E},z_{0},\theta_{\rm E})\). To elucidate this issue (as well as the one aspect for which dwarf sources have a clear advantage, see below), we follow Mroz et al. (2020a) and analyze the signal in terms of the mean surface brightness \(S=f_{S}/\pi\theta_{*}^{2}\) of the source. For purposes of illustration, we ignore limb darkening and consider the signal from an observation when the lens and source are perfectly aligned as representative. Hence, \(A_{\rm max}=\sqrt{1+4/\rho^{2}}\), and so the excess flux of the magnified image is \(\Delta F_{\rm max}=(A_{\rm max}-1)f_{S}\), which can be approximated as \[\Delta F_{\rm max}=2\pi S\theta_{\rm E}^{2}\quad(\rho\gg 1),\qquad\Delta F_{\rm max}=2\pi S\theta_{\rm E}\theta_{*}\quad(\rho\ll 1). \tag{10}\] As giants are bigger than dwarfs, i.e., \(\rho_{\rm g}/\rho_{\rm d}>1\), there are three cases to consider: (1) \(1>\rho_{\rm g}>\rho_{\rm d}\); (2) \(\rho_{\rm g}>1>\rho_{\rm d}\); (3) \(\rho_{\rm g}>\rho_{\rm d}>1\).
These yield ratios, \[\Big{(}\frac{\Delta F_{\rm max,d}}{\Delta F_{\rm max,g}}\Big{)}_{1}=\frac{S_{ \rm d}}{S_{\rm g}}\frac{\theta_{*,{\rm d}}}{\theta_{*,{\rm g}}};\quad\Big{(} \frac{\Delta F_{\rm max,d}}{\Delta F_{\rm max,g}}\Big{)}_{2}=\frac{S_{\rm d}} {S_{\rm g}}\rho_{\rm d};\quad\Big{(}\frac{\Delta F_{\rm max,d}}{\Delta F_{\rm max,g}}\Big{)}_{3}=\frac{S_{\rm d}}{S_{\rm g}}. \tag{11}\] To a good approximation5, the surface-brightness ratio in Equation (11) is given by \[\frac{S_{\rm d}}{S_{\rm g}}=\exp\Bigl{[}\frac{hc}{k\lambda_{I}}\Big{(}\frac{1}{ T_{\rm g}}-\frac{1}{T_{\rm d}}\Big{)}\Bigr{]}\to 2.0, \tag{12}\] where \(\lambda_{I}=810\,{\rm nm}\) and where we have made the evaluation at representative temperatures \(T_{\rm d}=5800\,{\rm K}\) and \(T_{\rm g}=4700\,{\rm K}\) for dwarfs and giants, respectively. Footnote 5: This is essentially the same approximation that underlies linear color-color relations in this regime, i.e., that the Planck factor is well-approximated by the Boltzmann factor. Thus, for \(\rho_{\rm g}<1\), the signal is about 5 times larger for giants than dwarfs. That is, the source is 10 times larger but the surface brightness is 2 times smaller. This is the regime of essentially all of the FSPL events from Gould et al. (2022), except for the four FFPs. The signals only approach equality for \(\rho_{\rm d}\sim 0.5\) (i.e., \(\rho_{\rm g}\sim 5\)). To date, the only FSPL events near or below this regime are the FFP candidates, OGLE-2012-BLG-1323 (\(\rho_{\rm g}=5.0\), Mroz et al. 2019), OGLE-2016-BLG-1928 (\(\rho_{\rm g}=3.4\), Mroz et al. 2020b), and OGLE-2019-BLG-0551 (\(\rho_{\rm g}=4.5\), Mroz et al. 2020a)6. Footnote 6: Recently, Koshimoto et al. (2023) have announced an FSPL FFP, MOA-9yr-5919, with \(\theta_{\rm E}=0.90\pm 0.14\,\mu\)as. The source is a subgiant, \(\theta_{*}=1.26\pm 0.48\,\mu\)as, so \(\rho=1.4\). However, the underlying object can be considered to be in this regime because if the source had been a giant (\(\theta_{*}\sim 6\,\mu\)as), then \(\rho\sim 6.7\). Finally, giants have an additional advantage that the duration of the peak region is 10 times longer, so that (at fixed cadence) there are 10 times more data points, which is a \(\sqrt{10}\sim 3\) advantage in S/N. However, when we considered the signals from excess flux \(\Delta F\), we ignored the fact that the giant signal is more degraded by photon noise compared to the dwarf signal, simply because the baseline giant flux is greater. This is the "important exception" mentioned above. Nevertheless, as we now show, while the importance of this effect depends strongly on the extinction, for typical conditions it is modest. For typical KMT seeing (\({\rm FWHM}\sim 4\) pixels) and background (\(B\sim 800\) flux units per pixel), keeping in mind the KMT photometric zero point of \(I_{\rm zero}=28\), and in the Gaussian PSF approximation, the baseline source flux and background light contribute equally to the noise at \(I_{S}=I_{\rm zero}-2.5\log(4\pi B\times{\rm FWHM}^{2}/\ln(256))=16.8\). Given typical extinction levels, \(1\lesssim A_{I}\lesssim 3\), dwarf (including subgiant) stars are almost always fainter than this threshold. On the other hand, clump giants (\(I_{S,0}\sim 14.5\)) at typical extinction (\(A_{I}\sim 2\)) have about equal photon noise from the source and the background, implying a reduction of \(\sqrt{2}\) in S/N. Only very bright giants suffer from substantially greater noise, but these also have a far greater signal than the typical estimates given above. 
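As a quick numerical cross-check of Equations (11) and (12), the surface-brightness ratio and the case-(1) signal ratio can be evaluated directly. The short Python sketch below uses the representative temperatures, wavelength, and \(\theta_{*}\) ratio adopted above; it addresses only the signal, not the photon-noise considerations just discussed.

```python
import numpy as np

# Physical constants (SI)
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

lam_I = 810e-9                       # I-band wavelength [m]
T_dwarf, T_giant = 5800.0, 4700.0    # representative temperatures [K]

# Equation (12): Wien-regime surface-brightness ratio
S_ratio = np.exp(h * c / (k * lam_I) * (1.0 / T_giant - 1.0 / T_dwarf))
print(f"S_d/S_g ~ {S_ratio:.2f}")    # ~2.0

# Case (1) of Equation (11): both rho < 1, so the ratio is (S_d/S_g)*(theta_*,d/theta_*,g)
theta_ratio = 0.1                    # theta_*,d / theta_*,g ~ 1/10
ratio_case1 = S_ratio * theta_ratio
print(f"case (1): dF_d/dF_g ~ {ratio_case1:.2f}, i.e., giants ~{1.0/ratio_case1:.0f}x stronger")
```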
Hence, the higher noise from giants does not qualitatively alter the basic picture that we presented above. In brief, for fixed conditions, the signal from the "same" event is substantially greater for giant sources than dwarf sources, except for the case \(\theta_{\rm E}\lesssim 2\theta_{*,{\rm g}}\sim 12\,\mu\)as, for which a declining fraction of the giant is effectively magnified. This is the regime of FFPs. #### 7.1.3 Relative Number of Events We close by examining the interplay between cross section, surface density, and magnification bias as they affect the relative number of giant-source and dwarf-source FSPL events, i.e., events that are both in the KMT sample and have the objective property that the lens transits the source. Figure 5 shows cumulative distributions of \(u_{0}\) for four classes of events drawn from the 2019 KMT web page: two groups of upper main-sequence stars, \(19.5<I_{0}<20.5\) and \(18.5<I_{0}<19.5\), \(16.5<I_{0}<18.5\) ("subgiants"), and \(13<I_{0}<16.5\) ("giants"). The parameters are derived from the automated fits of the KMT webpage. Events with \(u_{0}\geq 1\) are excluded because the automated fitter just assigns these \(u_{0}=1\). In addition, events with no tabulated extinction are also excluded. The four groups contain, respectively, 382, 726, 1286, and 459 events for a total of 2853. Additionally, there are 13 events with \(I_{0}<13\) and 280 others with \(I_{0}>20.5\) that are excluded from this study in order to keep it simple. The first point to note is that the giant-star sample is perfectly consistent with being uniform, as is rigorously expected for the underlying population of microlensing events. That is, the maximum difference between the giant-star curve and the yellow line is \(D=0.0344\), yielding a Kolmogorov-Smirnov (KS) statistic \(D\sqrt{N}=0.74\). The other curves all display "magnification bias": they are uniformly distributed in \(u_{0}\) up to a point but then bend toward the right. The respective break points for the three classes (fainter to brighter) are roughly \(u_{0}\sim(0.05,0.10,0.20)\). This is important because BD FSPL events take place at relatively high magnification, so the relative paucity of detected dwarf-source events at low magnification plays very little role. This feature is illustrated by the magenta and blue points, which represent the effective \(u_{0}\) equivalents for BDs at the boundaries of the region of interest. The lower boundary (\(25\,\mu\)as) is the upper shore of the Einstein Desert, while the upper boundary (\(50\,\mu\)as) is an approximate upper limit for relatively secure BD candidates. To make these identifications, we first assign a representative \(\theta_{*}=(0.5,0.6,2.0,6.0)\,\mu\)as to the four populations and then equate peak magnifications, i.e., \(A=\sqrt{1+4/\rho^{2}}\) and \(A=(u_{0}^{2}+2)/[u_{0}\sqrt{u_{0}^{2}+4}]\). In other words, \(u_{0,\rm eff}^{2}=\sqrt{4+\rho^{2}}-2\). That is, we are assuming that the structures seen in Figure 5 are due to peak-magnification bias. From Figure 5, the BD range is entirely in the linear regime for each of the 3 non-giant populations, while for giants, the entire distribution is linear. This means that the contributions of these populations can be estimated based on the slopes in these regimes. In other words, the detection of BDs would be exactly the same as if these regimes remained linear up to \(u_{0}=1\). 
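A minimal sketch of the peak-magnification matching behind these effective \(u_{0}\) values is given below; the \(\theta_{*}\) assignments and the \(25\)-\(50\,\mu\)as BD window are the representative numbers adopted above, so the output is illustrative rather than a reproduction of the plotted points.

```python
import numpy as np

def u0_eff(rho):
    """Impact parameter of a point-lens event with the same peak magnification as
    an FSPL event of normalized source radius rho, i.e., the solution of
    (u^2 + 2)/[u*sqrt(u^2 + 4)] = sqrt(1 + 4/rho^2)."""
    return np.sqrt(np.sqrt(4.0 + rho**2) - 2.0)

theta_star = {"19.5<I0<20.5": 0.5, "18.5<I0<19.5": 0.6,
              "subgiants": 2.0, "giants": 6.0}           # adopted theta_* [uas]

for thetaE in (25.0, 50.0):                              # BD window boundaries [uas]
    for label, ts in theta_star.items():
        rho = ts / thetaE
        print(f"theta_E={thetaE:4.0f} uas  {label:13s} rho={rho:5.3f}  u0_eff={u0_eff(rho):.3f}")
```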
We find that from faint to bright, the linear regimes of the four populations are in ratios of 6.2:6.2:4.8:1 (i.e., the observed low-\(u_{0}\) slopes of the normalized distributions from Figure 5 multiplied by the total population of each group). Multiplying these relative source frequencies by the \(\theta_{*}\) values (i.e., cross sections) listed above yields ratios of relative rates of 0.5:0.6:1.6:1. Hence, the ratio of rates of dwarf to giant source events is \((0.5+0.6+1.6)=2.7\). Taking account of the cadence-induced factor of 2/3 for the effectiveness of dwarf searches, as estimated above, the dwarfs have an overall advantage of \((0.5+0.6+1.6)/1.5\to 2.7/1.5=1.8\) relative to giants. For FFPs the situation is somewhat more complicated. The green and red points represent the lower shore of the Einstein Desert (\(\theta_{\rm E}=10\,\mu\)as) and the smallest Einstein radius in the Gould et al. (2022) sample (\(\theta_{\rm E}=4\,\mu\)as), respectively. Within this range, the four distributions are essentially in the linear regime, and hence the same argument given for BDs still basically applies. However, dwarf sources are potentially sensitive to yet smaller Einstein radii, i.e., \(\theta_{\rm E}<4\,\mu\)as, which correspond to an FFP population that is not detectable with giants. These are located at positions to the right of the red points on each curve. Because the curves start to turn over in these regions, sensitivity is lost relative to the approximately linear regimes to the left of the red points. This is particularly so for the two main-sequence populations. Nevertheless, substantial sensitivity remains until approximately \(\theta_{\rm E}\sim 1\,\mu\)as (cyan points), beyond which the cumulative curves flatten, implying that the sensitivity declines catastrophically. In brief, within the \(\theta_{\rm E}\)-range of the FFPs probed by giant sources, the dwarf-to-giant-source ratio will be somewhat lower than for BDs because the curves in Figure 5 begin to deviate from linear. However, in contrast to the situation for the BDs, the dwarfs open up additional (and poorly characterized) parameter space for the FFPs. Hence, we expect that the BD and FFP relative dwarf-to-giant sensitivities are similar, while recognizing that the latter is more uncertain. It is of some interest to compare the slope ratios derived above for the \(u_{0}\ll 1\) regime (6.2:6.2:4.8:1) with what would be expected based on the relative number of sources as determined from the Holtzman et al. (1998) luminosity function (HLF), based on _Hubble Space Telescope (HST)_ images of Baade's Window. The HLF is effectively displayed only for \(M_{I}>-0.125\), corresponding to \(I_{0}>14.375\). Thus, to make the comparison, we first impose this restriction on the KMT giant bin, which reduces it from 459 to 404. For reasons that will become clear, we normalize to the subgiants, rather than the giants. Then, the observed ratios are (1.31:1.30:1:0.18). By contrast, for the HLF, we find ratios (1.94:1.45:1:0.10). If we ignore the giants for the moment, then the following narrative roughly accounts for the relationship of these two sets of ratios: The KMT EventFinder and AlertFinder algorithms search the ensemble of difference images for microlensing events at the locations of cataloged stars. The great majority of subgiants are in the catalog, so their locations are searched and thus the great majority of high-magnification events are found. 
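(For reference, the arithmetic behind the 2.7 and 1.8 factors quoted above can be reproduced in a few lines; the slopes and \(\theta_{*}\) values are those adopted earlier in this section.)

```python
# Low-u0 slopes (faint to bright, normalized to giants) and adopted cross sections [uas]
slopes     = {"19.5<I0<20.5": 6.2, "18.5<I0<19.5": 6.2, "subgiants": 4.8, "giants": 1.0}
theta_star = {"19.5<I0<20.5": 0.5, "18.5<I0<19.5": 0.6, "subgiants": 2.0, "giants": 6.0}

rate = {k: slopes[k] * theta_star[k] for k in slopes}
rate = {k: v / rate["giants"] for k, v in rate.items()}          # -> ~0.5 : 0.6 : 1.6 : 1
dwarf_to_giant = sum(v for k, v in rate.items() if k != "giants")
print(f"dwarf/giant rate ratio ~ {dwarf_to_giant:.1f}")          # ~2.7
print(f"after the 2/3 coverage penalty ~ {2.0 * dwarf_to_giant / 3.0:.1f}")   # ~1.8
```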
Most of the stars with \(18.5<I_{0}<19.5\) are also in the catalog; those that are not are often close to brighter stars, in which case their locations are searched anyway. However, some of these stars lie far from any cataloged star while also not being included in the underlying catalogs themselves. Hence, the expected ratio (1.45:1) and observed ratio (1.30:1) are similar, but there is a slight deficit for the latter due to events occurring at unsearched locations. Then the same argument predicts that this shortfall will be greater for the \(19.5<I_{0}<20.5\) bin because these are much less likely to enter the catalogs even if they are isolated. Unfortunately, this narrative does not account for the discrepancy between expected and observed for giants, which should enter the catalogs similarly to subgiants. We conjecture that the narrative is essentially correct and that the giant-subgiant comparison suffers from some effect that we have not identified. This could be investigated by running the EventFinder algorithm more densely, say at \(0.5^{\prime\prime}\) steps, rather than just at the positions of cataloged stars. This would be prohibitive for the full \(\sim 100\deg^{2}\) survey, but might be possible on a small subset. #### 7.1.4 Summary To summarize overall, BD and FFP lenses are about 2.7 times more likely to transit the source in main-sequence-star and subgiant (collectively "dwarf") cataloged events compared to cataloged giant-source events. However, they are roughly two-thirds as likely to have adequate data over the peak, and are somewhat more difficult to characterize due to lower signal. Thus, after applying a "characterization penalty" to the factor of 1.8, we find that they are likely to contribute at perhaps 1.5 times the rate of giants to the overall detections of substellar FSPL events. ### Qualitatively Different Information? The main potential qualitative advantage of dwarf sources over giant sources for FSPL events is in the regime of FFPs. As shown in Section 7.1, the S/N for dwarf sources is comparable or higher on an observation-for-observation comparison for the "same" event. Formally, in the extreme limit (case (3) from Equation (11)), this is only a factor of 2 higher signal, with a typical further improvement of a factor of \(\sqrt{2}\) from lower noise. Hence, this advantage is approximately canceled by the fact that giants have \(\sim 10\) times more observations for the same cadence. However, in the extreme regime \(\rho_{\rm g}\gtrsim 7\) (i.e., \(A_{\rm max}\lesssim 1.04\)), it may not even be possible to recognize, let alone robustly analyze, a giant-source event because of confusion with potential source variability. Indeed, as of today, there are no such FFPs that have yet been identified. Thus, the first potentially unique feature of dwarf sources is their ability to probe to smaller \(\theta_{\rm E}\). Indeed, in this context, it is notable that the source of the smallest-\(\theta_{\rm E}\) FSPL event to date, OGLE-2016-BLG-1928 (\(\theta_{\rm E}=0.84\pm 0.06\,\mu\)as), is a low-luminosity giant, \(I_{S,0}=15.8\) (\(\theta_{*}=2.85\pm 0.20\,\mu\)as), i.e., 1.4 mag below the clump. A second potential advantage, as discussed in Section 1, is that dwarf-source FSPL events can be subjected to AO follow-up observations much earlier than giant-source events. 
Such observations are critically important for FFPs in order to determine whether they are truly "free floating" or whether they are in wide orbits around hosts that remain invisible under the glare of the source as long as the source and FFP stay closely aligned. This issue is also relevant to BDs. Moreover, for BD candidates, one would like to confirm that they are actually BDs, i.e., that their small values of \(\theta_{\rm E}=\sqrt{\kappa M\pi_{\rm rel}}\) are actually due to small \(M\) rather than small \(\pi_{\rm rel}\). For the case of BD candidates, one might ask how one could distinguish between two competing interpretations of the detection of stellar light associated with the event, i.e., that it comes from the lens itself (that is, the lens is actually a star with small \(\pi_{\rm rel}\)) or from a host of the lens. This brings us to a third potential advantage of dwarf sources. The fact that the peak can have greater structure (caused by much smaller \(\rho\)) makes it easier to detect signatures of the host during the event. At the extreme end, such events may be dominated by this structure rather than finite source effects, as in the cases of MOA-bin-1 (Bennett et al., 2012) and MOA-bin-29 (Kondo et al., 2019). But even if host effects are not observed, in principle, stronger limits can be set on companions (hosts) as a function of mass ratio and separation. Nevertheless, we showed in Section 6 that, for the case of KMT-2022-BLG-2397, it would probably not be possible to distinguish between the two hypotheses (lens or companion to the lens) if there were a future detection of stellar light at the position of the event. ### Context of Competing Approaches From the above summary, a systematic search for FSPL events with dwarf-star sources could plausibly contribute about 1.5 times as many measurements as the KMT giant-source search (Gould et al., 2022), which found 4 FFP candidates and 6 excellent BD candidates (defined as \(\theta_{\rm E}<50\,\mu\)as) in a 4-year search. Hence, plausibly, the full sample could be increased by a factor \(\sim 6\) by 2026. To the best of our knowledge, there will be no competing approaches that yield either more or qualitatively better information on these classes of objects on this timescale. Moreover, the part of this parameter space that is of greatest current interest, i.e., FFPs, is also the part that has the greatest unique potential for dwarf-star sources. Therefore, on these grounds alone, it appears worthwhile to conduct such searches. However, on somewhat longer timescales, there are several competing approaches that are either proposed or under development. We review these as they apply to dark isolated objects, with a focus on FFPs and BDs. #### 7.3.1 Prospects for Isolated-Object Mass-Distance Measurements The first point is that when the masses and distances of dark isolated objects can be "routinely" measured, the utility of partial information (e.g., \(\theta_{\rm E}\)-only measurements) will drastically decline. In this context, it is important to note that the technical basis for routine BH mass-distance measurements already exists. This may seem obvious from the fact that there has already been one such measurement (OGLE-2011-BLG-0462, Sahu et al., 2022; Lam et al., 2022; Mroz et al., 2022), which had a very respectable error of just 10%. 
However, the characteristics of this BH were extraordinarily favorable, so that the rate of comparable-quality BH mass measurements via the same technical path (annual parallax plus astrometric microlensing) is likely to be very low. First, OGLE-2011-BLG-0462 is unusually nearby7 (\(D_{L}=1.6\,\)kpc, \(\pi_{\rm rel}=0.50\,\)mas), which led to an unusually large Einstein radius (\(\theta_{\rm E}=\sqrt{\kappa M\pi_{\rm rel}}=5.7\pm 0.4\,\)mas) and (for a BH) unusually large microlens parallax (\(\pi_{\rm E}=\sqrt{\pi_{\rm rel}/\kappa M}=0.088\pm 0.008\)), i.e., both \(\propto\sqrt{\pi_{\rm rel}}\). Second, the errors in the parallax measurement, which to leading order do not depend on the measured values, were unusually small for two reasons. First, while BH events are drawn from \(\sim 100\,\)deg\({}^{2}\) of microlensing surveys, OGLE-2011-BLG-0462 happened to lie in the \(\sim 4\,\)deg\({}^{2}\) of the OGLE survey that was monitored at a high rate, \(\Gamma=3\,\)hr\({}^{-1}\). The next highest cadence (\(1\,\)hr\({}^{-1}\)) would have led to errors that would have been 1.7 times higher. Second, being nearby (so large \(\theta_{\rm E}\)), but having a typical relative proper motion (\(\mu_{\rm rel}=4.3\,\)mas yr\({}^{-1}\)), meant that the event was longer than one at a typical distance for a disk BH (\(\pi_{\rm rel}\sim 60\,\mu\)as), by a factor \(\eta\sim 2.9\). Such a shorter event would have had a larger error by \(\eta^{2}\sim 8.3\) for \(\pi_{{\rm E},E}\) (and larger for \(\pi_{{\rm E},N}\)), while (as just mentioned), \(\pi_{\rm E}\) itself would be a factor \(\eta\) smaller. Hence, this distance effect, by itself, would increase the fractional error in \(\pi_{\rm E}\) by \(\gtrsim\eta^{3}\). That is, for the example given, the fractional error would be increased by a factor of 24 from 9% to 215%. Footnote 7: Note that in their abstract, Mroz et al. (2022) propagate the incorrect distance estimate of Sahu et al. (2022), but they correct this in their penultimate paragraph. The relative rarity of BH events with robustly measurable \(\pi_{\rm E}\) (in current experiments) interacts with the challenges of astrometric microlensing. In the case of OGLE-2011-BLG-0462, this required 8 years of monitoring by _HST_. If the fraction of BH events with measurable \(\pi_{\rm E}\) is small, then application of this laborious _HST_-based technique cannot yield "routine" measurements. This problem has already been partially solved by the development of GRAVITY-Wide VLTI interferometry, and will be further ameliorated when GRAVITY-Plus comes online. GRAVITY itself can make very precise (\(\sigma\sim 10\,\mu\)as) measurements (Dong et al., 2019) for Einstein radii as small as \(\theta_{\rm E}\gtrsim 0.5\) mas. The current and in-progress upgrades to GRAVITY do not improve this precision (which is already far better than required for this application), but they permit the observation of much fainter targets. In addition, results can be obtained from a single observation (or two observations). Moreover, interferometry has a little-recognized but fundamental advantage over astrometric microlensing: by separately resolving the images, it precisely measures \(\mathbf{\mu}_{\rm rel}\) including its direction. In the great majority of cases (although not OGLE-2011-BLG-0462), the light-curve-based \(\mathbf{\pi}_{\rm E}\) measurements yield an effectively 1-D parallax (Gould et al., 1994), with errors that are of order 5-15 times larger in one direction than the other. 
By measuring the direction of lens-source motion, interferometry effectively reduces the error in the amplitude of the parallax, \(\pi_{\rm E}\), from that of the larger component to that of the smaller component (Ghosh et al., 2004; Zang et al., 2020). These advances in interferometry not only greatly increase the number of potential targets but also substantially ameliorate the difficulty of obtaining precise parallax measurements. Nevertheless, to obtain truly "routine" isolated-BH mass-distance measurements using this approach would require a dedicated parallax satellite in solar orbit, which could complement "routine" VLTI GRAVITY high-precision \(\theta_{\rm E}\) measurements, with "routine" satellite-parallax high-precision \(\pi_{\rm E}\) measurements. While there are draft proposals for such a satellite, there are no mature plans. Another path of "routine" BH mass measurements may open up with the launch of the _Roman_ space telescope. Gould & Yee (2014) argued that space-based microlensing observations alone could, in principle, return mass measurements for a substantial fraction of lenses by a combination of astrometric and photometric microlensing. Regarding dark objects, they explicitly excluded BDs and FFPs as unmeasurable. Hence, here we focus only on BHs. In this context, we note that Lam et al. (2020) predicted that _Roman_ "will yield individual masses of O(100-1000) BHs". We briefly show that the logic of both of these papers regarding _Roman_ BH mass measurements is incorrect, and that, in particular, _Roman_ will be mostly insensitive to bulge BHs. Nevertheless, _Roman_ could return masses for some disk BHs, although, as we will show below, this issue should be more thoroughly investigated by explicit calculations. Gould & Yee (2014) argued that because mass determinations require both \(\mathbf{\theta}_{\rm E}\equiv\mathbf{\mu}_{\rm rel}t_{\rm E}\) and \(\pi_{\rm E,\parallel}\) measurements, and because the former are generally substantially more difficult (assuming they are obtained via astrometric microlensing), one should just focus on the \(\mathbf{\theta}_{\rm E}\) measurement when assessing whether masses can be measured. However, the relative difficulty is, in fact, mass dependent, and while their assessment is valid for typical events with \(M\sim 0.5\,M_{\odot}\), it does not apply to BHs, with \(M\sim 10\,M_{\odot}\). In particular, while the ratio of errors \(\sigma(\theta_{\rm E})/\sigma(\pi_{\rm E,\parallel})\) essentially depends only on the observational conditions, the ratio of values scales directly with mass \(\theta_{\rm E}/\pi_{\rm E}=\kappa M\). Hence, the entire logic of the Gould & Yee (2014) approach does not apply to BHs. Regarding the Lam et al. (2020) estimate, it is rooted in very generous assumptions, as codified in their Table 4, and it explicitly ignores the large gaps in the _Roman_ data stream. In particular, they assume that all BHs with timescales \(90<t_{\rm E}/{\rm day}<300\), impact parameters \(u_{0}<1.7\) and source fluxes \(H_{\rm AB}<26\) (\(H_{\rm Vega}<24.6\)) will yield mass measurements. While a thorough investigation of _Roman_ sensitivity of BHs is beyond the scope of the present work, we have carried out a few calculations, both to check the general feasibility of this approach and (hopefully) to motivate a more systematic investigation. 
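Before turning to those calculations, the mass scaling invoked above can be made concrete with a short numerical illustration; the \(\pi_{\rm rel}\) value below is a representative disk value chosen only for illustration.

```python
import numpy as np

kappa  = 8.14    # mas per M_sun
pi_rel = 0.05    # mas; illustrative disk-lens value, not a measurement

for M in (0.5, 10.0):   # typical stellar lens vs. BH [M_sun]
    thetaE = np.sqrt(kappa * M * pi_rel)        # mas
    piE    = np.sqrt(pi_rel / (kappa * M))
    print(f"M={M:5.1f} M_sun: theta_E={thetaE:.2f} mas, pi_E={piE:.3f}, "
          f"theta_E/pi_E = kappa*M = {kappa * M:.1f} mas")
```

For the \(10\,M_{\odot}\) case, \(\theta_{\rm E}\) is large (and hence comparatively easy to measure) while \(\pi_{\rm E}\) is small, which is the sense in which the usual ordering of difficulties is reversed for BHs.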
We modeled _Roman_ observations as taking place in two 72-day intervals, each centered on one of the equinoxes of a given year, and in three separate years that are successively offset by two years. We modeled the photometric and astrometric errors as scaling as \(A^{-1/2}\) because the faint sources for which this approximation does not apply are well below the threshold of reliable parallax measurements. We adopted timescales of \(t_{\rm E}=60\,{\rm day}\), \(t_{\rm E}=120\,{\rm day}\), and \(t_{\rm E}=180\,{\rm day}\) as representative of bulge BHs, typical-disk BHs, and nearby-disk BHs, respectively, and we considered events peaking at various times relative to the equinox and at various impact parameters. Given our assumptions, the errors in both \(\mathbf{\theta}_{\rm E}\) and \(\pi_{\rm E,\parallel}\) scale inversely as the square-root of source flux. For purposes of discussion, we reference these results to sources with 1% errors, i.e., \(H_{\rm Vega}=20.4\), or roughly \(M_{H}\sim 5.3\), i.e., M0 dwarfs. For \(t_{\rm E}=60\,{\rm day}\), we find that \(\sigma(\pi_{\rm E,\perp})/\sigma(\pi_{\rm E,\parallel})\sim 10\)-20. Hence, essentially all the parallax information is in \(\pi_{\rm E,\parallel}\). Because the orientations are random, this means that one should aim for \(\pi_{\rm E}/\sigma(\pi_{\rm E,\parallel})\sim 10\) to obtain a useful mass measurement. For bulge BHs, i.e., \(\pi_{\rm rel}\sim 16\,\mu{\rm as}\) and \(M\sim 10\,M_{\odot}\), we expect \(\pi_{\rm E}\sim 0.014\), implying a need for \(\sigma(\pi_{\rm E,\parallel})\lesssim 0.0014\). We find that this can be achieved for our fiducial sources only for \(u_{0}\lesssim 0.4\) and only for offsets from the equinox of about 0 to 36 days in the direction of summer, i.e., after the vernal equinox or before the autumnal equinox. For typical-disk BHs, i.e., \(\pi_{\rm rel}\sim 50\,\mu{\rm as}\) and \(t_{\rm E}\sim 120\,{\rm day}\), \(\pi_{\rm E}\) is larger by a factor 1.75, implying that parallax errors that are 1.75 times larger are acceptable. Moreover, the longer timescales imply that the parallax measurements will be more precise. We find that even choosing sources that are 1.2 mag fainter (so 1.75 times larger photometric errors), the range of allowed \(t_{0}\) more than doubles to the entire interval from vernal to autumnal equinox, while the range of acceptable impact parameters increases to \(u_{0}\lesssim 0.6\). The combined effect of these changes roughly increases the fraction of events with measurable masses by a factor \(\sim 6\). We find qualitatively similar further improvements for nearby-disk lenses with \(t_{\rm E}=180\,\)day. Because of the restriction to relatively bright sources, the relatively small area covered by _Roman_, and the limited range of allowed \(t_{0}\) and \(u_{0}\), mass measurements of bulge BHs will be rare. The situation is substantially more favorable for disk BHs, but the restrictions remain relatively severe. We find that whenever \(\pi_{\rm E,\parallel}\) is adequately measured, the nominal SNR for \(\theta_{\rm E}\) is much higher. However, we caution the reader to review the extensive discussion by Gould & Yee (2014) of the "known known", "known unknown", and "unknown unknown" systematic errors. The prospects for mass-distance measurements of isolated substellar objects are significantly dimmer than for isolated BHs. Regarding FFPs, there are proposals for new missions that would yield such measurements, but none has been approved so far. 
Regarding isolated BDs, there are not even any proposals. The only realistic way to measure \(\theta_{\rm E}\) for substellar objects is from finite-source effects. That is, the relevant values, \(\theta_{\rm E}\lesssim 50\,\mu\)as are at least an order of magnitude smaller than is feasible with VLTI GRAVITY and even less accessible to astrometric microlensing using current, or currently conceived, instruments. The event timescales are too short by 1-2 orders of magnitude to yield \(\pi_{\rm E}\) from annual parallax. Hence, they must be observed from two locations, i.e., two locations on Earth (terrestrial parallax), or from one or several observatories in space. Gould & Yee (2013) have already shown that the first approach can yield at most a few isolated-BD mass measurements per century. Hence, the requirement for a mass-distance measurement is that the two observers should be separated by some projected distance \(D_{\perp}\) that is substantially greater than an Earth radius and that they should simultaneously observe an event that is FSPL as seen from at least one of them. As a practical matter, this means that both observers would have to be conducting continuous surveys of the same field. The alternative would be to alert the second observatory prior to peak, based on observations from the first. Because the events have timescales \(t_{\rm E}\lesssim 2\,\)days, and given constraints on spacecraft operations, there are no prospects (also no plans) for such a rapid response at optical/infrared wavelengths at the present time8. We now assess the constraints on \(D_{\perp}\) to make such a measurement for substellar objects, from the standpoint of mission design. That is, we are not attempting to make detailed sensitivity estimates, but rather to determine how the regions of strong sensitivity depend on \(D_{\perp}\). There are three criteria for good sensitivity, which we express in terms of the projected Einstein radius, \(\tilde{r}_{\rm E}\equiv{\rm au}/\pi_{\rm E}\), 1. \(D_{\perp}\lesssim\max(1,\rho)\tilde{r}_{\rm E}\). 2. \(D_{\perp}\gtrsim 0.05\max(1,\rho)\tilde{r}_{\rm E}\). 3. \(\rho\lesssim 3\) (for dwarfs) or \(\rho\lesssim 5\) (for giants). The first criterion is that the second observer will see an event, i.e., that the lens will pass within the maximum of \(\sim\theta_{\rm E}\) and \(\sim\theta_{*}\) of the source on the source plane, which translates to condition (1) on the observer plane. While there are special geometries for which this condition is violated and both observatories will still see an event, our objective here is to define generic criteria, not to cover all cases. The second criterion ensures that the event looks sufficiently different from the two observatories that a reliable parallax measurement (in practice, \(\gtrsim 3\,\sigma\)) can be made. Note that Gould & Yee (2013) set this limit at 2% (rather than 5%) of the source radius for terrestrial parallax. However, they were considering the case of very high cadence followup observations of very highly magnified sources, albeit on amateur-class telescopes, whereas for the survey case that we are considering, there are likely to be only a handful of observations over peak. The third criterion ensures that the event is sufficiently magnified over peak for a reliable measurement. Figure 6 compares these constraints to the expected locations of the two targeted populations, i.e., FFPs (magenta) and BDs (green), for four values of \(D_{\perp}=(0.003,0.01,0.03,0.1)\,{\rm au}\). 
The axes (\(\theta_{\rm E}\) versus \(\pi_{\rm rel}\)) are chosen to highlight what is known about these two populations, with the central fact being that the FFPs lie below the Einstein Desert (\(\theta_{\rm E}\lesssim 10\,\mu{\rm as}\)), whereas the BDs lie above it (\(\theta_{\rm E}\gtrsim 25\,\mu{\rm as}\)). As discussed by Gould et al. (2022), there are strong reasons for believing that the BDs are in the bulge, which we have represented by a cutoff at \(\pi_{\rm rel}=0.03\,{\rm mas}\). We have also imposed a somewhat arbitrary mass limit on the FFPs of \(M<5\,M_{\rm Jup}\) in order to illustrate that only if these objects are fairly massive can they actually be in the bulge. To illustrate the role of dwarf and giant sources, we choose \(\theta_{*}=0.6\,\mu\)as and \(\theta_{*}=6\,\mu\)as, respectively. Before discussing the implications of Figure 6, we note that the basic form of the allowed region is a band defined by a constant range of \(\pi_{\rm E}\), i.e., \(0.05<D_{\perp}\pi_{\rm E}/{\rm au}<1\), with a somewhat complex threshold at \(\pi_{\rm rel}\gtrsim\theta_{*}\). All current ideas for making these measurements are close to the top-right panel, i.e., the Earth-L2 distance. These include placing a satellite at L2 to continuously observe one KMT field (Gould et al., 2021; Ge et al., 2022), making observations from Earth of the _Roman_ fields, observing the same fields simultaneously from _Roman_ and _Euclid_, both in L2 halo orbits, or observing the same field from _Roman_ in L2 and _CSST_ in low-Earth orbit. The third would be intermediate between the top-left and top-right panels. See Gould et al. (2021). From Figure 6, these proposed experiments would be well-matched to measuring FFP masses for the known population, including members of both the disk and the bulge. However, these experiments will not measure masses for the known BD population. This would require \(D_{\perp}\gtrsim 0.3\,{\rm au}\), i.e., more in the range of what is needed for BH mass measurements. In summary, while some experiments proposed for the coming decade could lead to FFP mass measurements, there are no current prospects for BD mass measurements. Hence, \(\theta_{\rm E}\)-only surveys will remain the only method for detailed probing of this population for at least several decades. #### 7.3.2 Prospects for FSPL Measurements Another possibility is that competing approaches might obtain a much larger number of FSPL measurements on one decade timescales compared to what can be achieved with current experiments. This would diminish, although it would not negate, the urgency of making such measurements based on current experiments. The main competing approach would come from the _Roman_ telescope, which is currently scheduled for launch in 2027 and would conduct a total of \(\sim 1.2\) years of observations of \(\sim 2\,{\rm deg}^{2}\) at a cadence of \(\Gamma\sim 4\,{\rm hr}^{-1}\), using a broad \(H\)-band filter on a 2.4m telescope at L2. Johnson et al. (2020) have comprehensively studied FFP detections by _Roman_, including detailed attention to finite-source effects, which yield FSPL events. They do not extend their mass range up to BDs, but it is not difficult to extrapolate from their maximum of \(10^{3}\,M_{\oplus}\) to the BD range. However, our interest here is not so much the _absolute_ number of such detections under various assumptions, but the _relative_ number compared to current experiments. 
In this section, we show that while _Roman_ will greatly increase the number of FFP and isolated BD PSPL (i.e., \(t_{\rm E}\)-only) events relative to what can be achieved from the ground, it will not be competitive in identifying FSPL events (i.e., with \(\theta_{\rm E}\) measurements), except in the regime \(0.1\lesssim\theta_{\rm E}/\mu{\rm as}\lesssim 1\). We anticipate that the combination of the large space-based PSPL sample with the smaller ground-based FSPL sample will be more informative than either sample separately. However, we do not explore that aspect here because our primary concern is to investigate the uniqueness of the ground-based sample. We begin by developing a new metric by which to compare the sensitivity of the KMT and _Roman_ samples: S/N as a function of rank-ordered source luminosity. We focus on the KMT prime fields (\(\sim 13\deg^{2}\)), which have cadences similar to that of the _Roman_ fields, i.e., \(\Gamma=4\,{\rm hr}^{-1}\). We will show that about 11 times more microlensing events take place in these fields during 10 years of KMT observations than take place in the _Roman_ fields during its observations. Here, we are not yet considering which of these events are actually detected by either project. Moreover, we are not yet restricting consideration to FSPL events. In this context, if we wish to compare performance on an event-by-event basis, the events should first be rank-ordered by source luminosity (which is the most important factor in S/N). So, for example, we will show that KMT sources with \(M_{I}=3.2\) should be compared to _Roman_ sources with \(M_{I}=6\), because there are the same number of microlensing events down to these two thresholds for the respective surveys. Because the lens-source kinematics of the two experiments are essentially the same, the ratio of the number of events is given by the ratio of the products \(\Omega\times\Delta t\times N_{S}\times N_{L}\), where 1. \(\Omega\). Area of survey: \((13\deg^{2}/2\deg^{2}=6.5)\) 2. \(\Delta t\). Duration of survey: \((10\times 4\,{\rm months}/(6\times 72\,{\rm days})=2.8)\) 3. \(N_{S}\). Surface density of sources: 1/1.29. 4. \(N_{L}\). Surface density of lenses: 1/1.29. That is, an overall ratio KMT:Roman of \(6.5\times 2.8/1.29^{2}=11\). Here, we have adopted field sizes of \(13\deg^{2}\) for the KMT prime fields versus \(2\deg^{2}\) for _Roman_ and durations of 4 months per year for 10 years for KMT versus six 72-day campaigns for _Roman_. In particular, we note that the KMT survey is nominally carried out for 8 months per year, but in the wings of the season, there are huge gaps due to the restricted times that the bulge can be observed. Moreover, KMT is affected by weather and other conditions (such as the Moon) that restrict the period of useful observations. Thus, we consider that the effective observations are 4 months per year. The mean latitude of the KMT prime fields is about \(\langle|b|\rangle_{\rm KMT}\sim 2.35^{\circ}\), compared to \(\langle|b|\rangle_{Roman}\sim 1.7^{\circ}\) for _Roman_. According to Nataf et al. (2013), this gives a factor of 1.29 advantage to _Roman_ in bulge source density. For lenses that are in the bulge (as BDs are expected to mainly be), the advantage is identical. For disk lenses, it is roughly similar. We use the HLF to calculate the cumulative distribution of sources. This distribution is given only for \(M_{I}\leq 9\). However, we extend it to \(M_{I}=12\) (i.e., to masses \(M\sim 0.1\,M_{\odot}\)) using the Chabrier (2005) mass function. 
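The bookkeeping behind this factor of 11 can be collected in one place; the sketch below simply multiplies the factors listed above, with the 4 effective months per year taken as \(\sim\)122 days.

```python
area_ratio     = 13.0 / 2.0                   # KMT prime fields vs. Roman [deg^2]
duration_ratio = (10 * 4 * 30.4) / (6 * 72)   # ~4 effective months/yr x 10 yr vs. 6 x 72 d
density_ratio  = (1.0 / 1.29) ** 2            # source and lens surface-density factors
print(f"KMT : Roman event numbers ~ {area_ratio * duration_ratio * density_ratio:.0f}")   # ~11
```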
We transform from mass to \(I\)-band luminosity using the \(V\) and \(K\) mass-luminosity relations of Benedict et al. (2016) and the color-color relations of Bessell & Brett (1988). We refer to this combined luminosity function as the CHLF. We then adopt this as the unnormalized _Roman_ distribution and multiply it by 11 to construct the KMT distribution before matching the two distributions. Figure 7 shows the result. As anticipated above, \(M_{I}=6.00\)_Roman_ sources are matched to \(M_{I}=3.22\) KMT sources. Note that the viability of this approach depends on the fact that there are few useful sources outside the diagram. For _Roman_ this is not an issue because the diagram goes almost to the bottom of the main sequence. If the matched KMT luminosity had been, say, \(M_{I,{\rm KMT}}=3.5\), at this point, then the diagram would be excluding many useful KMT sources. However, in fact, at the actual value (\(M_{I}=6.0\)) and for most applications, few such useful sources are being ignored. Nevertheless, these dim KMT sources can be important in some cases, so this must always be checked when applying this method of matched cumulative distributions. We now evaluate the ratio between the _Roman_ S/N and KMT S/N at each pair of matched values under the assumption that the source star is unblended and at various magnifications \(A\). In each case, we assume that what is being measured is some small change in magnification \(\Delta A\), so that the S/N ratios are \[({\rm S/N})_{X}=\frac{f_{X}\Delta A}{\sqrt{f_{X}A+B_{X}}}, \tag{13}\] where the \(f_{X}\) (\(X=H\) or \(X=I\)) are the respective source fluxes of the matched sources for _Roman_ and KMT, and the \(B_{X}\) are the respective backgrounds. Thus, their ratio, \[\frac{({\rm S/N})_{H}}{({\rm S/N})_{I}}=\frac{f_{H}/f_{I}}{\sqrt{(f_{H}A+B_{H })/(f_{I}A+B_{I})}}, \tag{14}\] is independent of \(\Delta A\). To make these evaluations, we adopt the following assumptions. Regarding KMT, we assume a zero point of 1 photon per (60 second) exposure at \(I_{\rm zero}=28.0\) and with a background \(I_{\rm back}=16.8\) as described above. Regarding _Roman_, we assume a zero point of 1 photon per (52 second) exposure at \(H_{\rm zero}=30.4\) on the Vega system and a background of \(H_{\rm back}=21.7\) per exposure. See Gould (2014). These backgrounds imply \(B_{I}=3.02\times 10^{4}\) and \(B_{H}=3.02\times 10^{3}\). Next, we assume that the sources lie at \(D_{S}=8\,{\rm kpc}\) and we adopt an \(I\)-band extinction \(A_{I}=2\), corresponding to \(A_{H}=0.23\,A_{I}=0.46\). Finally, to convert from the \(M_{I}\) (of the CHLF) to the required \(M_{H}\) (needed to calculate the _Roman_ source flux), we proceed as follows. For M dwarf sources (\(M_{I}\geq 6.8\)), we use the empirically calibrated mass-luminosity relations of Benedict et al. (2016) in \(V\) and \(K\), i.e., their Equation (10) and Table 12. We convert from \((V-K)\) to \((I-H)\) using the \(VIHK\) relation of Bessell & Brett (1988) and then evaluate \(M_{H}=M_{I}-(I-H)\). For the remainder of the CHLF, we use the following approximations: \((I-H)=1.29-0.035\,M_{I}\), (\(0\leq M_{I}<2\)), \((I-H)=1.22-0.245\,(M_{I}-2)\), (\(2\leq M_{I}<4\)), \((I-H)=0.73\), (\(4\leq M_{I}<4.5\)), and \((I-H)=0.73+0.402\,(M_{I}-4.5)\), (\(4.5\leq M_{I}<6.8\)). The results are shown in Figure 8 for three cases: \(A=(1,10,100)\). The lower panel shows the S/N ratios as a function of \(M_{I,Roman}\) so they can be referenced to Figure 7. 
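A minimal sketch of how Equation (14) can be evaluated for one matched pair is given below. It uses the zero points, backgrounds, extinction, and distance modulus quoted above, but applies the piecewise \((I-H)\) approximation over the whole range rather than the Benedict et al. (2016) calibration, so its output is illustrative and is not a reproduction of Figure 8.

```python
import numpy as np

# Zero points (mag of 1 photon per exposure), per-exposure backgrounds (flux units),
# distance modulus to D_S = 8 kpc, and extinctions, all as quoted above
I_zero, H_zero = 28.0, 30.4
B_I, B_H = 3.02e4, 3.02e3
DM, A_I, A_H = 14.5, 2.0, 0.46

def IH_color(MI):
    """Piecewise (I-H) approximation quoted above (nominally for 0 <= M_I < 6.8)."""
    if MI < 2.0:
        return 1.29 - 0.035 * MI
    if MI < 4.0:
        return 1.22 - 0.245 * (MI - 2.0)
    if MI < 4.5:
        return 0.73
    return 0.73 + 0.402 * (MI - 4.5)

def sn_ratio(MI_roman, MI_kmt, A):
    """(S/N)_H / (S/N)_I of Equation (14) for a matched, unblended source pair."""
    f_I = 10.0 ** (0.4 * (I_zero - (MI_kmt + DM + A_I)))
    M_H = MI_roman - IH_color(MI_roman)                  # M_H = M_I - (I-H)
    f_H = 10.0 ** (0.4 * (H_zero - (M_H + DM + A_H)))
    return (f_H / f_I) / np.sqrt((f_H * A + B_H) / (f_I * A + B_I))

# Example: the matched pair quoted above, M_I = 6.00 (Roman) vs. 3.22 (KMT)
for A in (1, 10, 100):
    print(f"A = {A:3d}: (S/N)_H/(S/N)_I ~ {sn_ratio(6.00, 3.22, A):.1f}")
```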
However, the implications are best understood from the upper panel, which shows these ratios as a function of the cumulative distribution. The filled circles along the (\(A=1\)) curve allow one to relate the two panels. These indicate (from right to left) \(M_{I,Roman}=(12,11,10,\ldots)\). Figure 8 also has important implications for the _Roman_ bound-planet "discovery space". However, the main focus of the present work is on isolated substellar objects. Because these objects are themselves dark, and because they do not have a host, the assumption of "no blending" will usually be satisfied. Figure 8 shows that over the entire CHLF and at all magnifications, _Roman_ has higher S/N than KMT, implying that at each matched luminosity, _Roman_ will detect at least as many PSPL isolated substellar events as KMT. Hence, it will also detect at least as many from the CHLF as a whole. Because _Roman_ S/N superiority is substantial, especially for \(A=1\), over a substantial fraction of the CHLF, it may appear that it would detect many times more PSPL events. However, this proves to be the case only for low-mass FFPs, whereas the factor is more modest for isolated BDs. The fundamental reason is that reliable detections can only be made up to some limit, e.g., \(u_{0}<1\), regardless of S/N. For illustration, we assume that a detection is possible for KMT provided that the peak difference flux satisfies \(\Gamma u_{0}t_{\rm E}({\rm S/N})^{2}_{\rm peak}>\chi^{2}_{\rm min}=2000\). Because KMT is background-limited, this can be written \(f_{I}>\sqrt{\chi^{2}_{\rm min}B_{I}/\Gamma u_{0}t_{\rm E}}/(A_{\rm max}-1) \to 2300(t_{\rm E}/{\rm day})^{-1/2}\). For our assumed \(A_{I}=2\), this corresponds to \(M_{I}<3.0+1.25\log(t_{\rm E}/{\rm day})\). Hence, adopting \(\theta_{\rm E}=40\,\mu{\rm as}\) for a typical BD and assuming \(\mu_{\rm rel}=6\,{\rm mas\,yr}^{-1}\) (so, \(t_{\rm E}=2.4\,{\rm day}\)), this implies \(M_{I,{\rm KMT}}<3.5\), which matches to \(M_{I,Roman}=6.8\) according to Figure 7. Carrying out a similar calculation for the _Roman_ threshold, and noting that in the relevant range it is also background dominated, this yields \(f_{H}>\sqrt{\chi_{\rm min}^{2}B_{H}/\Gamma u_{0}t_{\rm E}}/(A_{\rm max}-1)\to 735(t_{\rm E}/{\rm day})^{-1/2}\), i.e., \(M_{H}<8.3+1.25\log(t_{\rm E}/{\rm day})\). Thus, for BDs, \(M_{H}<8.8\), i.e., \(M_{I}<11.1\). From Figure 7, _Roman_'s complete sensitivity to these BD PSPL events covers 4.7 times more of the cumulative fraction. Allowing for the gradual decline of \(u_{0,{\rm max}}\) for KMT for dimmer sources, we can roughly estimate that _Roman_ will detect 4 times more BD PSPL events, which is a relatively modest improvement. By contrast, for FFPs with \(\theta_{\rm E}=1\,\mu\)as, i.e., \(t_{\rm E}=0.06\,\)day, the corresponding limit for KMT would be \(M_{I}<1.5\), for which \(\theta_{*}\sim 3\,\mu\)as, i.e., \(\rho\sim 3\). Hence, these would not be PSPL events, but rather FSPL. We will discuss these further below. On the other hand, for _Roman_, \(M_{H}<6.8\), i.e., \(M_{I}<8.7\), which covers about 1/4 of the cumulative fraction. Hence, _Roman_ will be vastly more sensitive to PSPL FFPs at \(\theta_{\rm E}=1\,\mu\)as. The method of matching cumulative source distributions cannot be used to compare _Roman_ and KMT FSPL substellar events. 
First, the matched sources have different \(\theta_{*}\), which is a fundamental parameter for FSPL events. Second, FSPL events are among the relatively rare class of applications for which "unmatched" KMT sources, i.e., \(M_{I}>6.0\), play a crucial role. Instead, we compare the returns of the two experiments by first setting a threshold \(\Delta\chi^{2}=2000\) for each, which we approximate as \(\Delta\chi^{2}=N_{\rm peak}[(A_{\rm max}-1)f_{X}]^{2}/(f_{X}+B_{X})\), where \(A_{\rm max}=\sqrt{1+4/\rho^{2}}\) and \(N_{\rm peak}=2\Gamma\theta_{*}/\mu_{\rm rel}\to 11.7\,(\theta_{*}/\mu{\rm as})\), and where we have made the evaluation by adopting \(\mu_{\rm rel}=6\,{\rm mas\,yr^{-1}}\). We further require \(N_{\rm peak}\geq 3\) to ensure that the finite-source effects are adequately characterized. Finally, we demand \(A_{\rm max}>1.06\), i.e., \(\rho<5.7\), because of the difficulty of distinguishing lower-amplitude events from giant-source variability. While these prescriptions are simplified, they are adequate to characterize the relative sensitivity of the two experiments. For the source radii, we adopt \(\theta_{*}=6\times 10^{-0.2\,M_{I}}\mu\)as, (\(0\leq M_{I}<2\)), \(\theta_{*}=10^{0.378-0.301(M_{I}-2)}\mu\)as, (\(2\leq M_{I}<4\)), \(\theta_{*}=(R_{\odot}/D_{S})\times 10^{-0.2\,(M_{I}-4.07)}\), (\(4\leq M_{I}<4.5\)), \(\theta_{*}=10^{-0.320-0.041(M_{I}-4.5)}\mu\)as, (\(4.5\leq M_{I}<6.8\)), and \(\theta_{*}=(M/M_{\odot})(R_{\odot}/D_{S})\), (\(M_{I}>6.8\)), where \(D_{S}=8\,\)kpc. The results are shown in Figure 9. There is a rapid transition at \(\theta_{\rm E}\sim 1\,\mu\)as: below this threshold, KMT loses all sensitivity, while _Roman_ retains constant sensitivity for almost a decade; above the threshold, KMT completely dominates the detections, reaching a factor 11 in the BD regime, \(\theta_{\rm E}\gtrsim 30\,\mu\)as. The physical reason for this dominance is simple. At the adopted threshold of detectability, \(\theta_{*,{\rm thresh}}=(3/2)\mu_{\rm rel}/\Gamma=0.256\,\mu\)as, i.e., \(M_{I,{\rm thresh}}=8.25\) or \(I_{S}=24.75\), the source is magnified by \(A_{\rm max}=2/\rho\to 230\) to \(I\sim 18.8\), which creates a marginally detectable event. Thus, all sources (down to this threshold) yield detectable events from either KMT or _Roman_, but because the former has 11 times more events, it has 11 times more detections. In fact, the completeness analysis of Section 7.1 shows that only of order half of such high-magnification events from uncataloged sources are recovered by current KMT searches. This could be rectified by additional specialized searches for such "spike events", but even without such an effort, KMT will still dominate in this regime. By the same token, at \(\theta_{\rm E}=1\,\mu\)as, faint sources are insufficiently magnified to boost them to detectability. For example, for near-optimal sources, \(\theta_{*}=\theta_{\rm E}\), i.e., \(M_{I}=3.28\) or \(I_{S}=19.78\) and with peak magnification \(A_{\rm max}=\sqrt{5}\), the difference magnitude is only \(I_{\rm diff}=19.55\). However, based on its far greater flux counts and lower background, such events are easily detected by _Roman_. We emphasize that the "flat" form of the _Roman_ curve over 3 decades does not mean that one expects equal numbers of detections across this range: based on what we know today (Gould et al., 2022), there could be of order 40 times more FFPs at \(\theta_{\rm E}=10^{-0.8}\mu\)as compared to \(\theta_{\rm E}=10^{+0.8}\mu\)as. 
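A compact version of this prescription, evaluated at the marginal KMT example quoted above (zero point, background, cadence, and \(\mu_{\rm rel}\) as adopted earlier; the numbers are approximate):

```python
import numpy as np

Gamma  = 4.0                    # cadence [hr^-1] (KMT prime fields; Roman is similar)
mu_rel = 6.0e3 / 8766.0         # 6 mas/yr expressed in uas/hr
I_zero, B_I = 28.0, 3.02e4      # KMT zero point and per-exposure background

def fspl_check(I_source, theta_star, theta_E, zero=I_zero, B=B_I, chi2_min=2000.0):
    """Simplified FSPL detectability: Delta chi^2 threshold plus the N_peak and A_max cuts."""
    f     = 10.0 ** (0.4 * (zero - I_source))
    rho   = theta_star / theta_E
    A_max = np.sqrt(1.0 + 4.0 / rho**2)
    N_pk  = 2.0 * Gamma * theta_star / mu_rel           # ~11.7 * (theta_*/uas)
    chi2  = N_pk * ((A_max - 1.0) * f) ** 2 / (f + B)
    ok    = (chi2 > chi2_min) and (N_pk >= 2.99) and (A_max > 1.06)   # small tolerance on N_peak >= 3
    return chi2, N_pk, A_max, ok

print(f"theta_*,thresh ~ {1.5 * mu_rel / Gamma:.3f} uas")        # ~0.256 uas (N_peak = 3)
# Marginal example quoted above: I_S ~ 24.75, theta_* just above threshold, BD-regime theta_E
chi2, N_pk, A_max, ok = fspl_check(24.75, 0.26, 30.0)
print(f"chi2 ~ {chi2:.0f}, N_peak ~ {N_pk:.1f}, A_max ~ {A_max:.0f}, detectable: {ok}")
```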
#### 7.3.3 Summary The only way to press forward the study of isolated bulge BDs is by FSPL events from ground-based surveys, mainly KMT. It is not possible to obtain a substantial number of full mass-distance measurements for these objects from any current, planned, or proposed experiments. Regarding FSPL BD events, there are no other current or currently planned experiments that could compete with KMT. The situation is more nuanced for FFPs. First, in the next decade, new experiments could yield mass-distance measurements, assuming that the current proposals for these are approved and implemented. Second, _Roman_ will be increasingly competitive for FFPs within the range \(2\gtrsim\theta_{\rm E}/\mu\)as \(\gtrsim 1\) and will be completely dominant for \(\theta_{\rm E}\lesssim 1\,\mu\)as. ### Comments on the Recent MOA FSPL Search As the present paper was being completed, Koshimoto et al. (2023) reported results from a comprehensive search for FSPL events in Microlensing Observations in Astrophysics (MOA) Collaboration data over the 9 years from 2006 to 2014. Here we comment on a few implications that relate to results and ideas that we have presented. The most important point is that Koshimoto et al. (2023) searched for FSPL events with both giant and dwarf sources, which is what we have broadly advocated here. In particular, the inclusion of "dwarf" (including main-sequence and subgiant) sources led to the discovery of a very small FFP, MOA-9yr-5919, with \(\theta_{\rm E}=0.90\pm 0.14\,\mu\)as. Being the second such discovery (after OGLE-2016-BLG-1928, \(\theta_{\rm E}=0.84\pm 0.06\,\mu\)as, Mroz et al. 2020b), it strongly implies that such objects are very common. That is, a single such discovery would be consistent with a low-probability, e.g., \(p=5\%\), detection from a relatively rare population. However, two such chance discoveries would occur only at \({\cal O}(p^{2})\). It was exactly this logic that led Gould et al. (2006) to conclude that "Cool Neptune-like Planets are Common" based on two detections, which was soon confirmed by Sumi et al. (2010) and then, subsequently, by of order two dozen other detections (Suzuki et al. 2016, Zang et al., in prep). The inclusion of dwarf sources was crucial to this discovery: if the same planet had transited a typical source from the Gould et al. (2022) giant-star survey, with \(\theta_{*}\sim 6\,\mu\)as, it would have had \(\rho\sim 6.7\) and hence excess magnification \(A-1\simeq 2/\rho^{2}\sim 4.5\%\) and would not have been detected. Sumi et al. (2023) show (their Table 3) that the MOA and KMT surveys are consistent in their constraints on the FFP population, including both the power-law index and its normalization \(Z\). In particular, at the same zero-point, they find \(Z=0.53^{+0.19}_{-0.40}\) versus \(0.39\pm 0.20\) FFPs per dex per (stars + BDs) for KMT. At first sight, one may wonder about the consistency of the detection rates of the KMT giant-source survey, which discovered 29 FSPL events satisfying \(I_{0}<16.5\), with the "giant component" of the MOA survey, which yielded 7 such events. However, we now show that the ratio of detections is consistent with expectations. First, Koshimoto et al. (2023) note that they are insensitive to the biggest (\(\theta_{*}\gtrsim 10\,\mu\)as) sources from the KMT survey due to saturation. (For many of these bright sources, KMT recovered from saturation using \(V\)-band observations.) From Figure 2 of Sumi et al. (2023), such a cut would eliminate \(\sim 1/3\) of KMT events. 
Second, the MOA detector is about half the size of the KMT detectors (2.2 versus 4 square degrees), and in line with this fact, it surveys roughly half the area (i.e., mainly southern bulge versus full bulge). Third, the KMT survey employs three telescopes, whereas the MOA survey uses one. Moreover, while MOA and KMTA have comparable conditions, KMTS and KMTC have better conditions. If we were considering very short events, which are mainly localized to a single observatory, then this would give KMT a 4:1 advantage. However, because giant-source events are very long, often covering two or more observatories, we reduce this estimate to 3:1. Finally, the MOA survey covered 9 years while the KMT covered 4 years. Combining all factors, we expect a ratio of MOA-to-KMT giant-source FSPL events of \((2/3)\times(1/2)\times(1/3)\times(9/4)=25\%\) compared to an observed ratio of 24%, which is consistent. This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia. Data transfer from the host site to KASI was supported by the Korea Research Environment Open NETwork (KREONET). Work by C.H. was supported by the grants of National Research Foundation of Korea (2020R1A4A2002885 and 2019R1A2C2085965). J.C.Y., S.-J. C., and I.-G. S acknowledge support from US NSF Grant No. AST-2108414. Y.S. acknowledges support from BSF Grant No. 2020740. W.Z. and H.Y. acknowledge support by the National Science Foundation of China (Grant No. 12133005).
2305.02789
On factor copula-based mixed regression models
In this article, a copula-based method for mixed regression models is proposed, where the conditional distribution of the response variable, given covariates, is modelled by a parametric family of continuous or discrete distributions, and the effect of a common latent variable pertaining to a cluster is modelled with a factor copula. We show how to estimate the parameters of the copula and the parameters of the margins, and we find the asymptotic behaviour of the estimation errors. Numerical experiments are performed to assess the precision of the estimators for finite samples. An example of an application is given using COVID-19 vaccination hesitancy from several countries. Computations are based on R package CopulaGAMM.
Pavel Krupskii, Bouchra R Nasri, Bruno N Remillard
2023-05-04T12:45:34Z
http://arxiv.org/abs/2305.02789v2
# On factor copula-based mixed regression models ###### Abstract. In this article, a copula-based method for mixed regression models is proposed, where the conditional distribution of the response variable, given covariates, is modelled by a parametric family of continuous or discrete distributions, and the effect of a common latent variable of a cluster is modelled with a factor copula. We show how to estimate the parameters of the copula and the parameters of the margins and find the asymptotic behaviour of the estimation errors. Numerical experiments are performed to assess the precision of the estimators for finite samples. An example of an application is given using COVID-19 vaccination hesitancy data from several countries. All developed methodologies are implemented in CopulaGAMM, available in CRAN. Key words and phrases: Copula; clustered data; mixed linear regression; mixed additive regression; covariates; latent variables. \({}^{*}\) The three authors contributed equally to all aspects of this paper. Partial funding in support of this work was provided by Mathematics for Public Health, the Natural Sciences and Engineering Research Council of Canada, the Fonds de recherche du Quebec - Nature et technologies, and the Fonds de recherche du Quebec - Sante.
The remainder of the paper is organized as follows. Section 2 describes the proposed model and its properties, and Section 3 presents the estimation of its parameters together with the asymptotic behaviour of the estimation errors. The precision of the estimators is then assessed in Section 4 through finite sample simulations. Finally, in Section 5, the proposed methodology is applied to explain COVID-19 vaccination hesitancy factors from several countries.

## 2. Description of the model and its properties

In what follows, for each cluster \(k\in\{1,\ldots,K\}\), the observations, denoted by \((Y_{ki},\mathbf{X}_{ki})\in\mathbb{R}^{1+d}\), \(i\in\{1,\ldots,n_{k}\}\), are exchangeable, and the distribution of \((Y_{k,1},\mathbf{X}_{k,1})\) is the same for all \(k\). Furthermore, for any \(k\in\{1,\ldots,K\}\) and \(i\in\{1,\ldots,n_{k}\}\), the conditional distribution of \(Y_{ki}\) given \(\mathbf{X}_{ki}=\mathbf{x}\) is
\[P(Y_{ki}\leq y|\mathbf{X}_{ki}=\mathbf{x})=G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x})}(y), \tag{2.1}\]
where \(G_{\mathbf{a}}\), \(\mathbf{a}\in\mathcal{A}\subset\mathbb{R}^{m}\), is a parametric family of univariate distributions, \(\mathbf{h}:\mathbb{R}^{m}\times\mathbb{R}^{d}\mapsto\mathcal{A}\) is a known link function, and each of its components can be a linear or non-linear function of \(\mathbf{x}\). Next, assume that the latent variables, denoted \(V_{1},\ldots,V_{K}\), are independent and uniformly distributed over \((0,1)\), and that they are also independent of the covariates \(\mathbf{X}=(\mathbf{X}_{11},\ldots,\mathbf{X}_{Kn_{K}})\), similarly to what is assumed for GAMMs.
Finally, assume that, conditionally on \((V_{1},\ldots,V_{K})\) and \(\mathbf{X}\), the responses \(Y_{ki}\), \(i\in\{1,\ldots,n_{k}\}\), are independent, with
\[P(Y_{ki}\leq y|\mathbf{X}_{ki}=\mathbf{x},V_{k}=v)=\partial_{v}C_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x})}\left\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x})}(y),v\right\}=\mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x})}\left\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x})}(y),v\right\}, \tag{2.2}\]
where \(C_{\boldsymbol{\varphi}}\), \(\boldsymbol{\varphi}\in\mathcal{O}\subset\mathbb{R}^{p}\), is a parametric family of bivariate copulas with density \(c_{\boldsymbol{\varphi}}\), \(\boldsymbol{\phi}\) is a known link function mapping \((\boldsymbol{\theta},\mathbf{x})\) into \(\mathcal{O}\), each of its components can be a linear or non-linear function of \(\mathbf{x}\), and \(\boldsymbol{\theta}=(\boldsymbol{\alpha},\boldsymbol{\beta})\). Writing the copula link in terms of the full parameter \(\boldsymbol{\theta}\) is required since the parameters of the copula might also include the parameters of the margin, as in the case of a mixed linear Gaussian regression with a random slope; see, e.g., Appendix A for details.

It follows from the previous assumptions and Equation (2.2) that the conditional distribution function of \(\mathbf{Y}=(Y_{11},\ldots,Y_{Kn_{K}})\) given \(\mathbf{X}=\mathbf{x}\) is given by
\[\prod_{k=1}^{K}\left[\int\left\{\prod_{i=1}^{n_{k}}\mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki})}\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y_{ki}),v_{k}\}\right\}dv_{k}\right], \tag{2.3}\]
which is an extension of the so-called 1-factor copula model introduced by Krupskii and Joe (2013). As a result, the log-likelihood \(L(\boldsymbol{\theta})\) can be written as \(L(\boldsymbol{\theta})=\sum_{k=1}^{K}\log f_{\boldsymbol{\theta},k}(\mathbf{y}_{k})\), where
\[f_{\boldsymbol{\theta},k}(\mathbf{y}_{k})=\int_{0}^{1}\left\{\prod_{i=1}^{n_{k}}f_{\boldsymbol{\theta},ki}(y_{ki},v)\right\}dv, \tag{2.4}\]
and, for \(v\in(0,1)\), \(f_{\boldsymbol{\theta},ki}(y_{ki},v)\) is a density with respect to a reference measure \(\nu\) (Lebesgue's measure or the counting measure on \(\mathbb{Z}\)), i.e., \(\int_{\mathbb{R}}f_{\boldsymbol{\theta},ki}(y,v)\nu(dy)=1\). In particular, in the continuous case,
\[f_{\boldsymbol{\theta},ki}(y,v)=g_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y)c_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki})}\left\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y),v\right\},\]
where \(g_{\mathbf{a}}\) is the density of \(G_{\mathbf{a}}\), while in the discrete case,
\[f_{\boldsymbol{\theta},ki}(y,v)=\mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki})}\left\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y),v\right\}-\mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki})}\left\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y-),v\right\}.\]
As a by-product of (2.4), the conditional density of \(V_{k}\), given the observations in cluster \(k\), is
\[\prod_{i=1}^{n_{k}}f_{\boldsymbol{\theta},ki}(y_{ki},v)\Big/f_{\boldsymbol{\theta},k}(\mathbf{y}_{k}),\qquad v\in(0,1). \tag{2.5}\]
Hence (2.5) can be used to predict \(V_{k}\).
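To make the likelihood (2.4) concrete, the integral over the latent factor can be approximated by Gauss-Legendre quadrature. The following is a minimal numerical sketch, not part of the CopulaGAMM package, assuming a Clayton linking copula with constant parameter \(\varphi>0\) and a normal margin whose mean is linear in a single covariate; all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def clayton_density(u, v, phi):
    """Clayton copula density c_phi(u, v) for phi > 0."""
    s = u ** (-phi) + v ** (-phi) - 1.0
    return (1.0 + phi) * (u * v) ** (-1.0 - phi) * s ** (-2.0 - 1.0 / phi)

def cluster_loglik(y, x, alpha, phi, n_nodes=35):
    """Log of f_{theta,k}(y_k) in (2.4), continuous case: N(alpha0 + alpha1*x, sigma)
    margins linked by a Clayton factor copula; the latent v is integrated out
    numerically with Gauss-Legendre nodes mapped to (0, 1)."""
    alpha0, alpha1, log_sigma = alpha
    sigma = np.exp(log_sigma)                      # keeps the scale positive
    mu = alpha0 + alpha1 * x
    g = norm.pdf(y, loc=mu, scale=sigma)           # marginal densities g_h(y_ki)
    u = norm.cdf(y, loc=mu, scale=sigma)           # marginal cdfs G_h(y_ki)
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    v = 0.5 * (nodes + 1.0)                        # map [-1, 1] onto (0, 1)
    w = 0.5 * weights
    integrand = np.prod(clayton_density(u[:, None], v[None, :], phi), axis=0)
    return np.sum(np.log(g)) + np.log(np.sum(w * integrand))
```

The same quadrature, applied to (2.5), also gives the posterior density of \(V_{k}\) up to the normalising constant \(f_{\boldsymbol{\theta},k}(\mathbf{y}_{k})\).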
Note that in the continuous case, (2.5) simplifies to \[\prod_{i=1}^{n_{k}}c_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki})} \left\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y),v\right\} \Big{/}\int_{0}^{1}\left[\prod_{i=1}^{n_{k}}c_{\boldsymbol{\phi}(\boldsymbol{ \theta},\mathbf{x}_{ki})}\left\{G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_ {ki})}(y),s\right\}\right]ds, \tag{2.6}\] \(v\in(0,1)\). Next, it follows that for observations in cluster \(k\), the conditional distribution, the conditional quantile, and the conditional expectation of \(Y\) given \(\mathbf{X}=\mathbf{x},V_{k}=v\), are given respectively for \(y\in\mathbb{R}\) and \(v\in(0,1)\) by \[P(Y\leq y|\mathbf{X}=\mathbf{x},V_{k}=v)=\mathcal{F}(y,\mathbf{x},v)= \mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x})}\left\{G_{ \mathbf{h}(\boldsymbol{\alpha},\mathbf{x})}(y),v\right\}, \tag{2.7}\] \[\mathcal{Q}(u,\mathbf{x},v)=G_{\mathbf{h}(\boldsymbol{\alpha}, \mathbf{x})}^{-1}\left\{\mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{\theta}, \mathbf{x})}^{-1}\left(u,v\right)\right\},\quad u\in(0,1), \tag{2.8}\] \[E(Y|\mathbf{X}=\mathbf{x},V_{k}=v) = \int_{0}^{1}\mathcal{Q}(u,\mathbf{x},v)du \tag{2.9}\] \[= \int_{0}^{\infty}\left\{1-\mathcal{F}(y,\mathbf{x},v)\right\}dy -\int_{-\infty}^{0}\mathcal{F}(y,\mathbf{x},v)dy.\] In particular, if \(G_{\mathbf{a}}\) is a location-scale distribution, i.e., \(G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x})}(y)=G_{0}\left(\frac{y-\mu_{ \mathbf{x},\mathbf{x}}}{\sigma_{\boldsymbol{\alpha},\mathbf{x}}}\right)\), then \[E(Y|\mathbf{X}=\mathbf{x},V_{k}=v)=\mu_{\boldsymbol{\alpha},\mathbf{x}}+\sigma _{\boldsymbol{\alpha},\mathbf{x}}\kappa_{\boldsymbol{\theta},\mathbf{x},v}, \tag{2.10}\] where \(\kappa_{\boldsymbol{\theta},\mathbf{x},v}=\int_{0}^{\infty}\left[1- \mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x})}\left\{G_{0}( z),v\right\}\right]dz-\int_{-\infty}^{0}\mathcal{C}_{\boldsymbol{\phi}(\boldsymbol{ \theta},\mathbf{x})}\left\{G_{0}(z),v\right\}dz.\) Appendix B provides examples to illustrate graphically the difference between GAMM and our proposed approach. ## 3. Estimation of parameters Let \(N_{K}=\sum_{k=1}^{K}n_{k}\) be the total number of observations. For the estimation of \(\mathbf{\theta}=(\alpha,\mathbf{\beta})\), one will consider the full parametric case, i.e., \[\mathbf{\theta}_{K}=\left(\mathbf{\alpha}_{K},\mathbf{\beta}_{K}\right)=\arg\max_{(\mathbf{ \alpha},\mathbf{\beta})\in\mathcal{P}_{1}\times\mathcal{P}_{2}}L(\mathbf{\alpha},\mathbf{ \beta}).\] Assume now that the link function \(\mathbf{h}(\mathbf{\alpha},\mathbf{x})\) is twice continuously differentiable with respect to \(\mathbf{\alpha}\) and the density \(g_{\mathbf{a}}\) (in the continuous case) or the cdf \(G_{\mathbf{a}}\) (in the discrete case) is twice continuously differentiable with respect to \(\mathbf{a}\), with first and second order derivatives denoted respectively by \(\dot{g}_{\mathbf{a}}\) and \(\ddot{g}_{\mathbf{a}}\), or \(\dot{G}_{\mathbf{a}}\) and \(\ddot{G}_{\mathbf{a}}\). Also assume that \(\mathbf{\phi}(\mathbf{\theta},\mathbf{x})\) is twice continuously differentiable with respect to \(\mathbf{\theta}=(\mathbf{\alpha},\mathbf{\beta})\) and \(c_{\mathbf{\varphi}}\) is twice continuously differentiable with respect to \(\mathbf{\varphi}\), with first and second order derivatives denoted respectively by \(\dot{c}_{\mathbf{\varphi}}\) and \(\ddot{c}_{\mathbf{\varphi}}\). 
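In practice, the maximiser \(\boldsymbol{\theta}_{K}\) can be obtained by direct numerical optimisation of the log-likelihood. Below is a minimal sketch, reusing the `cluster_loglik` function from the sketch above (so again a Clayton copula with normal margins, and with the copula parameter kept positive through a log reparameterisation); the names are illustrative and not CopulaGAMM functions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, clusters):
    """theta = (alpha0, alpha1, log_sigma, log_phi); clusters is a list of (y, x) pairs."""
    alpha, log_phi = theta[:3], theta[3]
    return -sum(cluster_loglik(y, x, alpha, np.exp(log_phi)) for y, x in clusters)

def fit(clusters, theta0=(0.0, 0.0, 0.0, 0.0)):
    """Quasi-Newton maximisation of L(theta); returns the estimate theta_K."""
    res = minimize(neg_loglik, np.asarray(theta0, dtype=float),
                   args=(clusters,), method="BFGS")
    return res.x, res
```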
Using (2.4), one has \(\dot{L}(\boldsymbol{\theta})=\nabla_{\boldsymbol{\theta}}L(\boldsymbol{\theta})=\sum_{k=1}^{K}\frac{\dot{f}_{\boldsymbol{\theta},k}(\mathbf{y}_{k})}{f_{\boldsymbol{\theta},k}(\mathbf{y}_{k})}\), where
\[\dot{f}_{\boldsymbol{\theta},k}(\mathbf{y}_{k})=\sum_{i=1}^{n_{k}}\int\left\{\prod_{j\neq i}f_{\boldsymbol{\theta},kj}(y_{kj},v)\right\}\dot{f}_{\boldsymbol{\theta},ki}(y_{ki},v)\;dv. \tag{3.1}\]
Appendix C gives more details about the computation of the log-likelihood function and its gradient. These are implemented in the R package CopulaGAMM, available on CRAN (Krupskii et al., 2023).

The proof of the following convergence result is given in Appendix D.3. It depends on auxiliary results and technical conditions stated in Appendices D.1-D.2. Before stating the result, set
\[\boldsymbol{\eta}_{ki}=\frac{1}{f_{\boldsymbol{\theta}_{0},k}(\mathbf{y}_{k})}\int\left\{\prod_{j\neq i}f_{\boldsymbol{\theta}_{0},kj}(y_{kj},v)\right\}\left.\nabla_{\boldsymbol{\theta}}f_{\boldsymbol{\theta},ki}(y_{ki},v)\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{0}}dv.\]

**Theorem 3.1**.: _Assume that, as \(K\rightarrow\infty\), \(\lambda_{K}=\frac{1}{N_{K}}\sum_{k=1}^{K}n_{k}^{2}\rightarrow\lambda\in[1,\infty)\) and \(\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}n_{k}^{4}\to 0\). Further assume that Assumptions D.1 and D.2 hold. Then \(\boldsymbol{\Theta}_{K}={N_{K}}^{1/2}(\boldsymbol{\theta}_{K}-\boldsymbol{\theta}_{0})\) converges in law to a centred Gaussian random vector with covariance matrix \(\Sigma^{-1}\), where \(\Sigma=\Sigma_{11}+(\lambda-1)\Sigma_{12}\), with \(\Sigma_{11}=E\left(\boldsymbol{\eta}_{k1}\boldsymbol{\eta}_{k1}^{\top}\right)\) and \(\Sigma_{12}=E\left(\boldsymbol{\eta}_{k1}\boldsymbol{\eta}_{k2}^{\top}\right)\). Moreover, \(\Sigma\) can be estimated by \(\hat{\Sigma}=\frac{1}{N_{K}}\sum_{k=1}^{K}\frac{\dot{f}_{\boldsymbol{\theta}_{K},k}(\mathbf{y}_{k})\,\dot{f}_{\boldsymbol{\theta}_{K},k}(\mathbf{y}_{k})^{\top}}{f_{\boldsymbol{\theta}_{K},k}^{2}(\mathbf{y}_{k})}\). If \(n_{k}\equiv n\), then the convergence result holds if one only assumes that Assumption D.1 holds._

## 4. Performance of the estimators

In this section, we assess the performance of the maximum likelihood estimator proposed in Section 3 in five numerical experiments. Note that the running time for each sample is 1-2 seconds, which is quite fast. For each of the five models we considered, we generated 1000 samples from Equation (2.3) and considered two scenarios. In the first scenario, \(n_{k}\equiv 5\) and \(K\in\{5,20,100,500\}\), while in the second scenario, \(K=5\) and \(n_{k}\equiv n\in\{5,20,100,500\}\).

For the first numerical experiment, we considered Clayton copulas \(C_{\varphi}\) with parameter \(\varphi=2\), so that Kendall's tau equals \(0.5\), and normal margins \(G_{\mathbf{a}}\) with mean and variance parameters \(\mathbf{a}=(10,1)\). Figure 1 displays the boxplots of the estimated parameters for the two scenarios. Next, in the second numerical experiment, we used the same margins as in the first experiment, but we considered Clayton copulas \(C_{\boldsymbol{\phi}(\boldsymbol{\beta},\mathbf{x}_{ki})}\) with \(\boldsymbol{\beta}=(\beta_{1},\beta_{2})^{\top}=(1,-1.5)^{\top}\), \(\mathbf{x}_{ki}=(1,k/K)^{\top}\), and \(\boldsymbol{\phi}(\boldsymbol{\beta},\mathbf{x})=2e^{\boldsymbol{\beta}^{\top}\mathbf{x}}\), \(k=1,\ldots,K\). Figure 2 shows the corresponding results.
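For these experiments, samples from Equation (2.3) can be drawn by conditional inversion: draw \(V_{k}\) uniformly, obtain each \(U_{ki}\) by inverting \(u\mapsto\partial_{v}C_{\varphi}(u,v)\) at an independent uniform draw, and apply the marginal quantile function. A sketch for the setting of the first experiment (\(\varphi=2\), \(N(10,1)\) margins); the closed-form inverse used below is specific to the Clayton family, and the seed and helper names are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def clayton_hinv(w, v, phi):
    """Inverse in u of the h-function u -> d/dv C_phi(u, v) for the Clayton copula,
    so that U = clayton_hinv(W, v, phi) with W ~ U(0,1) has cdf d/dv C_phi(., v)."""
    return (1.0 + v ** (-phi) * (w ** (-phi / (1.0 + phi)) - 1.0)) ** (-1.0 / phi)

def simulate(K, n, mu=10.0, sigma=1.0, phi=2.0):
    """K clusters of size n from the factor copula model (2.3) with N(mu, sigma) margins."""
    clusters = []
    for _ in range(K):
        v = rng.uniform()                          # latent variable V_k
        u = clayton_hinv(rng.uniform(size=n), v, phi)
        y = norm.ppf(u, loc=mu, scale=sigma)
        x = np.zeros(n)                            # no covariate effect in this experiment
        clusters.append((y, x))
    return clusters

# e.g. scenario 1 with K = 100 clusters of size 5, then fit() from the sketch above
clusters = simulate(K=100, n=5)
```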
For the third numerical experiment, we used Clayton copulas \(C_{\boldsymbol{\phi}(\boldsymbol{\beta},\mathbf{x})}\) with \(\boldsymbol{\beta}=(\beta_{1},\beta_{2})^{\top}=(1,-1.5)^{\top}\), \(\mathbf{x}=\left(\mathbf{x}^{(1)},\mathbf{x}^{(2)}\right)\), \(\mathbf{x}_{ki}^{(1)}=(1,k/K)^{\top}\), \(\mathbf{x}_{ki}^{(2)}=(1,i/n_{k})^{\top}\), with \(\varphi(\boldsymbol{\beta},\mathbf{x})=2e^{\boldsymbol{\beta}^{\top}\mathbf{x}^{(1)}}\), \(k\in\{1,\ldots,K\}\), and the marginal distribution \(G_{\mathbf{h}(\mathbf{a},\mathbf{x})}\) is normal with parameters \(\mathbf{a}=(a_{11},a_{12},a_{2})^{\top}=(5,5,1)^{\top}\), where \(h_{1}(\boldsymbol{\alpha},\mathbf{x})=(a_{11},a_{12})^{\top}\mathbf{x}^{(2)}\), so that the mean parameter is not constant and depends on the observation index, which can be interpreted as being a linear function of time, since longitudinal data are a special case of our model. The results of this experiment for the two scenarios are displayed in Figure 3.

Figure 1. Boxplots of parameter estimates for scenario 1 (left) and scenario 2 (right), based on 1000 samples from Clayton bivariate copulas with parameter \(\varphi=2\) and \(N(10,1)\) marginals.

Finally, for the fourth and fifth numerical experiments, we used the same copula as in the third experiment, but we considered discrete margins: Poisson margins with rate parameter \(h_{1}\left(\mathbf{a},\mathbf{x}^{(2)}\right)=\exp\left(\mathbf{a}^{\top}\mathbf{x}^{(2)}\right)\), and Bernoulli margins with parameter \(h_{1}(\mathbf{a},\mathbf{x}^{(2)})=\left\{1+\exp\left(-\mathbf{a}^{\top}\mathbf{x}^{(2)}\right)\right\}^{-1}\). In both cases, \(\mathbf{a}=(a_{11},a_{12})^{\top}=(2,-3)^{\top}\). The results of these experiments are displayed in Figures 4-5. Note that the Clayton copula is used in the simulations, but the results are similar for other copula families.

Figure 2. Boxplots of parameter estimates for scenario 1 (left) and scenario 2 (right), based on 1000 samples from Clayton bivariate copulas with parameters \(\boldsymbol{\phi}(\boldsymbol{\beta},\mathbf{x}_{ki})=2\exp(1-1.5k/K)\) and \(N(10,1)\) marginals.

One can see from Figure 1 that in the first scenario, when the number of clusters is large, parameter estimates converge to their true values, as expected by Theorem 3.1. In fact, the estimation of the parameters is good as soon as \(K=20\). On the other hand, for the second scenario, the parameter estimates seem asymptotically unbiased but are not consistent if the number of clusters is fixed and the cluster size is growing instead. This should not be surprising since the same behavior is also observed in simple mixed linear models. Next, according to Figures 2-5, similar conclusions can be reached for the two scenarios: when the number of clusters is large, parameter estimates converge to their true values, while when the number of clusters is small, as in the second scenario, parameter estimates seem asymptotically unbiased but they are still not consistent. One can also see from these figures that when \(K=20\), the estimation of \(\beta_{1},\beta_{2}\) can vary a lot, although the impact on the copula parameters \(\boldsymbol{\phi}(\boldsymbol{\beta},\mathbf{x}_{ki})\) might not be that significant. This is also true of the margin parameters, while their impact on the mean might not be significant.

Figure 4. Boxplots of parameter estimates for scenario 1 (left) and scenario 2 (right), based on 1000 samples from Clayton bivariate copulas with parameters \(\boldsymbol{\phi}(\boldsymbol{\beta},\mathbf{x}_{ki})=2\exp(1-1.5k/K)\) and Poisson marginals with rates \(h_{1}(\mathbf{a},\mathbf{x}_{k,i})=\exp(2-3i/n_{k})\).

Figure 5. Boxplots of parameter estimates for scenario 1 (left) and scenario 2 (right), based on 1000 samples from Clayton bivariate copulas with parameters \(\boldsymbol{\phi}(\boldsymbol{\beta},\mathbf{x}_{ki})=2\exp(1-1.5k/K)\) and Bernoulli marginals with \(h_{1}(\mathbf{a},\mathbf{x}_{k,i})=\{1+\exp(-2+3i/n_{k})\}^{-1}\).

## 5. Case study: COVID-19 vaccine hesitancy survey data from 23 countries

As an example of application, we use survey data about COVID-19 vaccine hesitancy from 23 countries (Lazarus et al., 2022). These data have been recently analyzed by Awasthi et al. (2023) using linear mixed models combined with a Bayesian network. The data come from 23 countries: Brazil, Canada, China, Ecuador, France, Germany, Ghana, India, Italy, Kenya, Mexico, Nigeria, Peru, Poland, Russia, Singapore, South Africa, South Korea, Spain, Sweden, Turkey, the UK, and the US. In this study, we are interested in understanding the main factors regarding vaccine hesitancy and their changes across the studied countries. Our variable of interest consists of responses to two questions: "I will take the COVID-19 vaccine when it is available to me" (Q8 in the survey data) or "Have you received at least one dose of a COVID-19 vaccine?" (Q7 in the survey data). Q8 has four possible answers: Y=0 (Strongly disagree), Y=1 (Somewhat disagree or Unsure/No opinion), Y=2 (Somewhat agree), and Y=3 (Strongly agree or answered yes to Q7). We then defined a new variable of interest, namely \(Y_{1}=\mathbb{I}(Y\geq 2)\), i.e., \(Y_{1}=0\) if the person disagrees with taking the vaccine when available, and \(Y_{1}=1\) if the person agrees. In addition, we considered five covariates which are averages of questions related to the following scores: "Perception of risk" (Q1-Q4), "Trust" (Q5, Q6, Q24-Q26), "Restriction measures" (Q12-Q17), "Financial status" (Q18, Q54, corresponding to "Loss of Income" and "Monthly House Income"), and "Age" (Q27). Our data set contains 23135 observations. We considered a copula-based model with a binomial margin for \(Y_{1}\) and the five covariates. The best copula-based model is chosen based on the smallest BIC from 5 copula families (Frank, Gumbel, Clayton, Gaussian, Student) and a combination of covariates. The selected copula model that best fits our data is the Frank copula with covariates \(X_{1}\) (Restriction measures score), \(X_{2}\) (Trust score), and \(X_{3}\) (Perception of risk score). It turns out that "Financial status" and "Age" are not significant predictors for vaccine hesitancy. The estimated Kendall's tau for the Frank copula is 0.178 (corresponding to a parameter \(\alpha=1.648971\)), while the logit of the marginal probability of agreeing is \(-4.425433+1.447809x_{1}+1.02689x_{2}+1.441219x_{3}\), which means that the estimated probability of agreeing to be vaccinated is an increasing function of these three scores.
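Given these estimates, the fitted conditional probability of agreeing to be vaccinated follows from (2.2) with a Bernoulli margin: \(P(Y_{1}=1\mid\mathbf{X}=\mathbf{x},V=v)=1-\partial_{v}C_{\alpha}\{1-p(\mathbf{x}),v\}\). A small sketch using the reported coefficients and the estimated latent values given in Table 1 below; the score values passed to the function are purely illustrative, and the code is not part of CopulaGAMM.

```python
import numpy as np

def frank_h(u, v, alpha):
    """h(u | v) = d/dv C_alpha(u, v) for the Frank copula."""
    eu, ev, e1 = np.expm1(-alpha * u), np.expm1(-alpha * v), np.expm1(-alpha)
    return eu * np.exp(-alpha * v) / (e1 + eu * ev)

def prob_agree(x1, x2, x3, v, alpha=1.648971,
               b=(-4.425433, 1.447809, 1.02689, 1.441219)):
    """P(Y_1 = 1 | X = x, V = v): Bernoulli margin linked by the fitted Frank copula."""
    p = 1.0 / (1.0 + np.exp(-(b[0] + b[1] * x1 + b[2] * x2 + b[3] * x3)))
    return 1.0 - frank_h(1.0 - p, v, alpha)

# contrast two countries at the same (illustrative) scores via their estimated V
for country, v_hat in [("South Africa", 0.0494), ("Spain", 0.9519)]:
    print(country, round(float(prob_agree(1.0, 1.0, 1.0, v_hat)), 3))
```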
This is illustrated in Figure 6, where the estimated probability of agreeing to be vaccinated, given values of the three covariates, is displayed for South Africa (\(\hat{V}=0.0494\)), France (\(\hat{V}=0.4326\)), India (\(\hat{V}=0.7572\)), and Spain (\(\hat{V}=0.9519\)); the estimated values of \(V\) by country are given in Table 1. From Figure 6, we can see that when the level of trust and the perception of risk are both high, the probability of being vaccinated is almost independent of the level of the restriction measures, especially if the estimate of the latent variable is large. However, this factor becomes more important when the combination of trust and the perception of risk is not high enough. This observation holds true across the studied countries, with some variation in the estimated probability of agreeing to be vaccinated, which depends on the latent variable \(V\). Note that for the Frank copula, the conditional expectation is an increasing function of \(V\). This explains the results displayed in Figure 6.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Country & South Africa & Kenya & Ghana & Nigeria & Russia & Ecuador & South Korea & Turkey \\ \(V\) & 0.0494 & 0.0760 & 0.0786 & 0.0816 & 0.1041 & 0.1468 & 0.2025 & 0.3679 \\ \hline Country & Peru & France & Poland & Sweden & India & Mexico & Brazil & US \\ \(V\) & 0.3869 & 0.4326 & 0.5228 & 0.6338 & 0.7572 & 0.7795 & 0.7820 & 0.7848 \\ \hline Country & China & Canada & Italy & Singapore & Germany & UK & Spain & \\ \(V\) & 0.7900 & 0.8534 & 0.8892 & 0.8907 & 0.8954 & 0.8992 & 0.9519 & \\ \hline \end{tabular} \end{table} Table 1. Estimated value of the latent variable by country.

Figure 6. Estimated probabilities to agree to be vaccinated by scores of restriction measures (\(X_{1}\)), by values of trust (\(X_{2}\)) and perception of risk (\(X_{3}\)), for South Africa (\(\hat{V}=0.0494\)), France (\(\hat{V}=0.4326\)), India (\(\hat{V}=0.7572\)), and Spain (\(\hat{V}=0.9519\)).

## 6. Conclusion

We showed how factor copulas can be used to model the dependence for clustered data. Using factor copulas, one can predict a continuous or a discrete response variable using a link function of covariates that are used to model the margins as well as the copula parameters. Our proposed model covers the GAMM model and generalises the existing copula-based approaches proposed in the literature. We also prove the convergence of the estimators, which depends on the size and the number of clusters in the model. All computations proposed in this paper are implemented in the R package CopulaGAMM, which is available on CRAN. In this package, one can find the estimation of the conditional mean and the conditional quantiles for the proposed model using 10 copula families and their rotations, as well as the most common discrete and continuous margins for the variable of interest.
## Appendix A Recovering the mixed linear Gaussian model using the factor copula-based mixed model

The mixed linear Gaussian model defined by (1.1), i.e., \(Y_{ki}=\mathbf{b}^{\top}\mathbf{X}_{ki}+\eta_{k}+\epsilon_{ki}\), with \(\epsilon_{ki}\sim N(0,\sigma_{\epsilon}^{2})\) and \(\eta_{k}\sim N(0,\sigma_{\eta}^{2})\), corresponds to \(\eta_{k}=\rho\sigma\Phi^{-1}(V_{k})\), with a Gaussian cdf \(G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x})}\) with mean \(h_{1}(\boldsymbol{\alpha},\mathbf{x})=\mathbf{b}^{\top}\mathbf{x}=\sum_{j=1}^{d}\alpha_{j}x_{j}\), standard deviation \(\sigma=h_{2}(\boldsymbol{\alpha},\mathbf{x})=\sqrt{\sigma_{\eta}^{2}+\sigma_{\epsilon}^{2}}=\exp(\alpha_{d+1})\), and a normal copula with parameter \(\rho=\frac{\sigma_{\eta}}{\sigma}=\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x})=\left(1+e^{-\beta_{1}}\right)^{-1}\), so \(\boldsymbol{\phi}\) does not depend on \(\boldsymbol{\alpha}\). Here the parameters are \((\mathbf{b},\sigma,\rho)\), from which one obtains \(\sigma_{\eta}=\rho\sigma\) and \(\sigma_{\epsilon}^{2}=\sigma^{2}\left(1-\rho^{2}\right)\). As a result, setting \(\bar{e}_{k}=\bar{y}_{k}-\mathbf{b}^{\top}\bar{\mathbf{x}}_{k}\) and \(\lambda_{k}=\frac{n_{k}\rho^{2}}{1-\rho^{2}+n_{k}\rho^{2}}\), \(k\in\{1,\ldots,K\}\), the posterior distribution of \(\eta_{k}\), given the observations in cluster \(k\), is Gaussian with mean \(\lambda_{k}\bar{e}_{k}\) and variance \(\sigma_{\eta}^{2}(1-\lambda_{k})\) in the linear mixed model, while in the copula-based model, the posterior cdf of \(V_{k}\), given the observations in cluster \(k\), is \(\Phi\left(\frac{\Phi^{-1}(v)-\lambda_{k}\bar{e}_{k}/\sigma_{\eta}}{\sqrt{1-\lambda_{k}}}\right)\). Therefore, the posterior median \(m_{k}\) of \(V_{k}\) satisfies \(\Phi^{-1}(m_{k})=\frac{\lambda_{k}\bar{e}_{k}}{\sigma_{\eta}}\). Since the conditional distribution of \(Y_{ki}\) given \(V_{k}=v\) and \(\mathbf{X}_{ki}=\mathbf{x}_{ki}\) is Gaussian with mean \(\mathbf{b}^{\top}\mathbf{x}_{ki}+\sigma_{\eta}\Phi^{-1}(v)\) and standard deviation \(\sigma_{\epsilon}\), it follows that the prediction of \(Y_{ki}\) is \(\hat{Y}_{ki}=\mathbf{b}^{\top}\mathbf{x}_{ki}+\lambda_{k}\bar{e}_{k}\), which is the same prediction as in the mixed linear model.
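A small numerical sketch of this correspondence, assuming the parameterisation \((\mathbf{b},\sigma,\rho)\) above (the function name and the made-up data are illustrative): it computes the shrinkage factor \(\lambda_{k}\), the posterior median of \(V_{k}\), and checks that the copula-based prediction coincides with the usual BLUP-type prediction of the mixed linear model.

```python
import numpy as np
from scipy.stats import norm

def copula_blup(y, X, b, sigma, rho):
    """One cluster: prediction of Y_ki from the Gaussian factor-copula parameterisation."""
    n_k = len(y)
    sigma_eta = rho * sigma                            # sigma_eta = rho * sigma
    lam = n_k * rho ** 2 / (1.0 - rho ** 2 + n_k * rho ** 2)
    e_bar = np.mean(y - X @ b)                         # \bar e_k
    m_k = norm.cdf(lam * e_bar / sigma_eta)            # posterior median of V_k
    y_hat = X @ b + sigma_eta * norm.ppf(m_k)          # equals X @ b + lam * e_bar
    return m_k, y_hat

# quick check of the identity on made-up data
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(5), rng.normal(size=5)])
b, sigma, rho = np.array([1.0, 0.5]), 2.0, 0.6
y = (X @ b + rho * sigma * norm.ppf(rng.uniform())
     + sigma * np.sqrt(1 - rho ** 2) * rng.normal(size=5))
m_k, y_hat = copula_blup(y, X, b, sigma, rho)
lam = 5 * rho ** 2 / (1 - rho ** 2 + 5 * rho ** 2)
assert np.allclose(y_hat, X @ b + lam * np.mean(y - X @ b))
```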
As for the popular model of mixed regression with random intercept and random slopes given by (A.1) \[Y_{ki}=\mathbf{b}^{\top}\mathbf{X}_{ki}+\boldsymbol{\eta}_{k}^{\top}\mathbf{Z }_{ki}+\varepsilon_{ki},\] where \(\boldsymbol{\eta}_{k}\) is a centred Gaussian vector with covariance matrix \(\Sigma_{\boldsymbol{\eta}}\), independent of \(\varepsilon_{ki}\), and the first column of \(\mathbf{Z}_{ki}\) is identically 1, it corresponds to the copula-based model with \(\boldsymbol{\eta}_{k}^{\top}\mathbf{z}=\left(\mathbf{z}^{\top}\Sigma_{ \boldsymbol{\eta}}\mathbf{z}\right)^{1/2}\Phi^{-1}(V_{k})\), where \(V_{k}\) is the associated latent variable, the cdf of \(Y_{ki}\) given \(\mathbf{X}_{ki}=\mathbf{x},\mathbf{Z}_{ki}=\mathbf{z}\) is Gaussian with mean \(h_{1}(\boldsymbol{\alpha},\mathbf{x})=\mathbf{b}^{\top}\mathbf{x}\), standard deviation \(h_{2}(\boldsymbol{\alpha},\mathbf{z})=\left(\sigma_{\varepsilon}^{2}+\mathbf{z }^{\top}\Sigma_{\boldsymbol{\eta}}\mathbf{z}\right)^{1/2}=\left(\mathbf{z}^{ \top}\Sigma\mathbf{z}\right)^{1/2}\), with \(\Sigma=\Sigma_{\boldsymbol{\eta}}+\sigma_{\epsilon}^{2}\mathbf{e}_{1}\mathbf{ e}_{1}^{\top}\), and the copula is a normal copula with parameter \(\rho(\mathbf{z})=\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{z})=\left( \frac{\mathbf{z}^{\top}\Sigma_{\boldsymbol{\eta}}\mathbf{z}}{\sigma_{ \varepsilon}^{2}+\mathbf{z}^{\top}\Sigma_{\boldsymbol{\eta}}\mathbf{z}}\right)^ {1/2}\). One can see that \(\boldsymbol{\alpha}\) is used to estimate \(b\) and \(\Sigma\), while \(\boldsymbol{\beta}=\beta_{1}\) is used to estimate \(r=\left(1+e^{-\beta_{1}}\right)^{-1}=\frac{\sigma_{\boldsymbol{\eta}}}{\sigma}\), viz. \(\rho^{2}(\mathbf{z})=1-\frac{\sigma^{2}\left(1-r^{2}\right)}{h_{2}^{2}( \boldsymbol{\alpha},\mathbf{z})}\), where \(\sigma_{\boldsymbol{\eta}}^{2}=(\Sigma_{\boldsymbol{\eta}})_{11}\), \(\sigma^{2}=\sigma_{\boldsymbol{\eta}}^{2}+\sigma_{\epsilon}^{2}=\Sigma_{11}\). In this case, one sees that the copula parameter really depends on both \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\), not just \(\boldsymbol{\beta}\). In addition, one gets \(\sigma_{\boldsymbol{\eta}}=r\sigma\), \(\sigma_{\epsilon}^{2}=\sigma^{2}\left(1-r^{2}\right)\), so \(\Sigma_{\boldsymbol{\eta}}=\Sigma-\sigma_{\epsilon}^{2}\mathbf{e}_{1}\mathbf{e}_{1 }^{\top}\). Note that for the mixed regression model described by Equation (A.1), the conditional distribution of \(\mathbf{\eta}_{k}\) given \((y_{ki},\mathbf{x}_{ki},\mathbf{z}_{ki})\), \(i\in\{1,\ldots,n_{k}\}\), is a Gaussian distribution with mean \(\mu_{k}=A_{k}\dfrac{1}{\sigma_{\epsilon}^{2}}\sum_{i=1}^{n_{k}}z_{ki}(y_{ki}-b^ {\top}x_{ki})\) and covariance matrix \(A_{k}=\left(\Sigma_{\mathbf{\eta}}^{-1}+\dfrac{1}{\sigma_{\epsilon}^{2}}\sum_{i=1 }^{n_{k}}z_{ki}z_{ki}^{\top}\right)^{-1}\). ## Appendix B Graphical illustrations of the difference between GAMM and factor copula-based approach On the one hand, consider a logistic model with one covariate, where the link prediction of the GAMM model satisfies \(\log\left\{\dfrac{P(Y=1|X=x,\eta)}{P(Y=0|X=x,\eta)}\right\}=s(x)+\eta\), where \(s(x)\) is a spline function. In this case, for different values of \(\eta\), the link predictions are random translations of each other. 
On the other hand, the link prediction of the copula-based model satisfies \(\log\left\{\dfrac{P(Y=1|X=x,V=v)}{P(Y=0|X=x,V=v)}\right\}=\log\left[\dfrac{1- \mathcal{C}\{1-p(x),v\}}{\mathcal{C}\{1-p(x),v\}}\right]\), with \(p(x)=P(Y=1|X=x)\), \(\log\left\{\dfrac{p(x)}{1-p(x)}\right\}=s(x)\), and \(\mathcal{C}(u,v)=\partial_{v}C(u,v)\), where \(C\) is the associated copula. In Figure 7, the link predictions for \(V\in\{0.1,0.5.0.9\}\), for a normal copula with parameter \(\rho=0.9\) and a non-central Gaussian copula (Nasri, 2020) with parameters \(\rho=0.9\), \(a_{1}=0.1\), \(a_{2}=2.9\) are displayed for the linear case \(s_{1}(x)=-1+0.5x\) and the non-linear case \(s_{2}(x)=\min(x,1)+2\max(0,x-1)\). For \(s_{1}\), one can see that for each copula, the three curves do not have constant slopes and they are not parallel. For \(s_{2}\), one can see that the three curves are not parallel. Also, changing the copula copula family (Gaussian copula vs non-central squared copula) modify the shape of the curves. ## Appendix C Analytical derivatives required for Newton-Raphson algorithm A Newton-Raphson-type algorithm can be used to obtain parameter estimates together with Gaussian quadrature method to integrate over the latent factors. Analytical derivatives of the log-likelihood function with respect to parameters can be obtained to speed up the computations. First, in the continuous case, for any \(\mathbf{u}_{k}\in(0,1)^{n_{k}}\), define \(c_{\mathbf{\theta},k}(\mathbf{u}_{k})=\int_{0}^{1}\left\{\prod_{i=1}^{n_{k}}c_{ \mathbf{\phi}(\mathbf{\theta},\mathbf{x}_{ki})}(u_{ki},v)\right\}dv\). Further, define \(g_{\mathbf{\alpha},k}(\mathbf{y}_{k})=\prod_{i=1}^{n_{k}}g_{\mathbf{h}(\mathbf{ \alpha},\mathbf{x}_{ki})}(y_{ki})\). Then, letting \(\mathbf{G}_{\mathbf{\alpha},k}(\mathbf{y}_{k})\) be the vector with components \(G_{\mathbf{h}(\mathbf{\alpha},\mathbf{x}_{ki})}(y_{ki})\), \(i\in\{1,\ldots,n_{k}\}\), one can write the full likelihood as (C.1) \[L(\mathbf{\alpha},\mathbf{\beta})=\sum_{k=1}^{K}\log g_{\mathbf{\alpha},k}(\mathbf{y}_{k}) +\sum_{k=1}^{K}\log c_{\mathbf{\theta},k}\left\{\mathbf{G}_{\mathbf{\alpha},k}( \mathbf{Y}_{k})\right\}.\] It follows that (C.2) \[\partial_{\mathbf{\alpha}}L(\mathbf{\alpha},\mathbf{\beta})=\sum_{k=1}^{K}\sum_{i=1}^{n_{k} }\dfrac{\dot{g}_{\mathbf{h}(\mathbf{\alpha},\mathbf{x}_{ki})}(y_{ki})}{g_{\mathbf{ h}(\mathbf{\alpha},\mathbf{x}_{ki})}(y_{ki})}\mathbf{h}^{\prime}(\mathbf{\alpha}, \mathbf{x}_{ki})+\sum_{k=1}^{K}\dfrac{\partial_{\mathbf{\alpha}}\left[c_{\mathbf{ \theta},k}\left\{\mathbf{G}_{\mathbf{\alpha},k}(\mathbf{y}_{k})\right\}\right]}{ c_{\mathbf{\theta},k}\left\{\mathbf{G}_{\mathbf{\alpha},k}(\mathbf{y}_{k})\right\}}\,,\] (C.3) \[\partial_{\mathbf{\beta}}L(\mathbf{\alpha},\mathbf{\beta})=\sum_{k=1}^{K}\frac{ \partial_{\mathbf{\beta}}\left[c_{\mathbf{\theta},k}\left\{\mathbf{G}_{\mathbf{\alpha},k}( \mathbf{y}_{k})\right\}\right]}{c_{\mathbf{\theta},k}\left\{\mathbf{G}_{\mathbf{\alpha},k}(\mathbf{y}_{k})\right\}}\,,\] where \[\partial_{\mathbf{\alpha}}\left[c_{\mathbf{\theta},k}\left\{\mathbf{G}_{ \mathbf{\alpha},k}(\mathbf{y}_{k})\right\}\right] = \sum_{i=1}^{n_{k}}\dot{G}_{\mathbf{h}(\mathbf{\alpha},\mathbf{x}_{ki} )}(y_{ki})\mathbf{h}^{\prime}(\mathbf{\alpha},\mathbf{x}_{ki})\] \[\times\int_{0}^{1}\left\{\prod_{j\neq i}^{n_{k}}c_{\mathbf{\phi}( \mathbf{\theta},\mathbf{x}_{kj})}(u_{kj},v)\right\}\times\partial_{u_{ki}}c_{\bm {\phi}(\mathbf{\theta},\mathbf{x}_{ki})}(u_{ki},v)dv,\] \[\qquad+\sum_{i=1}^{n_{k}}\int_{0}^{1}\left\{\prod_{j\neq i}^{n_{ 
k}}c_{\mathbf{\phi}(\mathbf{\theta},\mathbf{x}_{kj})}(u_{kj},v)\right\}\times\dot{c}_{ \mathbf{\phi}(\mathbf{\theta},\mathbf{x}_{ki})}(u_{ki},v)dv\] \[\qquad\qquad\qquad\times\nabla_{\mathbf{\alpha}}\mathbf{\phi}(\mathbf{\theta },\mathbf{x}_{ki}),\] and \[\partial_{\mathbf{\beta}}c_{\mathbf{\theta},k}\left\{\mathbf{G}_{\mathbf{ \alpha},k}(\mathbf{y}_{k})\right\} = \sum_{i=1}^{n_{k}}\int_{0}^{1}\left\{\prod_{j\neq i}^{n_{k}}c_{ \mathbf{\phi}(\mathbf{\theta},\mathbf{x}_{kj})}(u_{kj},v)\right\}\times\dot{c}_{\mathbf{ \phi}(\mathbf{\theta},\mathbf{x}_{ki})}(u_{ki},v)dv\] (C.5) \[\qquad\times\nabla_{\mathbf{\beta}}\mathbf{\phi}(\mathbf{\theta},\mathbf{x}_{ ki}),\] Figure 7. Graphs of the link prediction for \(s_{1}(x)=\beta_{0}+\beta_{1}x=-1+0.5x\) (panels a,b) and \(s_{2}(x)=\min(x,1)+2\max(0,x-1)\)(panels c, d), for \(v=0.1\) (dashed red line), \(v=0.5\) (twodashed green line) and \(v=0.9\) (solid blue line). The left panels (a,c) are computed with the Gaussian copula, while the right panels (b,d) are computed with the non-central Gaussian copula with parameters \(\rho=0.9\), \(a_{1}=0.1\), \(a_{2}=2.9\). with \(u_{ki}=G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y_{ki})\). Note that if \(\boldsymbol{\phi}\) does not depend on \(\boldsymbol{\alpha}\), then the last term in (C.4) is \(0\). In the discrete case, (C.6) \[\partial_{\boldsymbol{\alpha}}f_{\boldsymbol{\theta},ki}(y_{ki},v) = \Big{\{}c_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki} )}(u_{ki},v)\dot{G}_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y_{ki})\] \[\quad-c_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki})} (\underline{u_{ki}},v)\dot{G}_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki} )}(y_{ki}-)\Big{\}}\,\mathbf{h}^{\prime}(\boldsymbol{\alpha},\mathbf{x}_{ki})\] \[\quad+\Big{\{}\dot{\mathcal{C}}_{\boldsymbol{\phi}(\boldsymbol{ \theta},\mathbf{x}_{ki})}(u_{ki},v)-\dot{\mathcal{C}}_{\boldsymbol{\phi}( \boldsymbol{\theta},\mathbf{x}_{ki})}(\underline{u_{ki}},v)\Big{\}}\] \[\qquad\times\nabla_{\boldsymbol{\alpha}}\boldsymbol{\phi}( \boldsymbol{\theta},\mathbf{x}_{ki}),\] (C.7) \[\partial_{\boldsymbol{\beta}}f_{\boldsymbol{\theta},ki}(y_{ki},v)=\Big{\{} \dot{\mathcal{C}}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_{ki})}(u _{ki},v)-\dot{\mathcal{C}}_{\boldsymbol{\phi}(\boldsymbol{\theta},\mathbf{x}_ {ki})}(\underline{u_{ki}},v)\Big{\}}\,\nabla_{\boldsymbol{\beta}}\boldsymbol{ \phi}(\boldsymbol{\theta},\mathbf{x}_{ki}),\] where \(\underline{u_{ki}}=G_{\mathbf{h}(\boldsymbol{\alpha},\mathbf{x}_{ki})}(y_{ki }-)\). Next, we provide analytical expressions for the derivatives required to compute the gradient of the log-likelihood function for some commonly used bivariate linking copula densities \(c_{\boldsymbol{\phi}}\), marginal distribution functions \(G_{\mathbf{a}}\), and their densities \(g_{\mathbf{a}}\). ### Bivariate linking copulas Here, we present some examples of computations for linking copulas. More copula families are implemented in the R package CopulaGAMM Krupskii et al. (2023). #### c.1.1. 
Normal copula For this copula family with parameter \(\varphi\in(-1,1)\), \[\partial_{\varphi}\log c_{\varphi}(u,v) = (1-\varphi^{2})^{-2}\left\{(1+\varphi^{2})\tilde{u}\tilde{v}- \varphi(\tilde{u}^{2}+\tilde{v}^{2})\right\}+\varphi(1-\varphi^{2})^{-1},\] \[\partial_{u}\log c_{\varphi}(u,v) = (1-\varphi^{2})^{-1}\varphi(\tilde{v}-\varphi\tilde{u})/\varphi_{ 1}(\tilde{u}),\] where \(\tilde{u}=\Phi_{1}^{-1}(u)\), \(\tilde{v}=\Phi_{1}^{-1}(v)\), \(\Phi_{1}^{-1}\) is the inverse cdf of a standard normal distribution, and \(\varphi_{1}\) is its pdf. Here, the case \(\varphi=0\) corresponds to the independence copula. #### c.1.2. Frank's copula For this copula family with parameter \(\varphi\in\mathbb{R}\setminus\{0\}\), \[\partial_{\varphi}\log c_{\varphi}(u,v) = \frac{1}{\varphi}+\frac{\varphi_{e}}{1-\varphi_{e}}+u+v+2\times \frac{uv_{e}+vu_{e}+\varphi_{e}(1-u-v)}{\varphi_{e}-u_{e}-v_{e}+u_{e}v_{e}},\] \[\partial_{u}\log c_{\varphi}(u,v) = -\varphi\times\frac{\varphi_{e}+u_{e}-v_{e}-u_{e}v_{e}}{\varphi_{ e}-u_{e}-v_{e}+u_{e}v_{e}}\,,\] where \(u_{e}=e^{-\varphi u}\), \(v_{e}=e^{-\varphi v}\) and \(\varphi_{e}=e^{-\varphi}\). The limiting case \(\varphi=0\) yields the independence copula. #### c.1.3. Clayton's copula For this copula family with parameter \(\varphi\in(0,\infty)\), \[\partial_{\varphi}\log c_{\varphi}(u,v) = \frac{1}{\varphi}-\tilde{u}-\tilde{v}+\frac{1}{\varphi^{2}}\log(u _{\varphi}+v_{\varphi}-1)+\left(\frac{1}{\varphi}+2\right)\frac{\tilde{u}u_{ \varphi}+\tilde{v}v_{\varphi}}{u_{\varphi}+v_{\varphi}-1}\,,\] \[\partial_{u}\log c_{\varphi}(u,v) = -\frac{\varphi+1}{u}+\frac{u_{\varphi}}{u}\times\frac{2\varphi+1 }{u_{\varphi}+v_{\varphi}-1}\,,\] where \(\tilde{u}=\log(u)\), \(\tilde{v}=\log v\), \(u_{\varphi}=u^{-\varphi}\) and \(v_{\varphi}=v^{-\varphi}\). The limiting case \(\varphi=0\) is the independence copula. #### c.1.4. Gumbel's copula For this copula family with parameter \(\varphi\in[1,\infty)\), \[\partial_{\varphi}\log c_{\varphi}(u,v) = \log\tilde{u}+\log\tilde{v}-\frac{1}{\varphi^{2}}\log\xi_{\varphi }+\left(\frac{1}{\varphi}-2\right)\times\frac{\tilde{u}^{\varphi}\log\tilde{u} +\tilde{v}^{\varphi}\log\tilde{v}}{\xi_{\varphi}}\] \[\qquad-\zeta_{\varphi}+\frac{\zeta_{\varphi}+1}{\xi_{\varphi}^{1/ \varphi}+\varphi-1}\,,\] \[\partial_{u}\log c_{\varphi}(u,v) = -\frac{\varphi-1}{u\tilde{u}}-\frac{1}{u}+\frac{2\varphi+\xi_{ \varphi}^{1/\varphi}-1}{u}\times\frac{\tilde{u}^{\varphi-1}}{\xi_{\varphi}}\] \[\qquad+\frac{\tilde{u}^{\varphi-1}}{u}\times\frac{\xi_{\varphi}^{ 1/\varphi-1}}{\xi_{\varphi}^{1/\varphi}+\varphi-1}\,,\] where \(\tilde{u}=-\log(u)\), \(\tilde{v}=-\log v\), \(\xi_{\varphi}=\tilde{u}^{\varphi}+\tilde{v}^{\varphi}\) and \[\zeta_{\varphi}=\xi_{\varphi}^{1/\varphi}\left\{-\frac{1}{\varphi^{2}}\log\xi _{\varphi}+\frac{1}{\varphi}\times\frac{\tilde{u}^{\varphi}\log(\tilde{u})+ \tilde{v}^{\varphi}\log(\tilde{v})}{\xi_{\varphi}}\right\}\,.\] The case \(\varphi=1\) yields the independence copula. ### Marginal distributions Here, we present some examples of computations for marginal distributions. More margins are implemented in the R package CopulaGAMM Krupskii et al. (2023). #### c.2.1. 
Normal distribution For this family of distributions with mean \(\mu\) and variance \(\sigma^{2}\) and \(\mathbf{a}=(\mu,\sigma)\), \[\dot{G}_{\mathbf{a}}(y)=\partial_{\mathbf{a}}G_{\mathbf{a}}(y)=-g_{\mathbf{a} }(y)(1,\ y_{c}),\quad\dot{g}_{\mathbf{a}}(y)=\partial_{\mathbf{a}}g_{\mathbf{ a}}(y)=\frac{1}{\sigma}g_{\mathbf{a}}(y)(y_{c},\ y_{c}^{2}-1),\] where \(y_{c}=(y-\mu)/\sigma\). **Remark C.1** (Special case for the variance).: _Suppose that the standard deviation of a normal distribution is given by \(\sigma(\mathbf{x})=\left(\mathbf{x}^{\top}V\mathbf{x}\right)^{1/2}\), where \(V\) is a \(d\times d\) symmetric positive definite matrix to be estimated. A simple way to parameterise \(V\) is to consider the Cholesky's decomposition \(V=\mathbf{a}^{\top}\mathbf{a}\), where \(\mathbf{a}\) is triangular superior, i.e., \(a_{ij}=0\) if \(i>j\). For this parametrisation, the only constraints are that \(a_{ii}>0\), \(i\in\{1,\ldots,d\}\). This is easily taken care for by setting \(a_{ii}=e^{\alpha_{ii}}\), and \(a_{ij}=\alpha_{ij}\) otherwise. This way, there are \(d(d+1)/2\) unconstrained parameters. Since \(V_{lm}=\sum_{k=1}^{\min(l,m)}a_{kl}a_{km}\), it follows that for \(i<j\) and \(l<m\)_ \[\partial_{a_{ij}}V_{lm} = \left\{\begin{array}{cc}a_{im},&j=l,\\ a_{il},&j=m,i\leq l,\\ 0,&j=m,i>l,\\ 0,&j\neq l,m\end{array}\right.,\qquad\partial_{a_{ij}}V_{ll}=\left\{\begin{array} []{cc}2a_{ij},&j=l,\\ 0,&j\neq l\end{array}\right.,\] \[\partial_{a_{ii}}V_{lm} = \left\{\begin{array}{cc}a_{im},&i=l,\\ 0,&i\neq l\end{array}\right.,\qquad\partial_{a_{ii}}V_{ll}=\left\{\begin{array} []{cc}2a_{ii}x_{i}^{2},&i=l,\\ 0,&i\neq l\end{array}\right..\] _As a result, for any \(i\leq j\), \(i,j\in\{1,\ldots,d\}\), \(\partial_{a_{ij}}\sigma(\mathbf{x})=\dfrac{x_{j}(a\mathbf{x})_{i}}{\sigma( \mathbf{x})}\). It then follows that \(\partial_{\alpha_{ij}}\sigma(\mathbf{x})=\dfrac{x_{j}(a\mathbf{x})_{i}}{ \sigma(\mathbf{x})}\), \(i<j\), while \(\partial_{\alpha_{ii}}\sigma(\mathbf{x})=a_{ii}\dfrac{x_{i}(a\mathbf{x})_{i}} {\sigma(\mathbf{x})}\), \(i\in\{1,\ldots,d\}\). Finally, if \(\boldsymbol{\theta}=(\mu_{1},\ldots,\mu_{d},\alpha_{11},\ldots,\alpha_{1d}, \ldots,\alpha_{dd})\), then \(a_{ij}=\alpha_{ij}=\theta_{j+id-\frac{i(i-1)}{2}}\), \(i<j\), while \(a_{ii}=e^{\alpha_{ii}}\), where \(\alpha_{ii}=\theta_{1+id-\frac{(i-1)(i-2)}{2}}\), \(i\in\{1,\ldots,d\}\)._ #### c.2.2. Weibull distribution For this family of distributions with shape \(\gamma\) and scale \(\sigma\) and \(\mathbf{a}=(\gamma,\sigma)\), \[\dot{G}_{\mathbf{a}}(y) = \{1-G_{\mathbf{a}}(y)\}y_{c}^{\gamma}\left(\ln y_{c},\ -\dfrac{\gamma}{\sigma}\right),\] \[\dot{g}_{\mathbf{a}}(y) = g_{\mathbf{a}}(y)\left(\dfrac{1}{\gamma}+\log y_{c}-y_{c}^{ \gamma}\log y_{c},\ \dfrac{\gamma}{\sigma}y_{c}^{\gamma}-\dfrac{\gamma}{\sigma}\right),\] where \(y_{c}=y/\sigma\). #### c.2.3. Multinomial distribution For this family of distributions with values in \(\{1,\ldots,L\}\), the \((L-1)\times k_{2}\) margin parameters are denoted \(\mathbf{mpar}=(a_{21},\ldots,a_{2k_{2}},\ldots,a_{L,1},\ldots,a_{L,k_{2}})\), \(P_{i1}=1-\sum_{j=2}^{L}P_{ij}\), and \[P_{ij}=P(Y_{i}=j|\mathbf{X}_{i}=\mathbf{x}_{i})=\dfrac{\exp\left(\sum_{l=1}^{ k_{2}}a_{jl}x_{il}\right)}{1+\sum_{m=2}^{L}\exp\left(\sum_{l=1}^{k_{2}}a_{ml}x_{il} \right)},\qquad j\in\{2,\ldots,L\}.\] As a result, \(\partial_{a_{jl}}P_{ij}=x_{il}P_{ij}(1-P_{ij})\), and \(\partial_{a_{kl}}P_{ij}=-x_{il}P_{ij}P_{ik}\), if \(j\neq k\), with \(j\in\{1,\ldots,L\}\) and \(k\in\{2,\ldots,L\}\). 
Also, setting \(F_{ij}=\sum_{m=1}^{j}P_{im}\), one gets \(\partial_{a_{kl}}F_{ij}=-x_{il}F_{ij}P_{ik}\), for \(k>j\), and \(\partial_{a_{kl}}F_{ij}=-x_{il}F_{ij}P_{ik}+x_{il}P_{ik}\), for \(k\leq j\), for \(j\in\{1,\ldots,L\}\) and \(k\in\{2,\ldots,L\}\).

## Appendix D Central limit theorem for a copula-based mixed model

### Auxiliary results

**Theorem D.1** (Lindeberg's condition).: _Suppose that \(\boldsymbol{\xi}_{1},\ldots,\boldsymbol{\xi}_{K}\) are independent, and as \(K\to\infty\), \(N_{K}\to\infty\),_
(D.1) \[\frac{1}{N_{K}}\sum_{k=1}^{K}\mathrm{var}\ (\boldsymbol{\xi}_{k})\to\Sigma,\]
_and_
(D.2) \[\frac{1}{N_{K}}\sum_{k=1}^{K}E\left\{\|\boldsymbol{\xi}_{k}\|^{2}\mathbb{I}\left(\|\boldsymbol{\xi}_{k}\|>\epsilon\sqrt{N_{K}}\right)\right\}\to 0,\quad\text{ for any }\epsilon>0.\]
_Then \(\frac{1}{N_{K}^{1/2}}\sum_{k=1}^{K}\boldsymbol{\xi}_{k}\) converges in law to a centred Gaussian random vector with covariance matrix \(\Sigma\)._
Also, \(\limsup_{K\to\infty}\max_{1\leq k\leq K}\max_{1\leq i\leq n_{k}}\sup_{\boldsymbol{\theta}\in\mathcal{N}}E_{\boldsymbol{\theta}_{0}}\left(\|\boldsymbol{\eta}_{\boldsymbol{\theta},ki}\|^{2}\right)<\infty\), for some neighbourhood \(\mathcal{N}\) of \(\boldsymbol{\theta}_{0}\)._ The following condition might also be needed. **Assumption D.2**.: _\(\limsup_{K\to\infty}\max_{1\leq k\leq K}\max_{1\leq i\leq n_{k}}\sup_{\boldsymbol{\theta}\in\mathcal{N}}E_{\boldsymbol{\theta}_{0}}\left(\|\boldsymbol{\eta}_{\boldsymbol{\theta},ki}\|^{4}\right)<\infty\), for some neighbourhood \(\mathcal{N}\) of \(\boldsymbol{\theta}_{0}\)._ ### Proof of Theorem 3.1 Recall that the log-likelihood \(L_{K}(\boldsymbol{\theta})\) can be written as \(L_{K}(\boldsymbol{\theta})=\sum_{k=1}^{K}\log f_{\boldsymbol{\theta},k}(\mathbf{y}_{k})\), where \(f_{\boldsymbol{\theta},k}(\mathbf{y}_{k})=\int_{0}^{1}\left\{\prod_{j=1}^{n_{k}}f_{\boldsymbol{\theta},kj}(y_{kj},v)\right\}dv\), and for \(v\in(0,1)\), \(f_{\boldsymbol{\theta},kj}(y_{kj},v)\) is a density with respect to the reference measure \(\nu\). As a result, \(\dot{L}_{K}(\boldsymbol{\theta}_{0})=\left.\nabla_{\boldsymbol{\theta}}L_{K}(\boldsymbol{\theta})\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{0}}=\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\boldsymbol{\eta}_{ki}\), where \[\boldsymbol{\eta}_{ki}=\frac{1}{f_{\boldsymbol{\theta}_{0},k}(\mathbf{y}_{k})}\int\left\{\prod_{j\neq i}f_{\boldsymbol{\theta}_{0},kj}(y_{kj},v)\right\}\left.\nabla_{\boldsymbol{\theta}}f_{\boldsymbol{\theta},ki}(y_{ki},v)\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{0}}dv.\] Now, for any \(i\in\{1,\ldots,n_{k}\}\), \(\int_{\mathbb{R}}\dot{f}_{\boldsymbol{\theta}_{0},ki}(y,v)\nu(dy)=\left.\nabla_{\boldsymbol{\theta}}\left\{\int_{\mathbb{R}}f_{\boldsymbol{\theta},ki}(y,v)\nu(dy)\right\}\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{0}}=\left.\nabla_{\boldsymbol{\theta}}\left\{1\right\}\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{0}}=0\). Next, for bounded measurable functions \(\Psi_{j}\), \(j\neq i\), one has \[E\left[\boldsymbol{\eta}_{ki}\prod_{j\neq i}\Psi_{j}(Y_{kj})\right]=\int\int_{0}^{1}\left[\prod_{j\neq i}\left\{f_{\boldsymbol{\theta}_{0},kj}(y_{j},v)\Psi_{j}(y_{j})\right\}\right]\dot{f}_{\boldsymbol{\theta}_{0},ki}(y_{i},v)dv\ \nu(dy_{1})\cdots\nu(dy_{n_{k}})=0.\] As a result, \(E\left(\boldsymbol{\eta}_{ki}|Y_{kl},\ l\neq i\right)=0\). In particular, \(E(\boldsymbol{\eta}_{ki})=0\). Note that this does not imply that \(E\left(\boldsymbol{\eta}_{ki}\boldsymbol{\eta}_{kj}^{\top}\right)=0\), \(i\neq j\), since \(\boldsymbol{\eta}_{kj}\) is not measurable with respect to \(\{Y_{kl}:l\neq i\}\)! However, \(\boldsymbol{\eta}_{k,1},\ldots,\boldsymbol{\eta}_{k,n_{k}}\) are exchangeable with mean \(0\). Further set \(\Sigma_{11}=E\left(\boldsymbol{\eta}_{k1}\boldsymbol{\eta}_{k1}^{\top}\right)\), \(\Sigma_{12}=E\left(\boldsymbol{\eta}_{k1}\boldsymbol{\eta}_{k2}^{\top}\right)\).
Next, \[\ddot{L}_{K}(\boldsymbol{\theta}_{0})=\sum_{k=1}^{K}\frac{\ddot{f}_{\boldsymbol{\theta}_{0},k}(\mathbf{y}_{k})}{f_{\boldsymbol{\theta}_{0},k}(\mathbf{y}_{k})}-\sum_{k=1}^{K}\frac{\dot{f}_{\boldsymbol{\theta}_{0},k}(\mathbf{y}_{k})\dot{f}_{\boldsymbol{\theta}_{0},k}(\mathbf{y}_{k})^{\top}}{f_{\boldsymbol{\theta}_{0},k}^{2}(\mathbf{y}_{k})},\] where \[\ddot{f}_{\boldsymbol{\theta}_{0},k}(\mathbf{y}_{k})=\sum_{i=1}^{n_{k}}\int_{0}^{1}\left\{\prod_{j\neq i}f_{\boldsymbol{\theta}_{0},kj}(y_{kj},v)\right\}\ddot{f}_{\boldsymbol{\theta}_{0},ki}(y_{ki},v)dv+\sum_{i\neq j}\int_{0}^{1}\left\{\prod_{l\neq i,j}f_{\boldsymbol{\theta}_{0},kl}(y_{kl},v)\right\}\dot{f}_{\boldsymbol{\theta}_{0},ki}(y_{ki},v)\dot{f}_{\boldsymbol{\theta}_{0},kj}(y_{kj},v)^{\top}dv=\sum_{i=1}^{n_{k}}\boldsymbol{\zeta}_{ki}+\sum_{i\neq j}\boldsymbol{\zeta}_{kij}.\] As before, one gets that \(E(\boldsymbol{\zeta}_{ki}|Y_{kl},l\neq i)=0\) and \(E(\boldsymbol{\zeta}_{kij}|Y_{kl},l\neq i)=0\). Set \(A_{K}=\frac{1}{N_{K}}\sum_{k=1}^{K}\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\). Then, as \(K\to\infty\), \(E(A_{K})=\Sigma_{11}+(\lambda_{K}-1)\Sigma_{12}\to\Sigma\). If \(n_{k}\equiv n\), then \(A_{K}\to\Sigma\) by the strong law of large numbers, assuming only that \(\boldsymbol{\eta}_{ki}\) is square integrable. In general, it suffices to show that \(\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}\operatorname{var}\left(\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\right)\to 0\), which is equivalent to \[\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}E\left(\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\right)-\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}\left\{E\left(\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\right)\right\}^{2}=\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}E\left(\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\right)-\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}\left\{n_{k}\Sigma_{11}+n_{k}(n_{k}-1)\Sigma_{12}\right\}^{2}\to 0,\] and it is enough to show that \(\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}E\left(\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\right)\to 0\). Now, since \(E\left(\|\boldsymbol{\eta}_{ki}\|^{4}\right)\leq V<\infty\), one gets \[\left\|\frac{1}{{N_{K}}^{2}}\sum_{k=1}^{K}E\left(\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\boldsymbol{\eta}_{k}\boldsymbol{\eta}_{k}^{\top}\right)\right\|=\frac{1}{{N_{K}}^{2}}\left\|\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\sum_{j=1}^{n_{k}}\sum_{l=1}^{n_{k}}\sum_{m=1}^{n_{k}}E\left(\boldsymbol{\eta}_{ki}\boldsymbol{\eta}_{kj}^{\top}\boldsymbol{\eta}_{kl}\boldsymbol{\eta}_{km}^{\top}\right)\right\|\leq\frac{V}{{N_{K}}^{2}}\sum_{k=1}^{K}n_{k}^{4}\to 0.\] It then follows from Remark D.1 with \(p=4\) that the conditions of Theorem D.1 are met. In addition, \(\mathbb{T}_{K}(\boldsymbol{\theta})=N_{K}^{-1/2}\{\dot{L}_{K}(\boldsymbol{\theta})-N_{K}\boldsymbol{\mu}(\boldsymbol{\theta})\}\) converges uniformly in law to a Gaussian process \(\mathbb{T}\) for \(\boldsymbol{\theta}\in\mathcal{N}\), so if \(\boldsymbol{\theta}_{K_{l}}\) is any converging subsequence such that \(\dot{L}_{K_{l}}(\boldsymbol{\theta}_{K_{l}})=0\), and \(\boldsymbol{\theta}_{K_{l}}\to\boldsymbol{\theta}^{\star}\), then \(\boldsymbol{\mu}\left(\boldsymbol{\theta}^{\star}\right)=0\) so by hypothesis, \(\boldsymbol{\theta}^{\star}=\boldsymbol{\theta}_{0}\). It then follows that \(\boldsymbol{\theta}_{K}\) converges in probability to \(\boldsymbol{\theta}_{0}\).
Finally, \(0=N_{K}^{-1/2}\dot{L}_{K}(\boldsymbol{\theta}_{K})=\mathbb{T}_{K}(\boldsymbol{\theta}_{K})+N_{K}^{1/2}\boldsymbol{\mu}(\boldsymbol{\theta}_{K})=\mathbb{T}_{K}(\boldsymbol{\theta}_{K})+\dot{\boldsymbol{\mu}}(\tilde{\boldsymbol{\theta}}_{K})\boldsymbol{\Theta}_{K}\), for some \(\tilde{\boldsymbol{\theta}}_{K}\) converging to \(\boldsymbol{\theta}_{0}\), where \(\boldsymbol{\Theta}_{K}=N_{K}^{1/2}(\boldsymbol{\theta}_{K}-\boldsymbol{\theta}_{0})\). Since \(\mathbb{T}_{K}(\boldsymbol{\theta}_{K})\) converges in law to \(\mathbb{T}(\boldsymbol{\theta}_{0})\sim N(0,\Sigma)\) and \(\dot{\boldsymbol{\mu}}(\tilde{\boldsymbol{\theta}}_{K})\) converges in probability to \(\dot{\boldsymbol{\mu}}(\boldsymbol{\theta}_{0})=-\Sigma\), it follows that \(\boldsymbol{\Theta}_{K}\) converges in law to \(\Sigma^{-1}\mathbb{T}(\boldsymbol{\theta}_{0})\sim N\left(0,\Sigma^{-1}\right)\). In the case \(n_{k}\equiv n\), note that the \(\boldsymbol{\eta}_{\boldsymbol{\theta},k}\) are i.i.d.
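As a quick sanity check of the variance formula \(\Sigma=\Sigma_{11}+(\lambda-1)\Sigma_{12}\) in Corollary D.2, the following simulation sketch (illustrative only; the exchangeable random-effects construction of the \(\boldsymbol{\xi}_{ki}\) is our own choice, made so that \(\Sigma_{11}\) and \(\Sigma_{12}\) are known in closed form) compares the empirical variance of \(N_{K}^{-1/2}\sum_{k}\boldsymbol{\xi}_{k}\) with \(\Sigma_{11}+(n-1)\Sigma_{12}\) in the equal cluster size case \(n_{k}\equiv n\).

```python
import numpy as np

# Clusters of exchangeable, mean-zero variables xi_{ki} = Z_k + eps_{ki},
# so Sigma_11 = var(xi_{k1}) = tau^2 + s^2 and Sigma_12 = cov(xi_{k1}, xi_{k2}) = tau^2.
rng = np.random.default_rng(1)
K, n, tau, s, reps = 500, 5, 0.7, 1.0, 2000
N_K = K * n

stats = np.empty(reps)
for r in range(reps):
    Z = tau * rng.standard_normal((K, 1))    # shared cluster effect
    eps = s * rng.standard_normal((K, n))    # idiosyncratic part
    xi = Z + eps                             # exchangeable within each cluster
    stats[r] = xi.sum() / np.sqrt(N_K)       # N_K^{-1/2} * sum_k xi_k

sigma11, sigma12, lam = tau**2 + s**2, tau**2, n   # lambda = (1/N_K) sum_k n_k^2 = n here
print("empirical variance           :", round(stats.var(), 3))
print("Sigma11 + (lambda-1)*Sigma12 :", round(sigma11 + (lam - 1) * sigma12, 3))
```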
2310.06759
Planar aperiodic tile sets: from Wang tiles to the Hat and Spectre monotiles
A brief history of planar aperiodic tile sets is presented, starting from the Domino Problem proposed by Hao Wang in 1961. We provide highlights that led to the discovery of the Taylor--Socolar aperiodic monotile in 2010 and the Hat and Spectre aperiodic monotiles in 2023. The Spectre tile is an amazingly simple monotile; a single tile whose translated and rotated copies tile the plane but only in a way that lacks any translational periodicity. We showcase this breakthrough discovery through the 60$+$ years that aperiodic tile sets have been considered.
Tinka Bruneau, Michael F. Whittaker
2023-10-10T16:34:42Z
http://arxiv.org/abs/2310.06759v2
# Planar aperiodic tile sets: from Wang tiles to the Hat and spectra monotiles ###### Abstract. A brief history of planar aperiodic tile sets is presented, starting from the Domino Problem proposed by Hao Wang in 1961. We provide highlights that led to the discovery of the Taylor-Socolar aperiodic monotile in 2010 and the Hat and Spectre aperiodic monotiles in 2023. The Spectre tile is an amazingly simple monotile; a single tile whose translated and rotated copies tile the plane but only in a way that lacks any translational periodicity. We showcase this breakthrough discovery through the 60+ years that aperiodic tile sets have been considered. We'd like to thank Michael Baake, Kevin Brix, Felix Flicker, Robbert Fokkink, Franz Gahler, Craig Kaplan, Jan Mazac and Jamie Walton for excellent comments and suggestions on early drafts. This paper was written while the second author was a guest at the Fields Institute, and he thanks them for their hospitality and exceptional research environment. In a breakthrough result of Joan Taylor in 2010 [20], the first connected and completely geometrically defined monotile was discovered, but the tile is not simply connected. That is, the tile is not a topological disc. Taylor joined forces with Joshua Socolar to introduce their tile to the mathematical community [18, 19]. The original formulation of the Taylor-Socolar tile was described as a decorated hexagon with certain matching rules between both neighbouring and next to neighbouring tiles, and the tiling requires a reflected tile. In March 2023, Dave Smith, Craig Kaplan, Joseph Myers, and Chaim Goodman-Strauss found a simply connected monotile that they called the Hat [16]. Amazingly, the Hat is a simple polygonal shape and is entirely geometric, in the sense that no additional matching rules on how tiles are allowed to meet are required to enforce aperiodicity. What's more, in the article, a beautiful proof of aperiodicity was used to show that there are, in fact, an uncountable number of tiles in the Hat family that are also simply connected monotiles. The tiling requires a reflected version of the Hat. This left open the question of whether a simply connected and geometric solution to the monotile problem without reflections is possible. In a marvellous stroke of insight, Smith, Kaplan, Myers, and Goodman-Strauss realised that a member of the Hat family could also be used to tile the plane without using a reflected copy of the tile [17]. The Spectre aperiodic monotile was discovered. The Spectre is in the family of Hat tilings, but was one of the singular members that allowed periodic tilings. The authors realised that forbidding the reflected tile still allowed tilings of the plane, but all such tilings lack translational periodicity. Thus, the Spectre provides a remarkable solution to the aperiodic monotile problem! ## 2. Tiles, tilings, and their properties In this article, we restrict ourselves to two dimensional tilings. We mostly follow the language and terminology defined in Baake and Grimm's _Aperiodic Order_[5], which is highly recommended to the interested reader. The second main reference on general tiling theory is the book of Grunbaum and Shephard [9], which contains an excellent introduction to Wang tiles and aperiodic tile sets. We begin with the building blocks of a tiling. A _prototile_ is a labelled subset of \(\mathbb{R}^{2}\) that is equal to the closure of its interior. 
This just means that there are no dangling bits hanging from the tiles; in fact, one should probably just think of labelled polygons. In addition, we often allow _decorations_ of the prototiles, like edge markings that must match, arrows to determine tile orientation, or lines that must continue across tile edges. Often, but not always, these decorations can be realised by puzzle like bumps and dents in the tile edges. We'll see examples of this soon. Let \(\mathcal{P}\) be a finite set of prototiles and \(G\) a subgroup of the isometry group of \(\mathbb{R}^{2}\). A _tiling_ of \(\mathbb{R}^{2}\) (the plane) is a countable collection of _tiles_\(T=\{t_{i}\mid i\in\mathbb{N}\}\) such that: * \(t_{i}=\gamma\cdot p\) for some \(p\in\mathcal{P}\) and \(\gamma\) in \(G\); * \(\bigcup_{i\in\mathbb{N}}t_{i}=\mathbb{R}^{2}\); * \(\operatorname{interior}(t_{i})\cap\operatorname{interior}(t_{j})=\varnothing\) if \(i\neq j\). Let's take a minute to understand the definition. The first bullet point specifies that all tiles in the tiling must be isometric copies of prototiles (i.e., some composition of a translation, rotation and/or reflection of a prototile), where the group \(G\) specifies the exact subset of isometries we allow for a given prototile set. Here we are thinking of the prototiles as actually sitting in the plane, so that we can move them about to form a tiling. In this article, \(G\) will always be the translation group, the direct isometry group (translations and rotations), or the full isometry group. Note that when we use the full isometry group \(G\), we will include the reflected tile(s) in the prototile set for clarity. The second bullet point implies that the tiling covers the entire Euclidean plane and the third specifies that tiles never overlap except at their edges. Moreover, in the case that we have extra decorations, the tiling must also satisfy any extra rules specified by the decorations. As Figure 2.1 depicts, it is easy to find tilings of the plane by a single square prototile of side length one. In fact, there are infinitely many possible tilings that arise from an unmarked square prototile. We note that it's impossible to pictorially represent a complete tiling of the plane, and the reader is meant to extrapolate how a complete tiling is produced from the small patch provided. For both examples in Figure 2.1, we can let \(G\) be the subgroup of translations. The tiling on the right reveals why there are infinitely many possible square tilings, even taken up to translation, by choosing different relative shifts for rows of square tiles. In this case, the second row is shifted by \(1/2\) with respect to the bottom row that's sitting on the \(x\)-axis, and the third is shifted by \(1/3\) with respect to the bottom, and so on. We could use the same procedure to form rows below the bottom row to make a complete tiling of the plane. We can translate a tiling \(T\) by a vector \(x\) in \(\mathbb{R}^{2}\) via \(T+x:=\{t+x\mid t\in T\}\). Notice that the tiling in the middle of Figure 2.1 has the property that \(T+(1,0)=T\). Indeed, if we translate all tiles in the tiling over by one unit to the left the sets \(T+(1,0)\) and \(T\) are exactly the same! We say that a tiling \(T\) is _periodic_ if there exists a nonzero translation \(x\) in \(\mathbb{R}^{2}\) such that \(T+x=T\), and we say that \(T\) is _nonperiodic_ if \(T+x=T\) implies \(x=0\). 
That is, if we took an infinite transparent photocopy of the tiling \(T\), then the tiling is periodic if we can shift this photocopy by some non-trivial translation so that it perfectly matches \(T\) and nonperiodic if it only matches in exactly one place. The tiling on the right of Figure 2.1 is also periodic with \(x=(1,0)\). We'll see examples of nonperiodic tilings in the next section. There are many more concepts that are important in tiling theory. Again, the book of Baake and Grimm [5] contains a modern treatment and an exceptional foreword written by Roger Penrose. For example, there has been discussion on the _local indistinguishability classes_ (LI-classes) for the Hat and Spectre monotiles, see [3, p.2], where the notion of LI-classes can be found in [5, Section 5.1.1]. ## 3. Wang tiles A founder of modern tiling theory was the philosopher Hao Wang (1921-1995). A set of square prototiles with marked edges are called Wang tiles. Here we label the edges with a colour and Figure 2.1. Tilings of the plane by a single square prototile. The prototile is on the left along with two possible tilings. also a number, but merely to aid legibility. Tiles must meet along complete edges only when the symbol matches and we only allow \(G\) to be the group of translations; no rotations or reflections of prototiles allowed. In 1961 Wang [22] proposed the Domino Problem: given a set of Wang tiles, is it possible to algorithmically determine whether the set tiles the plane? Wang tiles are theoretically important in logic, since the behaviour of any Turing machine can be mimicked using some particular set of Wang tiles, see [9, Section 11.4]. Wang realised that a set \(\mathcal{P}\) of Wang tiles has four possibilities: 1. The set \(\mathcal{P}\) does **not** tile the plane, such as a single tile with four distinct symbols on its edges; 2. The set \(\mathcal{P}\) can only tile the plane periodically, such as the Wang tile on the left of Figure 3.1; 3. The set \(\mathcal{P}\) can tile both periodically and nonperiodically, such as the Wang tile set on the right of Figure 3.1 (points for guessing a method to fill the tiling out to the plane in a non-periodic way); 4. The set \(\mathcal{P}\) tiles the plane but only nonperiodically. Wang realised that the existence of a Wang tile set satisfying (4) implies that the Domino Problem is undecidable [22]. This led to his _Fundamental Conjecture_ that there are no Wang tile sets satisfying (4). However, Wang's student, Robert Berger, found the first such Wang tile set with 20,426 tiles [7]! According to Grunbaum and Shephard [9, Chapter 11], Berger subsequently got the set down to only 104 tiles (actually 103 tiles, see [11, Section 1.2]). Shortly after, Donald Knuth (the inventor of TeX amongst other things) modified Berger's set to only require 92 tiles. The hunt was on to find the smallest set of Wang tiles, with contributions from Hans Lauchli, Raphael Robinson, Roger Penrose, Robert Ammann, and John Conway finding Wang tiles sets for \(n=56,52,40,35,34,32,24,\) and 16, see [9, Section 11.1]. The following quote appears in [9, p.596]: The reduction in the number of Wang tiles in an aperiodic set from over 20,000 to 16 has been a notable achievement. Perhaps the minimum possible number has now been reached. If, however, further reductions are possible then it seems certain that new ideas and methods will be required. 
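The Wang matching rule is easy to experiment with on a computer. The short sketch below (our illustration; the tiles in it are toy examples, not any of the historical sets just mentioned) encodes each Wang tile as a tuple of edge symbols and searches by backtracking for a \(k\times k\) toroidal tiling, which is exactly a doubly periodic tiling of the plane. A tile set of type (4) admits no such tiling for any \(k\), which is one way to see why no finite search of this kind can ever certify aperiodicity.

```python
def tiles_torus(tiles, k):
    """Does this Wang tile set (tuples (north, east, south, west) of edge symbols,
    translations only) admit a k x k toroidal, hence doubly periodic, tiling?"""
    if k == 1:
        return any(t[0] == t[2] and t[1] == t[3] for t in tiles)
    grid = [[None] * k for _ in range(k)]

    def ok(r, c, t):
        n, e, s, w = t
        left = grid[r][c - 1] if c else None
        up = grid[r - 1][c] if r else None
        if left is not None and left[1] != w:     # west edge must match left tile's east
            return False
        if up is not None and up[2] != n:         # north edge must match upper tile's south
            return False
        if c == k - 1 and e != grid[r][0][3]:     # wrap around the row
            return False
        if r == k - 1 and s != grid[0][c][0]:     # wrap around the column
            return False
        return True

    def place(pos):
        if pos == k * k:
            return True
        r, c = divmod(pos, k)
        for t in tiles:
            if ok(r, c, t):
                grid[r][c] = t
                if place(pos + 1):
                    return True
                grid[r][c] = None
        return False

    return place(0)

# A tile whose opposite edges agree tiles every torus (it tiles the plane periodically),
# while a single tile with four distinct edge symbols cannot tile the plane at all.
print(tiles_torus([("a", "b", "a", "b")], 2))   # True
print(tiles_torus([("a", "b", "c", "d")], 2))   # False
```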
Two further reductions were found, the Kari-Culik Wang tile set with only 13 tiles was found using number theoretic methods, see [5, Section 5.7.4]. The Jeandel-Rao Wang tile set in Figure Figure 3.1. Two sets of Wang tiles, the left can only tile periodically while the right can tile both periodically and nonperiodically. 3.2 reduced this number to just 11 Wang tiles [11]! The new method in this case was an exhaustive computer search, so we now know that this is the minimal number, although we don't know that the depicted Wang tile set is essentially unique as an aperiodic set of 11 Wang tiles. A patch of Jeandel-Rao tiles is in Figure 3.3. ## 4. Relaxing the rules and the Penrose tiles A set of prototiles, of any shape, that admits tilings of the plane but only nonperiodically is called an _aperiodic tile set_. During the flurry of activity to reduce the number of Wang tiles in the 1970s, Roger Penrose and Robert Ammann began considering aperiodic tile sets of polygons with specified markings. Both independently found an aperiodic tile set with just two tiles! However, we focus only on the Penrose tiles here. An excellent introduction to aperiodic tiles sets, with proofs that had not appeared previously in the literature, is [9, Chapter 10]. Figure 3.3. A patch of Jeandel–Rao Wang tiles shows the type of complexity required to reduce the number of tiles to 11. Figure 3.2. The smallest possible set of Wang tiles was found by Jeandel and Rao [11] consisting of just 11 tiles. The Penrose two tile aperiodic set is depicted in Figure 4.1 and a patch of tiles in Figure 4.2. The tiles are simple polygonal shapes that must match full-edge to full-edge with the blue and red lines continuing from one tile to the next. We note that the line matching rules can be encoded purely geometrically by puzzle like bumps and dents, see [5, p.154]. These tiles were reduced from other versions of Penrose tile sets, with more prototiles, and have a 10 fold symmetry group. Roger Penrose gave a lecture at _Hatfest: celebrating the discovery of an aperiodic monotile_ at the University of Oxford in July, 2023 where he showed how he constructed his two tile set in Figure 4.1. He suggested that he may have been indirectly inspired by a book of Kepler that was on his parents' bookshelf. In fact, during his lecture he overlayed his tiles with an image produced by Kepler to an almost perfect match, see Figure 4.3. More information on this link can be found in Rodrigo Trevino's Notices of the American Mathematical Society article [21]. The success of the Penrose tiles in popular culture has been immense, to the point that his tiles were seemingly imprinted on Kleenex toilet paper, see Figure 4.4. Luckily, the tiles had been patented, and the following was issued by David Bradley, the Director of Pentaplex: So often we read of very large companies riding rough-shod over small businesses or individuals, but when it comes to the population of Great Britain being invited by a multi-national to wipe their bottoms on what appears to be the work of a Knight of the Realm without his permission, then a last stand must be made. Figure 4.1. The two tile Penrose aperiodic tile set. The tiles must meet full-edge to full-edge and the red and blue lines must continue from one tile to the next. Figure 4.2. A patch of a Penrose tiling. The case was settled out of court and Kleenex stopped making the toilet paper. Squares are highly coveted by scientists studying aperiodic order, the second author has one framed on his office wall. ## 5. 
The Taylor-Socolar monotile Given that two-tile aperiodic tile sets were found in the 1970s, it naturally begs the question of whether an aperiodic monotile is possible. That is, can one find a single tile and a set of local rules (decorations) that tile the plane but only nonperiodically. Here we have to be careful about what we mean by local rules. This is made very precise by Baake and Grimm in [5, Section 5.7]. For us, suffice it to say that a local rule is a decoration that can only "see" a finite and predefined radius in \(\mathbb{R}^{2}\). The Taylor-Socolar tile is an excellent example of a decoration that defines a local rule that is not edge-to-edge. The rules were discovered in 2010 by Joan Taylor, an amateur mathematician Figure 4.4. Toilet paper seemingly embossed with Penrose tiles Figure 4.3. Roger Penrose overlaying a patch of the Penrose tiling over Kepler’s pentagon pattern. from Australia [20]. She made contact with tiling expert Joshua Socolar in order to verify the discovery and fully work out the details. In a crowning achievement, they introduced the first true aperiodic monotile with local rules that are defined only between neighbours and next nearest neighbours [18, 19]. The Taylor-Socolar tile consists of a hexagonal tile and its mirror image such that: * The black lines must continue across tile edges, * The purple flags at vertices of tiles that meet across a single hexagonal edge must point in the same direction. The tiles are depicted in Figure 5.1 and the arrows on the right point to the flag rule (**R2**). Figure 5.2 shows a patch and the reader should verify that (**R1**) and (**R2**) hold in any local patch in order to properly understand the local rules. **Theorem 5.1** ([18, Theorem 1]).: _The Taylor-Socolar monotile is aperiodic; that is, there are tilings formed by isometries of the Taylor-Socolar tile satisfying (**R1**) and (**R2**) in every local patch, and every such tiling is nonperiodic._ The proof of Theorem 5.1 is very elegant and is recommended to the interested reader. Figure 5.1. The Taylor–Socolar monotile. On the left is the tile and its reflection. The image on the right demonstrates the (**R2**)-rule, flags that meet across a single hexagonal edge must point in the same direction. Figure 5.2. A patch of a Taylor–Socolar tiling. Interestingly, the Taylor-Socolar tile can be realised by shape alone. However, the tile is not simply connected; that is, the tile is connected but is not a topological disc. See Figure 5.3, where distinct tiles are coloured differently so that one can see how tiles interconnect. Focussing on the single black tile in the centre of the image shows the interesting geometry of the tile. For the experts, adding an extra vertex consistency rule, (**R3**), to the Taylor-Socolar tile forces a single LI-class, see [18, p.2215]. This additional rule was included in the original discovery by Taylor [20]. In practice, this means that the tiling space of the Taylor-Socolar tiling satisfying (**R1**)-(**R3**) is minimal in the sense of dynamical systems. Moreover, the Taylor-Socolar tilings are model sets [2, 12], and thus are strongly related to mathematical quasicrystals. These properties imply that the Taylor-Socolar tile is immensely important from the perspective of aperiodic order. Jamie Walton and the second author found a modification of the Taylor-Socolar tile that has edge-to-edge matching rules [23]. However, the (\(\mathcal{R}2\)) rule below is somewhat unusual, it depends on orientation in the following way. 
Two tiles \(t_{1}\) and \(t_{2}\) are permitted to meet along a shared edge \(e\) only if: Figure 5.3. A geometric version of the Taylor–Socolar tile. Distinct tiles have different colours so that we can see where the connected tiles interpenetrate. Image taken from [19, Figure 6]. * The black lines continue across \(e\), * Whenever the two charges at \(e\) in \(t_{1}\) and \(t_{2}\) both have a clockwise orientation then they must be opposite in charge. **Theorem 5.2** ([23, Theorem 1.1]).: _The tile in Figure 5.4 is aperiodic; that is, there are tilings formed by isometries of the tile satisfying (\(\mathcal{R}\)1) and (\(\mathcal{R}\)2) in every local patch, and every such tiling is nonperiodic._ In this case the proof of aperiodicity is surprisingly simple and does not require a lot of case-checking. Indeed, we show that (\(\mathcal{R}\)1)-lines always lead to longer (\(\mathcal{R}\)1)-lines. Thus, there are arbitrarily long (\(\mathcal{R}\)1)-lines in any tiling. However, this precludes translational periodicity. For if there was a non-trivial translation \(x\in\mathbb{R}^{2}\) such that \(T+x=T\), then there must be an (\(\mathcal{R}\)1)-line that is longer than \(|x|\) and the structure of the (\(\mathcal{R}\)1)-triangles forbids this translation. ## 6. The Hat and Spectre tiles The Hat monotile [16] was discovered by David Smith, Craig S. Kaplan, Joseph Samuel Myers, and Chaim Goodman-Strauss with the article appearing on the Mathematics arXiv in March 2023. The paper generated an immediate buzz, resulting in newspaper articles in both the New York Times and the Guardian. The Hat was originally discovered by David Smith in November 2022 and the authors worked furiously to understand whether the Hat is an aperiodic monotile. The Hat tile is unbelievably simple and elegant. It can be found by forming a hexagonal grid, dividing each hexagon into kites and then combining kites from three neighbouring hexagons into a tile. The kites are formed by cutting the hexagons with straight lines through the midpoints of Figure 5.5. A patch of the orientational monotile. Two neighbouring charges with clockwise orientation must be opposite. opposite edges. For this reason the authors often refer to it as a polykite. See Figure 6.1 for the Hat tile, its mirror image, and a rendering into kites that combine to form regular hexagons. Interestingly, the Hat was discovered by Smith when tinkering with polyforms to see what sorts of visually interesting tiling he could create. He got stuck building large patches of Hat tiles and sought help from Kaplan's Heesch Number software. The _Heesch Number_ of a tile is the largest number of concentric rings that form a patch around a single tile by isometric copies, where a ring consists of all tiles that touch the previous ring. The current record holder for a tile that does not tile the plane was found by Basic [6] and has Heesch Number 6. Kaplan was able to show that the Hat has at least Heesch number 10 and then improved that to 16. So, it seemed pretty likely that the Hat would tile the plane, and in a way that didn't seem to have any translational periodicity. This type of attack on the monotile problem was completely different from previous attempts, where authors typically start with a tiling of the plane and then add rules to try and enforce nonperiodicity. In some ways this approach goes back to a prototile set decidability problem, in the original sense of Wang. A patch of an infinite tiling formed from Hats is depicted in Figure 6.2. 
Notice that reflected Hats appear with a low frequency compared to Hat tiles. Figure 6.1. The Hat tile and its mirror image are an aperiodic tile set without any need for further decorations. On the right we see how the Hat is formed from kites that combine to form regular hexagons. Figure 6.2. A patch of Hat tiles. **Theorem 6.1** ([16, Theorem 1.1]).: _The Hat monotile is aperiodic; that is, there are tilings formed by isometries of the Hat tile, and every such tiling is nonperiodic._ There are two proofs that the Hat tiles the plane and two proofs of aperiodicity. The authors show that Hat tilings arise from a (nonstone) substitution rule that is recognisable, in the sense that the hierarchy can be deduced in an infinite tiling. On one hand, this allows one to construct patches of arbitrarily large size that nest into each other in order to construct a tiling of the plane. One the other hand, recognisability implies that one can identify structure in the tiling of arbitrarily large size, and hence the tiling cannot be periodic. The second proof of existence uses a rather simple fusion system (see [8] for the definition of fusion) to define arbitrarily large patches of tiles that expand out from a fixed tile, see [16, Figure 2.11]. The second proof of aperiodicity is more involved but has already led to the discovery of the Spectre tile. The idea behind the second proof of aperiodicity is to contract the edges of the hat tile to find other combinatorially equivalent tilings in the sense that the patterns formed by tiles are the same. Indeed, the two edge lengths in a Hat tile are at a ratio of \(\sqrt{3}\) to one another and come in complementary (opposite) pairs. Thus, we can label the Hat tile as \(\operatorname{Tile}(\sqrt{3},1)\). The idea is now to consider \(\operatorname{Tile}(a,b)\) for \(0\leq a\leq\sqrt{3}\) and \(0\leq b\leq 1\). There is an excellent animation of this on YouTube titled "Aperiodic monotile animation". At the two extremes are \(\operatorname{Tile}(0,1)\), called the Comet and \(\operatorname{Tile}(\sqrt{3},0)\) called the Chevron, see [16, Figure 3.1]. Since these two tiles are combinatorially equivalent to the Hat tiling, the authors are free to use them to deduce properties of the Hat tiling. Thus, the authors suppose that the Comet tiling is periodic and aim to derive a contradiction, meaning that this hypothesis could not be correct and the tiling is nonperiodic. Since the Comet is assumed to be periodic and forms the same combinatorial tiling as the Chevron, we can deduce that the Chevron is also periodic. Moreover, since the tilings are combinatorially equivalent there must be an affine map between the periodic lattice of the Comet and that of the Chevron. Using an argument, somewhat similar to an argument that \(\sqrt{2}\) is irrational, they prove that such an affine map cannot exist. This implies that the Hat tiling must also be nonperiodic. See [16, Section 3] for further details. One of the most interesting developments was again discovered by Smith and his coauthors, the Spectre tile [17]. Amazingly, this is \(\operatorname{Tile}(1,1)\) from the previous paragraph, which can be used to tile the plane periodically if one allows reflection of the tile. However, the authors realised that it is still possible to tile the plane if reflections of the tile are not allowed to appear in a tiling, and more amazingly that all such tilings are nonperiodic! 
\(\operatorname{Tile}(1,1)\) appears on the left hand side of Figure 6.3 and a version of the Spectre tile appears in the centre. The Spectre is merely an edge modification of \(\operatorname{Tile}(1,1)\) to curves, which eliminate the possibility of using both the tile and its reflection to tile the plane. We understand that Dave Smith proposed to call the tile the _Spectre_ due to the image in the centre of Figure 6.3. The image on the right was constructed through a fusion system to build arbitrarily large patches [17], and we think that also looks spectre like. **Theorem 6.2** ([17, Theorem 2.2]).: _Tile\((1,1)\) and the Spectre monotile are aperiodic; that is, there are tilings formed by direct (orientation-preserving) isometries of each tile, and every such tiling is nonperiodic._ Let us make a couple of remarks about Theorem 6.2. First, this is an absolutely incredible result that was completely unexpected, even given the recent Hat tile result. We note that \(\operatorname{Tile}(1,1)\) is referred to by the authors of [17] as a _weakly chiral monotile_ since it satisfies Theorem 6.2, but allowing a reflection results in a tile that can be used to construct a periodic tiling. A Spectre tile, see the centre of Figure 6.3, is referred to as a _strictly chiral aperiodic monotile_ since it satisfies Theorem 6.2, but any prototile set containing both the Spectre and its mirror image nonredundantly does not tile the plane. Theorem 6.2 was proved by showing that \(\mathrm{Tile}(1,1)\) and the Spectre arise from a (nonstone) inflation rule that is recognisable, similarly to the first proof of Theorem 6.1 for the Hat tile. For the experts, Baake, Gahler, and Sadun have extended the 1-parameter family of tiles given by \(\mathrm{Tile}(a,b)\) to complex variables [3]. They showed that all the continuous hulls are topologically conjugate dynamical systems under these parameters, up to linear rescaling of the ambient space, and found a self-similar representative they call the CAP tiling. The name follows from their result that the tiling is a \(c\)ut _a_nd _p_roject tiling, and hence forms a model set. They also compute the cohomology of the family and show that it has pure-point dynamical spectrum. In current work, the same authors have also shown that the Spectre tile has similar properties [4]. Figure 6.3. The image on the left is \(\mathrm{Tile}(1,1)\) and one must forbid reflections to obtain an aperiodic monotile. The tile in the centre is the Spectre tile that tiles the plane without allowing reflection, and does not tile the plane if there is at least one Spectre tile and at least one mirror image of the Spectre tile. The image on the right comes from a fusion system to construct arbitrarily large patches of Spectre tiles. Figure 6.4. A patch of Spectre tiles. Since the original Hat preprint appeared [16], Akiyama and Araki have provided yet another proof of existence and aperiodicty of a member of the Hat family [1]. They use the Golden Hex substitution to prove existence and Golden Ammann bars to prove aperiodicty. Mathematicians are now discovering that the unique structure of the Spectre tiling lends it many other fascinating properties [15]. For example, the _dimer model_ asks how many ways there are to colour the edges of the tiles so that each vertex meets exactly one coloured edge (dimer). Remarkably, this model can be exactly solved on the Spectre tiling. 
The number of dimer arrangements is \(2^{N_{\text{Mystic}}+1}\), where \(N_{\text{Mystic}}\) is the number of _Mystic_ tiles, see [17, p.6] or [15, p.2] for the definition of a Mystic. More remarkable still, the dimer model can also be exactly solved when quantum superpositions of dimer placements are allowed! Thus, Singh and Flicker have exactly solved the quantum dimer model for the first time in any setting.
2307.05847
Large deviations of conservative stochastic partial differential equations
In this paper, we establish a large deviation principle for the conservative stochastic partial differential equations, whose solutions are related to stochastic differential equations with interaction. The weak convergence method and the contraction principle in the theory of large deviations play an important role.
Ping Chen, Tusheng Zhang
2023-07-11T23:39:09Z
http://arxiv.org/abs/2307.05847v1
# Large deviations of conservative stochastic partial differential equations # Large deviations of conservative stochastic partial differential equations Ping Chen\({}^{1,}\)1, Tusheng Zhang\({}^{2,}\)2 \({}^{1}\) _School of Mathematical Sciences, University of Science and Technology of China, Hefei, 230026, China. \({}^{2}\) Department of Mathematics, University of Manchester, Oxford Road, Manchester, M13 9PL, UK._ [email protected] Footnote 2: [email protected] **Abstract:** In this paper, we establish a large deviation principle for the conservative stochastic partial differential equations, whose solutions are related to stochastic differential equations with interaction. The weak convergence method and the contraction principle in the theory of large deviations play an important role. **Key Words:** Large deviation principle; conservative stochastic partial differential equations; stochastic differential equations with interaction; weak convergence method. **AMS Mathematics Subject Classification:** Primary 60H15, 60F10; Secondary 60G57. ## 1 Introduction Let \(T\in(0,\infty)\) be fixed. In this paper, we study the small noise large deviation principle (LDP) of conservative stochastic partial differential equations (SPDEs), which is written as follows: \[\begin{split}\mathrm{d}\mu_{t}&=\frac{1}{2}D^{2}:(A( t,\cdot,\mu_{t})\mu_{t})\mathrm{d}t-\nabla\cdot(V(t,\cdot,\mu_{t})\mu_{t})\mathrm{d}t \\ &\quad-\int_{\Theta}\nabla\cdot(G(t,\cdot,\mu_{t},\theta)\mu_{t}) W(\mathrm{d}\theta,\mathrm{d}t),\end{split} \tag{1.1}\] where \(\{W_{t}\}_{t\in[0,T]}\) is a cylindrical Wiener process on some \(L^{2}\) space \(L^{2}(\Theta,\vartheta)\) defined on a complete probability space \((\Omega,\mathcal{F},\mathbb{P})\). For the precise conditions on \(A\), \(V\) and \(G\), we refer the readers to Section 2. Conservative SPDEs arise as fluctuating continuum models for interacting particle system [11], and have been used to describe the hydrodynamic large deviations of simple exclusion and zero range particle processes, see, for instance, [15] and [19]. In [8], the authors obtained the well-posedness of conservative SPDEs with correlated noise. In [17], the authors adopted a duality argument to establish the well-posedness of measure-valued solutions to the nonlinear, nonlocal stochastic Fokker-Planck equations. Additionally, motivated from fluid dynamics in vorticity form, the case of signed measure-valued solutions has been considered in [3, 10, 20] and the references therein. In a recent work [9], the authors showed that under sufficiently nice initial distribution \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and appropriate conditions on the coefficients, the each solution to (1.1) was given as a superposition of solutions to the following stochastic differential equation (SDE) with interaction: \[\left\{\begin{aligned} &\mathrm{d}X_{t}(x)=V(t,X_{t}(x),\bar{\mu}_{t })\mathrm{d}t+\int_{\Theta}G(t,X_{t}(x),\bar{\mu}_{t},\theta)W(\mathrm{d}\theta,\mathrm{d}t),\\ & X_{0}(x)=x,\quad\bar{\mu}_{t}=\mu_{0}\circ(X_{t}(\cdot))^{-1}, \quad x\in\mathbb{R}^{d},\quad t\in[0,T].\end{aligned}\right. \tag{1.2}\] SDEs with interaction have been widely studied since the work by Dorogovtsev in [1]. In [18], the authors studied the limit behavior of solutions to SDEs with interaction in one-dimensional case. This has been extended to two-dimensional case in [16]. In [7], the authors proved a Stroock-Varadhan-type support theorem for stochastic flows generated by SDEs with interaction. 
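Although the analysis below is purely theoretical, it may help to keep in mind how a solution of (1.2) and the measure flow \(\bar{\mu}_{t}=\mu_{0}\circ(X_{t}(\cdot))^{-1}\) can be approximated numerically. The following sketch is only illustrative and rests on assumptions of ours: dimension \(d=1\), a finite noise index set \(\Theta=\{1,\ldots,M\}\) with counting measure \(\vartheta\), simple Lipschitz coefficients \(V\) and \(G\), and \(\mu_{0}\) standard Gaussian. It applies an Euler--Maruyama step to a cloud of starting points drawn from \(\mu_{0}\), all driven by the same Brownian increments, and uses their empirical law as a proxy for \(\bar{\mu}_{t}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coefficients (our choice): V(t, x, mu) = -x + mean(mu),
# G(t, x, mu, theta) = 0.3 * x / theta for theta = 1, ..., M.
M, N, T, n_steps = 3, 2000, 1.0, 200
dt = T / n_steps
thetas = np.arange(1, M + 1)

def V(x, mu_mean):
    return -x + mu_mean

def G(x):
    # rows: starting points, columns: the M noise directions theta
    return 0.3 * x[:, None] / thetas[None, :]

X = rng.standard_normal(N)                       # N starting points sampled from mu_0 = N(0, 1)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(M)    # common Brownian increments W(dtheta, dt)
    mu_mean = X.mean()                           # functional of the empirical law ~ bar{mu}_t
    X = X + V(X, mu_mean) * dt + G(X) @ dW

print("mean and variance of the empirical proxy for bar{mu}_T:", X.mean(), X.var())
```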
In [13], the authors established the well-posedness of backward stochastic differential equations. We also refer to readers to [2, 14] and references therein for further related works. Large deviations principles (LDP) provide exponential estimates for the probability of the rare events in terms of some explicit rate function. It has wide applications in statistics, statistic mechanics, stochastic dynamical systems, financial mathematics etc. The early framework was introduced by Varadhan in [22], in which the small noise (also called Freidlin-Wentzell) LDP for finite dimensional diffusion processes were studied. The literature on large deviations is huge, and a list of them can be found, e.g., in [21]. The purpose of this paper is to establish a Freidlin-Wentzell type LDP for the conservative SPDE (1.1). To obtain the LDP of the solutions, we first establish the LDP of the associated SDEs with interaction (1.2) and then use the contraction principle in the theory of large deviations. We will adopt the weak convergence method in large deviations introduced in [4]. More precisely, we will use the more convenient sufficient conditions given in the paper [6]. The remaining part of this paper is organized as follows. In Section 2, we introduce the basic notations and assumptions, recall well-posedness results for the SDE with interaction (1.2) and conservative SPDE (1.1). Moreover, we characterize the trajectory space of the solutions of the SDEs with interaction (1.2). The trajectory space will be the space on which the large deviation is established. In Section 3, we present the main results of the paper, and then outline how the results will be proved. Section 4 and Section 5 are devoted to the proof of the LDP for the SDE with interaction (1.2). ## 2 Preliminaries Throughout this paper, we use the following notations and assumptions. For \(n\in\mathbb{N}\), let \([n]:=\{1,\cdots,n\}\). For vectors \(a\), \(b\in\mathbb{R}^{d}\) and \(A\), \(B\in\mathbb{R}^{d\times d}\), we set \(a\cdot b=\sum_{i=1}^{d}a_{i}b_{i}\), \(|a|=\sqrt{a\cdot a}\), and \(A:B=\sum_{i,j=1}^{d}a_{i,j}b_{i,j}\). For \(a\in\mathbb{R}^{d}\) and \(r>0\), denote by \(B(a,r)\) the open ball of radius \(r\) centred at \(a\) in \(\mathbb{R}^{d}\). We use the letter \(C_{p}\) to denote a generic positive constant depending on some parameter \(p\), whose value may change from line to line. Let \(C^{2}(\mathbb{R}^{d},\mathbb{R})\) be the space of twice continuously differentiable functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\). Let \(C^{2}_{c}(\mathbb{R}^{d},\mathbb{R})\) denote the subspace of \(C^{2}(\mathbb{R}^{d},\mathbb{R})\) of all compactly supported functions. For \(\varphi\), \(f_{i}\in C^{2}(\mathbb{R}^{d},\mathbb{R})\), \(i\in[d]\), define \(\nabla\varphi\) and \(D^{2}\varphi\) as the gradient and Hessian matrix of \(\varphi\), respectively, denote by \(\nabla\cdot f=\sum_{i=1}^{d}\partial f_{i}/\partial_{x_{i}}\), where \(f=(f_{i})_{i\in[d]}\). Let \(C(\mathbb{R}^{d})\) denote the space of continuous functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{d}\). 
For some \(\delta\in(0,\frac{1}{3})\), let \[C_{\delta}(\mathbb{R}^{d}):=\{f\in C(\mathbb{R}^{d}):\sup_{x\in\mathbb{R}^{d}} \frac{|f(x)|}{1+|x|^{1+\delta}}<\infty\}.\] Let \(C([0,T],C_{\delta}(\mathbb{R}^{d}))\) be the space of continuous functions from \([0,T]\) to \(C_{\delta}(\mathbb{R}^{d})\), equipped with the maximum norm, defined as follows \[\|f\|_{\infty,T}:=\sup_{t\in[0,T]}\sup_{x\in\mathbb{R}^{d}}\frac{|f(t,x)|}{1+| x|^{1+\delta}},\ \ f\in C([0,T],C_{\delta}(\mathbb{R}^{d})). \tag{2.1}\] For \(p\geq 1\), let \(\mathcal{P}_{p}(\mathbb{R}^{d})\) denote the set of all probability measures on \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\) with finite \(p\)-moment. It is well known that \(\mathcal{P}_{p}(\mathbb{R}^{d})\) is a polish space under the Wasserstein \(p\)-distance \[\mathcal{W}_{p}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}(\int_{\mathbb{R}^{d}\times \mathbb{R}^{d}}|x-y|^{p}\pi(\mathrm{d}x,\mathrm{d}y))^{\frac{1}{p}},\ \mu,\nu\in\mathcal{P}_{p}(\mathbb{R}^{d}),\] where \(\Pi(\mu,\nu)\) denotes all probability measures on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) with marginals \(\mu\) and \(\nu\). Define \(\langle\varphi,\mu\rangle\) as the integration of a function \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) with respect to a measure \(\mu\in\mathcal{P}_{p}(\mathbb{R}^{d})\). Given a measure space \((\Theta,\mathcal{G},\vartheta)\), denote by \(L^{2}(\Theta,\vartheta):=L^{2}(\Theta,\mathcal{G},\vartheta)\) the usual space of all square integrable functions from \(\Theta\) to \(\mathbb{R}\), equipped with the inner product and norm, which are denoted by \(\langle\cdot,\cdot\rangle_{\vartheta}\) and \(\|\cdot\|_{\vartheta}\), respectively. Let \(\{W_{t}\}_{t\in[0,T]}\) be a cylindrical Wiener process on \(L^{2}(\Theta,\vartheta)\) defined on a complete probability space \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) with the augmented filtration \(\mathbb{F}:=\{\mathcal{F}_{t}\}_{t\in[0,T]}\) generated by \(\{W_{t}\}_{t\in[0,T]}\). For an \((\mathcal{F}_{t})\)-progressively measurable \(L^{2}(\Theta,\vartheta)\)-valued process \(g(t,\cdot)\), \(t\in[0,T]\), with \[\int_{0}^{t}\|g(s,\cdot)\|_{\vartheta}^{2}\mathrm{d}s<\infty\quad a.s.\] for every \(t\in[0,T]\), the following Ito-type stochastic integral \[\int_{0}^{t}\int_{\Theta}g(s,\theta)W(\mathrm{d}\theta,\mathrm{d}s):=\int_{0} ^{t}\Upsilon(s)\mathrm{d}W_{s},\] is well defined, where the operator-valued process \(\Upsilon(s):L^{2}(\Theta,\vartheta)\to\mathbb{R}\) is defined by \(\Upsilon(s)h=\langle g(s,\cdot),h\rangle_{\vartheta}\) for \(h\in L^{2}(\Theta,\vartheta)\), \(s\in[0,T]\). Now we give the assumptions on the coefficients \(V\), \(G\) and \(A\) appearing in equations (1.1) and (1.2). Let \(V:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R} ^{d}\), \(G:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to(L_{2}( \Theta,\vartheta))^{d}\) and \(A:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R} ^{d\times d}\) are \(\mathcal{B}([0,T])\otimes\mathcal{B}(\mathbb{R}^{d})\otimes\mathcal{B}( \mathcal{P}_{2}(\mathbb{R}^{d}))\)-measurable functions. 
**Assumption 2.1**: _For all \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) and \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\)_ \[A(t,x,\mu)=(\langle G_{i}(t,x,\mu,\cdot),G_{j}(t,x,\mu,\cdot)\rangle_{ \vartheta})_{i,j\in[d]}.\] **Assumption 2.2**: _The coefficients \(V\) and \(G\) are Lipschitz continuous with respect to \(x\) and \(\mu\), that is, there exists \(L>0\) such that for every \(t\in[0,T]\), \(x\), \(y\in\mathbb{R}^{d}\) and \(\mu\), \(\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\)_ \[|V(t,x,\mu)-V(t,y,\nu)|+\||G(t,x,\mu)-G(t,y,\nu)|\|_{\vartheta}\leq L(|x-y|+ \mathcal{W}_{2}(\mu,\nu)),\] _and_ \[|V(t,0,\delta_{0})|+\||G(t,0,\delta_{0})|\|_{\vartheta}\leq L,\] _where \(\delta_{0}\) denotes the \(\delta\)-measure at 0 on \(\mathbb{R}^{d}\)._ **Remark 2.1**: _Assumption 2.2 implies that for all \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) and \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the coefficients \(V\) and \(G\) satisfy_ \[|V(t,x,\mu)|+\||G(t,x,\mu)|\|_{\vartheta}\leq L(1+|x|+\mathcal{W}_{2}(\mu, \delta_{0})),\] Following [9, Definition 2.2, 2.5 and 2.6], we introduce the definitions of solutions for equations (1.1) and (1.2), and the superposition principle. **Definition 2.1**: _Let \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\). A continuous \((\mathcal{F}_{t})\)-adapted process \(\{\mu_{t}\}_{t\in[0,T]}\), in \(\mathcal{P}_{2}(\mathbb{R}^{d})\), is a solution to the conservative SPDE (1.1) started from \(\mu_{0}\) if for every \(\varphi\in C_{c}^{2}(\mathbb{R}^{d},\mathbb{R})\), a.s. the equality_ \[\begin{split}\langle\varphi,\mu_{t}\rangle&= \langle\varphi,\mu_{0}\rangle+\frac{1}{2}\int_{0}^{t}\langle D^{2}\varphi:A( s,\cdot,\mu_{s}),\mu_{s}\rangle\mathrm{d}s\\ &+\int_{0}^{t}\langle\nabla\varphi\cdot V(s,\cdot,\mu_{s}),\mu_{ s}\rangle\mathrm{d}s+\int_{0}^{t}\int_{\Theta}\langle\nabla\varphi\cdot G(s, \cdot,\mu_{s},\theta),\mu_{s}\rangle W(\mathrm{d}\theta,\mathrm{d}s)\end{split} \tag{2.2}\] _holds for every \(t\in[0,T]\)._ **Definition 2.2**: _A family of continuous processes \(\{X_{t}(x)\}_{t\in[0,T]}\), \(x\in\mathbb{R}^{d}\), is called a solution to the SDE with interaction (1.2) with initial mass distribution \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) if the restriction of \(X\) to the time interval \([0,t]\) is \(\mathcal{B}([0,t])\otimes\mathcal{B}(\mathbb{R}^{d})\otimes\mathcal{F}_{t}\)-measurable, \(\bar{\mu}_{t}=\mu_{0}\circ(X_{t}(\cdot))^{-1}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) a.s. for all \(t\in[0,T]\) and for every \(x\in\mathbb{R}^{d}\), a.s._ \[X_{t}(x)=x+\int_{0}^{t}V(s,X_{s}(x),\bar{\mu}_{s})\mathrm{d}s+\int_{0}^{t}\int _{\Theta}G(s,X_{s}(x),\bar{\mu}_{s},\theta)W(\mathrm{d}\theta,\mathrm{d}s)\] _for all \(t\in[0,T]\)._ **Definition 2.3**: _A continuous process \(\{\mu_{t}\}_{t\in[0,T]}\), in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) started from \(\mu_{0}\) is a superposition solution to the conservative SPDE (1.1) or satisfies the superposition principle if there exists a solution \(X_{t}(x)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), to the SDE with interaction (1.2) such that \(\mu_{t}=\mu_{0}\circ(X_{t}(\cdot))^{-1}\), \(t\in[0,T]\), a.s._ With these definitions, we recall the following results from [9]. **Proposition 2.4**: _Let Assumption 2.2 hold. Then for every \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the SDE with interaction (1.2) has a unique solution \(X_{t}(x)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\). 
Moreover, there exists a version of \(X_{.}(x)\), \(x\in\mathbb{R}^{d}\), that is a continuous in \((t,x)\), and for each \(p\geq 2\), there exists a constant \(C_{L,p,d,T}>0\) such that for all \(x\), \(y\in\mathbb{R}^{d}\)_ \[\mathbb{E}[\sup_{t\in[0,T]}|X_{t}(x)|^{p}]\leq C_{L,p,d,T}(1+\int_{\mathbb{R} ^{d}}|x|^{p}\mu_{0}(\mathrm{d}x)+|x|^{p}), \tag{2.3}\] _and_ \[\mathbb{E}[\sup_{t\in[0,T]}|X_{t}(x)-X_{t}(y)|^{p}]\leq C_{L,p,d,T}|x-y|^{p}. \tag{2.4}\] **Proposition 2.5**: _Let Assumptions 2.1 and 2.2 hold. Then for every \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\), there exists a unique superposition solution \(\{\mu_{t}\}_{t\in[0,T]}\) to the conservative SPDE (1.1) started from \(\mu_{0}\)._ From now on, we will only consider the version \(X_{t}(x)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\), of a solution to the SDE with interaction (1.2) which is continuous in \((t,x)\). Before the end of this section, we present a result on the space of the trajectory of the solution \(X_{t}(x)\). **Proposition 2.6**: _Let Assumption 2.2 hold and \(X_{t}(x)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) be the unique solution to the SDE with interaction (1.2) with initial mass distribution \(\mu_{0}\in\mathcal{P}_{m}(\mathbb{R}^{d})\), \(m>\max\{\frac{1}{\delta},2d\}\). Then \(X(\omega)\in C([0,T],C_{\delta}(\mathbb{R}^{d}))\) for \(\mathbb{P}\)-almost all \(\omega\in\Omega\)._ We first give a moment estimate, which plays an important role in the proof of Proposition 2.6. **Lemma 2.1**: _Let Assumption 2.2 hold and \(X_{t}(x)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\) be the unique solution to the SDE with interaction (1.2) with initial distribution \(\mu_{0}\in\mathcal{P}_{m}(\mathbb{R}^{d})\), \(m>2d\). Then there exists a constant \(C_{L,m,d,T}\) such that for every \(k\in\mathbb{N}\)_ \[\mathbb{E}[\sup_{|x|\in[k,k+1)}\sup_{t\in[0,T]}|X_{t}(x)|^{m}]\leq C_{L,m,d,T} (1+k)^{m}. \tag{2.5}\] **Proof:** Let \(\mathcal{R}_{n}\) denotes a cube in \(\mathbb{R}^{d}\) whose length of the edge is \(n\). 
For each \(i\in[d]\), \(t\in[0,T]\) and \(n\in\mathbb{N}\), we have \[\int_{\mathcal{R}_{1}}\int_{\mathcal{R}_{1}}\Big{|}\frac{X_{t}^{i }(nx)-X_{t}^{i}(ny)}{|x-y|/\sqrt{d}}\Big{|}^{m}\mathrm{d}x\mathrm{d}y\] \[= \int_{\mathcal{R}_{n}}\int_{\mathcal{R}_{n}}\Big{|}\frac{X_{t}^{ i}(x^{\prime})-X_{t}^{i}(y^{\prime})}{|x^{\prime}-y^{\prime}|/n\sqrt{d}}\Big{|}^{m} \cdot n^{-2d}\mathrm{d}x^{\prime}\mathrm{d}y^{\prime}\] \[= (\sqrt{d})^{m}n^{m-2d}\int_{\mathcal{R}_{n}}\int_{\mathcal{R}_{n }}\frac{|X_{t}^{i}(x^{\prime})-X_{t}^{i}(y^{\prime})|^{m}}{|x^{\prime}-y^{ \prime}|^{m}}\mathrm{d}x^{\prime}\mathrm{d}y^{\prime}\] \[=: B_{t}^{i,n}.\] Let \[B^{n}:=(\sqrt{d})^{m}n^{m-2d}\int_{\mathcal{R}_{n}}\int_{\mathcal{R}_{n}}\frac {\sup_{t\in[0,T]}|X_{t}(x^{\prime})-X_{t}(y^{\prime})|^{m}}{|x^{\prime}-y^{ \prime}|^{m}}\mathrm{d}x^{\prime}\mathrm{d}y^{\prime}.\] By Fubini's theorem and (2.4), we have \[\mathbb{E}[B_{t}^{i,n}] \leq \mathbb{E}[B^{n}]\] \[\leq (\sqrt{d})^{m}n^{m-2d}C_{L,m,d,T}\int_{\mathcal{R}_{n}}\int_{ \mathcal{R}_{n}}\mathrm{d}x^{\prime}\mathrm{d}y^{\prime}\] \[\leq (\sqrt{d})^{m}n^{m-2d}C_{L,m,d,T}n^{2d}\] \[\leq C_{L,m,d,T}n^{m}.\] Applying the Garsia-Rodemich-Rumsey lemma (see Theorem 1.1 in [12]), we obtain that for each \(i\in[d]\), \(t\in[0,T]\) and \(n\in\mathbb{N}\), there exists a set \(\Omega_{i,t,n}\in\mathcal{F}\) such that \(\mathbb{P}(\Omega_{i,t,n})=0\) and for all \(\omega\in\Omega\backslash\Omega_{i,t,n}\), \(x\), \(y\in\mathcal{R}_{1}\) \[|X_{t}^{i}(nx)-X_{t}^{i}(ny)| \leq 8\int_{0}^{|x-y|}\Big{(}\frac{B_{t}^{i,n}}{u^{2d}}\Big{)}^{\frac{ 1}{m}}\mathrm{d}u\] \[= \frac{8}{1-2d/m}(B_{t}^{i,n})^{\frac{1}{m}}|x-y|^{1-\frac{2d}{m}}\] \[\leq \frac{8}{1-2d/m}(B^{n})^{\frac{1}{m}}|x-y|^{1-\frac{2d}{m}},\] which implies that with probability one, for all \(x^{\prime}\), \(y^{\prime}\in{\cal R}_{n}\) \[|X^{i}_{t}(x^{\prime})-X^{i}_{t}(y^{\prime})|\leq\frac{8(1/n)^{1-\frac{2d}{m}}} {1-2d/m}(B^{n})^{\frac{1}{m}}|x^{\prime}-y^{\prime}|^{1-\frac{2d}{m}}.\] Since the mapping: \((x,t)\mapsto X_{t}(x)\) is continuous, it follows that there exists a set \(\Omega_{i,n}\in{\cal F}\) such that \(\mathbb{P}(\Omega_{i,n})=0\) and for all \(\omega\in\Omega\backslash\Omega_{i,n}\), \(x^{\prime}\), \(y^{\prime}\in{\cal R}_{n}\) \[\sup_{t\in[0,T]}|X^{i}_{t}(x^{\prime})-X^{i}_{t}(y^{\prime})|\leq\frac{8(1/n)^ {1-\frac{2d}{m}}}{1-2d/m}(B^{n})^{\frac{1}{m}}|x^{\prime}-y^{\prime}|^{1- \frac{2d}{m}}. \tag{2.7}\] Now, choose \(x_{0}\in\mathbb{R}^{d}\) with \(|x_{0}|=k\) and let \(n=2(k+1)\). By (2.3), (2.6) and (2.7), we obtain that \[\mathbb{E}[\sup_{|x|\in[k,k+1)}\sup_{t\in[0,T]}|X_{t}(x)|^{m}]\] \[\leq C_{m}(\mathbb{E}[\sup_{|x|\in[k,k+1)}\sup_{t\in[0,T]}|X_{t}(x)-X _{t}(x_{0})|^{m}]+\mathbb{E}[\sup_{t\in[0,T]}|X_{t}(x_{0})|^{m}])\] \[\leq C_{m,d}(\mathbb{E}[B^{2(k+1)}]+\mathbb{E}[\sup_{t\in[0,T]}|X_{t} (x_{0})|^{m}])\] \[\leq C_{L,m,d,T}(1+k)^{m}.\] Now we come back to the proof of Proposition 2.6. **Proof:** Since the mapping: \((x,t)\mapsto X_{t}(x)\) is continuous, it is sufficient to prove that with probability one, \[\lim_{|x|\to\infty}\sup_{t\in[0,T]}\frac{|X_{t}(x)|}{1+|x|^{1+\delta}}=0. \tag{2.8}\] It is obvious that (2.8) is equivalent to \[\lim_{n\to\infty}\sup_{|x|\geq n}\sup_{t\in[0,T]}\frac{|X_{t}(x)|}{1+|x|^{1+ \delta}}=0. \tag{2.9}\] For each \(k\in\mathbb{N}\), let \[\eta_{k}:=\sup_{|x|\in[k,k+1)}\sup_{t\in[0,T]}\frac{|X_{t}(x)|}{1+|x|^{1+ \delta}}. 
\tag{2.10}\] Then \[\sup_{|x|\geq n}\sup_{t\in[0,T]}\frac{|X_{t}(x)|}{1+|x|^{1+\delta}}=\sup_{k \geq n}\eta_{k},\] and (2.9) is equivalent to \(\lim_{k\to\infty}\eta_{k}=0\). Now, we verify that with probability one, \(\lim_{k\to\infty}\eta_{k}=0\), completing the proof. For every \(\gamma>0\) and \(k\in\mathbb{N}\), let \[A_{k}:=\{\sup_{|x|\in[k,k+1)}\sup_{t\in[0,T]}|X_{t}(x)|^{m}>\gamma(1+k^{1+ \delta})^{m}\}.\] By Chebyshev's inequality and Lemma 2.1, we have \[\sum_{k=1}^{\infty}\mathbb{P}(A_{k}) \leq \sum_{k=1}^{\infty}\frac{\mathbb{E}[\sup_{|x|\in[k,k+1)}\sup_{t\in[0,T]}|X_{t}(x)|^{m}]}{\gamma(1+k^{1+\delta})^{m}}\] \[\leq \sum_{k=1}^{\infty}\frac{C_{L,m,d,T}(1+k)^{m}}{\gamma(1+k^{1+ \delta})^{m}}\] \[\leq C_{L,m,d,T,\gamma}\sum_{k=1}^{\infty}\frac{1}{k^{\delta m}}<\infty,\] here we have used \(m>\max\{\frac{1}{\delta},2d\}\) in the last step. Using the Borel-Cantelli lemma, we get that with probability one, \[\lim_{k\to\infty}\eta_{k}^{m}\leq\lim_{k\to\infty}\frac{\sup_{|x|\in[k,k+1)} \sup_{t\in[0,T]}|X_{t}(x)|^{m}}{(1+k^{1+\delta})^{m}}=0.\] The proof is complete. \(\blacksquare\) ## 3 Statement of main results In this section, we assume that the initial mass distribution \(\mu_{0}\in\mathcal{P}_{m}(\mathbb{R}^{d})\) with \(m>\max\{\frac{1}{\delta},2d\}\). For any \(\epsilon>0\), consider the following equation: \[\begin{cases}\mathrm{d}X_{t}^{\epsilon}(x)=V(t,X_{t}^{\epsilon}(x),\bar{\mu}_ {t}^{\epsilon})\mathrm{d}t+\sqrt{\epsilon}\int_{\Theta}G(t,X_{t}^{\epsilon}(x ),\bar{\mu}_{t}^{\epsilon},\theta)W(\mathrm{d}\theta,\mathrm{d}t),\\ X_{0}^{\epsilon}(x)=x,\quad\bar{\mu}_{t}^{\epsilon}=\mu_{0}\circ(X_{t}^{ \epsilon}(\cdot))^{-1},\quad x\in\mathbb{R}^{d},\quad t\in[0,T].\end{cases} \tag{3.1}\] By Proposition 2.4, the equation (3.1) admits a unique solution \(X_{t}^{\epsilon}(x)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\). Moreover, by Proposition 2.6, for \(\mathbb{P}\)-almost all \(\omega\in\Omega\), \(X^{\epsilon}(\omega)\in C([0,T],C_{\delta}(\mathbb{R}^{d}))\). For any \(h\in L^{2}([0,T],L^{2}(\Theta,\vartheta))\), consider the so-called skeleton equation: \[\begin{cases}X_{t}^{h}(x)=x+\int_{0}^{t}V(s,X_{s}^{h}(x),\mu_{s}^{h})\mathrm{ d}s+\int_{0}^{t}\int_{\Theta}G(s,X_{s}^{h}(x),\mu_{s}^{h},\theta)h(s,\theta) \vartheta(\mathrm{d}\theta)\mathrm{d}s,\\ X_{0}^{h}(x)=x,\quad\mu_{s}^{h}=\mu_{0}\circ(X_{s}^{h}(\cdot))^{-1},\quad x \in\mathbb{R}^{d},\quad s\in[0,T].\end{cases} \tag{3.2}\] We have the following result: **Proposition 3.1**: _Under Assumption 2.2, the skeleton equation (3.2) has a unique solution \(X_{t}^{h}(x)\), \(t\in[0,T]\), \(x\in\mathbb{R}^{d}\). Moreover, \(X^{h}\in C([0,T],C_{\delta}(\mathbb{R}^{d}))\)._ **Proof:** Let \(\tilde{V}(t,x,\mu):=V(t,x,\mu)+\int_{\Theta}G(t,x,\mu,\theta)h(t,\theta) \vartheta(\mathrm{d}\theta)\). For every \(t\in[0,T]\), \(x\), \(y\in\mathbb{R}^{d}\) and \(\mu\), \(\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), by Assumption 2.2, \[|\tilde{V}(t,x,\mu)-\tilde{V}(t,y,\nu)|\] \[\leq |V(t,x,\mu)-V(t,y,\nu)|+|\int_{\Theta}(G(t,x,\mu,\theta)-G(t,y, \nu,\theta))\cdot h(t,\theta)\vartheta(\mathrm{d}\theta)|\] \[\leq L(1+\|h(t,\cdot)\|_{\vartheta})(|x-y|+\mathcal{W}_{2}(\mu,\nu)).\] Then, using the similar arguments as in the proof of Theorem 2.9 in [9] and in the proof of Proposition 2.6, and using the fact that \(L(1+\|h(t,\cdot)\|_{\vartheta})\in L^{2}([0,T],\mathbb{R}^{+})\), one can show that there exists a unique solution \(X^{h}\in C([0,T],C_{\delta}(\mathbb{R}^{d}))\) to equation (3.2). We omit the details. 
\(\blacksquare\) We now state the main result in this paper. **Theorem 3.2**: _Let Assumption 2.2 hold and \(X^{\epsilon}\) be the unique solution to equation (3.1). Then the family of \(\{X^{\epsilon}\}_{\epsilon>0}\) satisfies a LDP on the space \(C([0,T],C_{\delta}(\mathbb{R}^{d}))\) with rate function_ \[I(g):=\inf_{\{h\in L^{2}([0,T],L^{2}(\Theta,\vartheta)):g=Y^{h}\}}\{\frac{1}{2 }\int_{0}^{T}\|h(t,\cdot)\|_{\vartheta}^{2}\mathrm{d}t\},\ \forall\ g\in C([0,T],C_{\delta}(\mathbb{R}^{d})),\] _with the convention \(\inf\{\emptyset\}=\infty\), here \(Y^{h}\) solves equation (3.2)._ **Proof:** Let \(H\) be a Hilbert space such that the imbedding \(L^{2}(\Theta,\vartheta)\subset H\) is Hilbert-Schmdt. By Proposition 3.1, there exists a measurable mapping \(\Gamma^{0}(\cdot):C([0,T],H)\to C([0,T],C_{\delta}(\mathbb{R}^{d}))\) such that \(Y^{h}=\Gamma^{0}(\int_{0}^{\cdot}h(s)\mathrm{d}s)\) for \(h\in L^{2}([0,T],L^{2}(\Theta,\vartheta))\). Let \[\mathcal{H}^{N}:=\{h\in L^{2}([0,T],L^{2}(\Theta,\vartheta)):\int_{0}^{T}\|h( t,\cdot)\|_{\vartheta}^{2}\mathrm{d}t\leq N\},\] and \[\tilde{\mathcal{H}}^{N}:=\{h:h\ \text{is}\ L^{2}(\Theta,\vartheta)\text{-valued}\ \mathcal{F}_{t}\text{-predictable process such that}\ h(\omega)\in\mathcal{H}^{N},\ \mathbb{P}\text{-}a.s\}.\] Throughout this paper, \(\mathcal{H}^{N}\) is endowed with the weak topology on \(L^{2}([0,T],L^{2}(\Theta,\vartheta))\) and it is a polish space. Notice that the \(L^{2}(\Theta,\vartheta)-\)cylindrical Wiener process \(W_{t},t\geq 0\) is now a \(H\)-valued Wiener process. By Proposition 2.4 and Proposition 2.6, for every \(\epsilon>0\), there exists a measurable mapping \(\Gamma^{\epsilon}:C([0,T],H)\to C([0,T],C_{\delta}(\mathbb{R}^{d}))\) such that \[\Gamma^{\epsilon}(W(\cdot))=X^{\epsilon},\] and applying the Girsanov theorem, for any \(N>0\) and \(h^{\epsilon}\in\tilde{\mathcal{H}}^{N}\), \[Y^{\epsilon}:=\Gamma^{\epsilon}(W(\cdot)+\frac{1}{\sqrt{\epsilon}}\int_{0}^{ \cdot}h^{\epsilon}(s)\mathrm{d}s)\] is the solution of the following SDE with interaction: \[\begin{cases}Y^{\epsilon}_{t}(x)=x+\int_{0}^{t}V(s,Y^{\epsilon}_{s}(x),\nu^{ \epsilon}_{s})\mathrm{d}s+\int_{0}^{t}\int_{\Theta}G(s,Y^{\epsilon}_{s}(x), \nu^{\epsilon}_{s},\theta)h^{\epsilon}(s,\theta)\vartheta(\mathrm{d}\theta) \mathrm{d}s\\ \qquad\qquad+\sqrt{\epsilon}\int_{0}^{t}\int_{\Theta}G(s,Y^{\epsilon}_{s}(x), \nu^{\epsilon}_{s})W(\mathrm{d}\theta,\mathrm{d}s),\\ Y^{\epsilon}_{0}(x)=x,\quad\nu^{\epsilon}_{s}=\mu_{0}\circ(Y^{\epsilon}_{s}( \cdot))^{-1},\quad x\in\mathbb{R}^{d},\quad s\in[0,T].\end{cases} \tag{3.3}\] According to Theorem 3.2 in [6], Theorem 3.2 is established once we have proved: **(LDP1)**: For every \(N<+\infty\), any family \(\{h^{\epsilon}\}_{\epsilon>0}\subset\tilde{\mathcal{H}}^{N}\) and any \(\gamma>0\), \[\lim_{\epsilon\to 0}\mathbb{P}(\|Y^{\epsilon}-Z^{\epsilon}\|_{\infty,T}>\gamma)=0,\] where \(Y^{\epsilon}=\Gamma^{\epsilon}(W(\cdot)+\frac{1}{\sqrt{\epsilon}}\int_{0}^{ \cdot}h^{\epsilon}(s)\mathrm{d}s)\) and \(Z^{\epsilon}=\Gamma^{0}(\int_{0}^{\cdot}h^{\epsilon}(s)\mathrm{d}s)\). * For every \(N<+\infty\) and any family \(\{h^{\epsilon}\}_{\epsilon>0}\subset\mathcal{H}^{N}\) that converges weakly to some element \(h\) in \(\mathcal{H}^{N}\) as \(\epsilon\to 0\), \[\lim_{\epsilon\to 0}\|\Gamma^{0}(\int_{0}^{\cdot}h^{\epsilon}(s)\mathrm{d}s)- \Gamma^{0}(\int_{0}^{\cdot}h(s)\mathrm{d}s)\|_{\infty,T}=0.\] (LDP1) will be verified in Proposition 4.1 in Section 4 and (LDP2) will be established in Proposition 5.1 in Section 5. 
\(\blacksquare\) For every \(\epsilon>0\), consider the conservative SPDE: \[\left\{\begin{aligned} &\mathrm{d}\mu_{t}^{\epsilon}=\frac{ \epsilon}{2}D^{2}:(A(t,\cdot,\mu_{t}^{\epsilon})\mu_{t}^{\epsilon})\mathrm{d}t -\nabla\cdot(V(t,\cdot,\mu_{t}^{\epsilon})\mu_{t}^{\epsilon})\mathrm{d}t\\ &\qquad-\sqrt{\epsilon}\int_{\Theta}\nabla\cdot(G(t,\cdot,\mu_{t }^{\epsilon},\theta)\mu_{t}^{\epsilon})W(\mathrm{d}\theta,\mathrm{d}t),\\ &\mu_{0}^{\epsilon}=\mu_{0}.\end{aligned}\right. \tag{3.4}\] By Proposition 2.5, there exists a unique superposition solution \(\mu_{t}^{\epsilon}=\mu_{0}\circ(X_{t}^{\epsilon}(\cdot))^{-1}\), \(t\in[0,T]\), to equation (3.4), where \(X^{\epsilon}\) is the unique solution to the corresponding SDE with interaction (3.1). Define the map \(F:C([0,T],C_{\delta}(\mathbb{R}^{d}))\to C([0,T],\mathcal{P}_{2}(\mathbb{R}^{d}))\) as \[F(u)(t)=\mu_{0}\circ(u(t,\cdot))^{-1},\ \ u\in C([0,T],C_{\delta}(\mathbb{R}^{d})).\] It is easy to see that the map is continuous. Then by Theorem 3.2 and the contraction principle of large deviaions (see Theorem 1.16 in [5]), we have the following result on the large deviations of conservative SPDEs. **Theorem 3.3**: _Let Assumptions 2.1 and 2.2 hold and \(\mu^{\epsilon}\) be the unique superposition solution to equation (3.4). Then the family of \(\{\mu^{\epsilon}\}_{\epsilon>0}\) satisfies a LDP on the space \(C([0,T],\mathcal{P}_{2}(\mathbb{R}^{d}))\) with rate function_ \[J(\phi):=\inf_{\{h\in L^{2}([0,T],L^{2}(\Theta,\theta)):\phi=F(Y^{h})\}}\{ \frac{1}{2}\int_{0}^{T}\|h(t,\cdot)\|_{\phi}^{2}\mathrm{d}t\},\ \forall\ \phi\in C([0,T],\mathcal{P}_{2}(\mathbb{R}^{d})),\] _with the convention \(\inf\{\emptyset\}=\infty\), here \(Y^{h}\) solves equation (3.2)._ ## 4 Verification of (LDP1) For any \(N<+\infty\) and any family \(\{h^{\epsilon}\}_{\epsilon>0}\subset\widetilde{\mathcal{H}}^{N}\), recall that \(Y^{\epsilon}=\Gamma^{\epsilon}(W(\cdot)+\frac{1}{\sqrt{\epsilon}}\int_{0}^{ \cdot}h^{\epsilon}(s)\mathrm{d}s)\) and \(Z^{\epsilon}=\Gamma^{0}(\int_{0}^{\cdot}h^{\epsilon}(s)\mathrm{d}s)\). In this section, we will prove the following result. **Proposition 4.1**: _Let Assumption 2.2 hold. Then for any \(\gamma>0\),_ \[\lim_{\epsilon\to 0}\mathbb{P}(\|Y^{\epsilon}-Z^{\epsilon}\|_{\infty,T}>\gamma)=0.\] Before proving Proposition 4.1, we prepare the following Lemmas. **Lemma 4.1**: _Let Assumption 2.2 hold. Then for any \(p\in[2,m]\), there exists a constant \(C_{L,p,T,N,d}\) such that for all \(x,y\in\mathbb{R}^{d}\)_ 1. \(\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)|^{p}]\leq C_ {L,p,T,N,d}(1+|x|^{p})\)_._ 2. \(\sup_{\epsilon>0}\mathbb{E}[\sup_{t\in[0,T]}|Z_{t}^{\epsilon}(x)|^{p}]\leq C_ {L,p,T,N,d}(1+|x|^{p})\)_._ 3. \(\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Y_{t}^{ \epsilon}(y)|^{p}]\leq C_{L,p,T,N,d}|x-y|^{p}\)_._ 4. \(\sup_{\epsilon>0}\mathbb{E}[\sup_{t\in[0,T]}|Z_{t}^{\epsilon}(x)-Z_{t}^{ \epsilon}(y)|^{p}]\leq C_{L,p,T,N,d}|x-y|^{p}\)_._ The proof of this lemma is a minor modification of the proof of Theorem 2.9 and Theorem 2.14 in [9]. We omit the details. **Lemma 4.2**: _Let Assumption 2.2 hold. For any \(x\in\mathbb{R}^{d}\), suppose that_ \[\lim_{\epsilon\to 0}\mathbb{E}[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Z_{t}^{ \epsilon}(x)|^{m}]=0. 
\tag{4.1}\] _Then for any \(M\in\mathbb{N}\)_ \[\lim_{\epsilon\to 0}\mathbb{E}[\sup_{t\in[0,T]}\sup_{|x|\leq M}|Y_{t}^{ \epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}]=0.\] **Proof:** Since \(\{x\in\mathbb{R}^{d}:|x|\leq M\}\) is compact, for every \(\kappa\in(0,1]\), there exist \(n_{\kappa}\in\mathbb{N}\) and \(\{x_{i}\}_{i\in[n_{\kappa}]}\subset\{x\in\mathbb{R}^{d}:|x|\leq M\}\) such that \(\{x\in\mathbb{R}^{d}:|x|\leq M\}\subset\cup_{i=1}^{n_{\kappa}}B(x_{i},\kappa)\). Then we have \[\mathbb{E}[\sup_{t\in[0,T]}\sup_{|x|\leq M}|Y_{t}^{\epsilon}(x)-Z_ {t}^{\epsilon}(x)|^{m}] \tag{4.2}\] \[= \mathbb{E}[\sup_{|x|\leq M}\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)- Z_{t}^{\epsilon}(x)|^{m}]\] \[\leq \mathbb{E}[\sup_{x\in\cup_{i=1}^{n_{\kappa}}B(x_{i},\kappa)}\sup _{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}]\] \[\leq \mathbb{E}[\sup_{i=1}^{n_{\kappa}}(\sup_{x\in B(x_{i},\kappa)} \sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m})]\] \[\leq 3^{m-1}\Big{\{}\mathbb{E}[\vee_{i=1}^{n_{\kappa}}(\sup_{x\in B(x _{i},\kappa)}\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Y_{t}^{\epsilon}(x_{i})|^{m})]\] \[+\mathbb{E}[\vee_{i=1}^{n_{\kappa}}(\sup_{t\in[0,T]}|Y_{t}^{ \epsilon}(x_{i})-Z_{t}^{\epsilon}(x_{i})|^{m})]\Big{\}}\] \[=: 3^{m-1}\Big{\{}I_{1}+I_{2}+I_{3}\Big{\}},\] where for each \(a\), \(b\in\mathbb{R}\), \(a\lor b:=\max\{a,b\}\). By \((iii)\) in Lemma 4.1, using an argument similar to the proof of (2.7), we obtain that with probability one, for all \(x\), \(y\in\mathcal{R}_{2(M+\kappa)}\) \[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Y_{t}^{\epsilon}(y)|\leq\frac{8d(1/2(M+ \kappa))^{1-\frac{2d}{m}}}{1-2d/m}(B^{\epsilon,2(M+\kappa)})^{\frac{1}{m}}|x- y|^{1-\frac{2d}{m}},\] where \[B^{\epsilon,2(M+\kappa)}=(\sqrt{d})^{m}2^{m-2d}(M+\kappa)^{m-2d}\int_{\mathcal{ R}_{2(M+\kappa)}}\int_{\mathcal{R}_{2(M+\kappa)}}\frac{\sup_{t\in[0,T]}|Y_{t}^{ \epsilon}(x)-Y_{t}^{\epsilon}(y)|^{m}}{|x-y|m}\mathrm{d}x\mathrm{d}y,\] and \(\sup_{\epsilon\in(0,1]}\mathbb{E}[B^{\epsilon,2(M+\kappa)}]\leq C_{L,m,T,N,d}(M+ \kappa)^{m}\). It follows that for any \(\epsilon\in(0,1]\), \[I_{1} \leq \frac{8^{m}d^{m}(1/2(M+\kappa))^{m-2d}}{(1-2d/m)^{m}}\cdot\kappa^{ m-2d}\cdot\mathbb{E}[B^{\epsilon,2(M+\kappa)}] \tag{4.3}\] \[\leq C_{L,m,T,N,d,M}\kappa^{m-2d}.\] Similarly, using \((iv)\) in Lemma 4.1, for any \(\epsilon\in(0,1]\), we have \[I_{3}\leq C_{L,m,T,N,d,M}\kappa^{m-2d} \tag{4.4}\] Obviously, \[I_{2}\leq\sum_{i=1}^{n_{\kappa}}\mathbb{E}[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x _{i})-Z_{t}^{\epsilon}(x_{i})|^{m}]. \tag{4.5}\] Substituting (4.3), (4.4) and (4.5) into (4.2), we get that for any \(\epsilon\in(0,1]\) \[\mathbb{E}[\sup_{t\in[0,T]}\sup_{|x|\leq M}|Y_{t}^{\epsilon}(x)-Z _{t}^{\epsilon}(x)|^{m}]\] \[\leq C_{L,m,T,N,d,M}\kappa^{m-2d}+3^{m-1}\sum_{i=1}^{n_{\kappa}} \mathbb{E}[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x_{i})-Z_{t}^{\epsilon}(x_{i})|^ {m}].\] Therefore, for any \(\alpha>0\), choose a sufficiently small \(\kappa_{0}\) such that \(C_{L,m,T,N,d,M}\kappa_{0}{}^{m-2d}<\frac{\alpha}{2}\) and then there exists \(\beta\in(0,1)\) such that for any \(\epsilon\in(0,\beta)\), \[\sum_{i=1}^{n_{\kappa_{0}}}\mathbb{E}[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x_{i}) -Z_{t}^{\epsilon}(x_{i})|^{m}]<\frac{\alpha}{2\cdot 3^{m-1}}.\] It follows from (4.6) that for any \(\epsilon\in(0,\beta)\), \[\mathbb{E}[\sup_{t\in[0,T]}\sup_{|x|\leq M}|Y_{t}^{\epsilon}(x)-Z_{t}^{ \epsilon}(x)|^{m}]<\alpha.\] The proof is complete. \(\blacksquare\) **Proof of Proposition 4.1**. 
**Proof:** By Chebyshev's inequality, it suffices to show that \[\lim_{\epsilon\to 0}\mathbb{E}[\|Y^{\epsilon}-Z^{\epsilon}\|_{\infty,T}^{m}]=0. \tag{4.7}\] For every \(M\in\mathbb{N}\), we have \[\mathbb{E}[\|Y^{\epsilon}-Z^{\epsilon}\|_{\infty,T}^{m}]\] \[= \mathbb{E}[\sup_{t\in[0,T]}\sup_{x\in\mathbb{R}^{d}}\frac{|Y_{t}^ {\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x|^{1+\delta})^{m}}]\] \[= \mathbb{E}[\sup_{t\in[0,T]}\Big{(}\sup_{|x|<M}\frac{|Y_{t}^{ \epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x|^{1+\delta})^{m}}\vee\sup_{|x| \geq M}\frac{|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x|^{1+\delta}) ^{m}}\Big{)}]\] \[\leq \mathbb{E}[\sup_{t\in[0,T]}\sup_{|x|<M}\frac{|Y_{t}^{\epsilon}(x )-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x|^{1+\delta})^{m}}\vee\sup_{t\in[0,T]}\sup_{| x|\geq M}\frac{|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x|^{1+\delta})^{m}}]\] \[\leq C_{L,m,T,N}\int_{0}^{T}\mathbb{E}[\sup_{r\in[0,s]}|Y_{r}^{\epsilon}(x)-Z_{ r}^{\epsilon}(x)|^{m}]\mathrm{d}s+C_{L,m,T,N}\int_{0}^{T}\mathbb{E}[\mathcal{W}_{2}^{m}( \nu_{s}^{\epsilon},\mu_{s}^{\epsilon})]\mathrm{d}s\] \[+\epsilon^{\frac{m}{2}}C_{L,m,T,d}\int_{0}^{T}\mathbb{E}[1+|Y_{s}^ {\epsilon}(x)|^{m}+\left(\int_{\mathbb{R}^{d}}|Y_{s}^{\epsilon}(y)|^{2}\mu_{0}( \mathrm{d}y)\right)^{\frac{m}{2}}]\mathrm{d}s\] \[\leq C_{L,m,T,N}\int_{0}^{T}\mathbb{E}[\sup_{r\in[0,s]}|Y_{r}^{ \epsilon}(x)-Z_{r}^{\epsilon}(x)|^{m}]\mathrm{d}s+C_{L,m,T,N}\int_{0}^{T} \mathbb{E}[\mathcal{W}_{2}^{m}(\nu_{s}^{\epsilon},\mu_{s}^{\epsilon})]\mathrm{ d}s\] \[+\epsilon^{\frac{m}{2}}C_{L,m,T,d}\int_{0}^{T}\mathbb{E}[1+|Y_{s} ^{\epsilon}(x)|^{m}+\left(\int_{\mathbb{R}^{d}}|Y_{s}^{\epsilon}(y)|^{2}\mu_{ 0}(\mathrm{d}y)\right)^{\frac{m}{2}}]\mathrm{d}s\] \[\leq C_{L,m,T,N}\int_{0}^{T}\mathbb{E}[\sup_{r\in[0,s]}|Y_{r}^{ \epsilon}(x)-Z_{r}^{\epsilon}(x)|^{m}]\mathrm{d}s+C_{L,m,T,N}\int_{0}^{T} \mathbb{E}[\mathcal{W}_{2}^{m}(\nu_{s}^{\epsilon},\mu_{s}^{\epsilon})]\mathrm{ d}s\] \[+\epsilon^{\frac{m}{2}}C_{L,m,T,d}\int_{0}^{T}\mathbb{E}[1+|Y_{s} ^{\epsilon}(x)|^{m}+\left(\int_{\mathbb{R}^{d}}|Y_{s}^{\epsilon}(y)|^{2}\mu_{0 }(\mathrm{d}y)\right)^{\frac{m}{2}}]\mathrm{d}s\] \[\leq C_{L,m,T,N}\int_{0}^{T}\mathbb{E}[\sup_{r\in[0,s]}|Y_{r}^{ \epsilon}(x)-Z_{r}^{\epsilon}(x)|^{m}]\mathrm{d}s+C_{L,m,T,N}\int_{0}^{T} \mathbb{E}[\mathcal{W}_{2}^{m}(\nu_{s}^{\epsilon},\mu_{s}^{\epsilon})]\mathrm{ d}s\] \[+\epsilon^{\frac{m}{2}}C_{L,m,T,d}\int_{0}^{T}\mathbb{E}[1+|Y_{s} ^{\epsilon}(x)|^{m}+\left(\int_{\mathbb{R}^{d}}|Y_{s}^{\epsilon}(y)|^{2}\mu_{ 0}(\mathrm{d}y)\right)^{\frac{m}{2}}]\mathrm{d}s\] \[\leq C_{L,m,T,N}\int_{0}^{T}\mathbb{E}[\sup_{r\in[0,s]}|Y_{r}^{ \epsilon}(x)-Z_{r}^{\epsilon}(x)|^{m}]\mathrm{d}s+C_{L,m,T,N}\int_{0}^{T} \mathbb{E}[\mathcal{W}_{2}^{m}(\nu_{s}^{\epsilon},\mu_{s}^{\epsilon})]\mathrm{ d}s\] \[+\epsilon^{\frac{m}{2}}C_{L,m,T,N,d}(1+|x|^{m}),\] here, for the last step we have used \((i)\) in Lemma 4.1 and the fact that \(\int_{\mathbb{R}^{d}}|y|^{m}\mu_{0}(\mathrm{d}y)<\infty\). By Gronwall's lemma, we get for each \(\epsilon\in(0,1]\) and \(x\in\mathbb{R}^{d}\) \[\mathbb{E}[\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}]\leq C_{L,m,T,N,d}(\int_{0}^{T}\mathbb{E}[\mathcal{W}_{2}^{m}(\nu_{s}^{ \epsilon},\mu_{s}^{\epsilon})]\mathrm{d}s+\epsilon^{\frac{m}{2}}(1+|x|^{m})). 
\tag{4.11}\] In order to bound the integral on the right side, using (4.11) we estimate \[\mathbb{E}[\sup_{s\in[0,T]}\mathcal{W}_{2}^{m}(\nu_{s}^{\epsilon}, \mu_{s}^{\epsilon})] \leq \mathbb{E}[\sup_{s\in[0,T]}(\int_{\mathbb{R}^{d}}|Y_{s}^{\epsilon} (x)-Z_{s}^{\epsilon}(x)|^{2}\mu_{0}(\mathrm{d}x))^{\frac{m}{2}}]\] \[\leq \int_{\mathbb{R}^{d}}\mathbb{E}[\sup_{s\in[0,T]}|Y_{s}^{\epsilon} (x)-Z_{s}^{\epsilon}(x)|^{m}]\mu_{0}(\mathrm{d}x)\] \[\leq C_{L,m,T,N,d}\int_{\mathbb{R}^{d}}(\int_{0}^{T}\mathbb{E}[ \mathcal{W}_{2}^{m}(\nu_{s}^{\epsilon},\mu_{s}^{\epsilon})]\mathrm{d}s+ \epsilon^{\frac{m}{2}}(1+|x|^{m}))\mu_{0}(\mathrm{d}x)\] \[\leq C_{L,m,T,N,d}\int_{0}^{T}\mathbb{E}[\sup_{r\in[0,s]}\mathcal{W} _{2}^{m}(\nu_{r}^{\epsilon},\mu_{r}^{\epsilon})]\mathrm{d}s+C_{L,m,T,N,d} \epsilon^{\frac{m}{2}}.\] Thus, Gronwall's lemma yields \[\mathbb{E}[\sup_{s\in[0,T]}\mathcal{W}_{2}^{m}(\nu_{s}^{\epsilon},\mu_{s}^{ \epsilon})]\leq C_{L,m,T,N,d}\epsilon^{\frac{m}{2}}.\] Combining with the inequality above with (4.11) and letting \(\epsilon\) tend \(0\), we get (4.10). Next, we prove that \[\lim_{M\to\infty}\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{t\in[0,T]}\sup_{|x| \geq M}\frac{|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x|^{1+\delta} )^{m}}]=0. \tag{4.12}\] By Lemma 4.1, using an argument similar to that proving Lemma 2.1 shows that there exists a constant \(C_{L,m,T,N,d}\) such that for every \(k\in\mathbb{N}\) \[\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{|x|\in[k,k+1)}\sup_{t\in[0,T]}|Y_{t}^ {\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}]\leq C_{L,m,T,N,d}(1+k)^{m}.\] It follows that \[\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{t\in[0,T]}\sup_{|x|\geq M }\frac{|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x|^{1+\delta})^{m}}] \tag{4.13}\] \[= \sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{k\geq M}\sup_{|x|\in[k,k+ 1)}\sup_{t\in[0,T]}\frac{|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m}}{(1+|x |^{1+\delta})^{m}}]\] \[\leq \sum_{k=M}^{\infty}\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{|x| \in[k,k+1)}\sup_{t\in[0,T]}\frac{|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m }}{(1+|x|^{1+\delta})^{m}}]\] \[\leq \sum_{k=M}^{\infty}\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_{|x| \in[k,k+1)}\sup_{t\in[0,T]}\frac{|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m }}{(1+|x|^{1+\delta})^{m}}]\] \[\leq \sum_{k=M}^{\infty}\frac{\sup_{\epsilon\in(0,1]}\mathbb{E}[\sup_ {|x|\in[k,k+1)}\sup_{t\in[0,T]}|Y_{t}^{\epsilon}(x)-Z_{t}^{\epsilon}(x)|^{m }]}{(1+k^{1+\delta})^{m}}\] \[\leq \sum_{k=M}^{\infty}\frac{C_{L,m,T,N,d}(1+k)^{m}}{(1+k^{1+\delta} )^{m}}\leq C_{L,m,T,N,d}\sum_{k=M}^{\infty}\frac{1}{k^{\delta m}}.\] Since \(\delta m>1\), letting \(M\to\infty\) in (4.13), we get (4.12). Now, we are in the position to complete the proof of the Proposition. By (4.9) and (4.12), letting \(\epsilon\to 0\), and then letting \(M\to\infty\) in (4.8), we obtain \[\lim_{\epsilon\to 0}\mathbb{E}[\|Y^{\epsilon}-Z^{\epsilon}\|_{\infty,T]}^{m}=0.\] The proof is complete. ## 5 Verification of (LDP2) In this section, we verify (LDP2), namely, we will prove the following result. **Proposition 5.1**: _Let Assumption 2.2 hold. 
For every \(N<+\infty\) and any family \(\{h^{\epsilon}\}_{\epsilon>0}\subset\mathcal{H}^{N}\) that converges weakly to some element \(h\) in \(\mathcal{H}^{N}\) as \(\epsilon\to 0\), \(X^{h^{\epsilon}}\) converges to \(X^{h}\) in the space \(C([0,T],C_{\delta}(\mathbb{R}^{d}))\), where \(X^{h}\) solve equation (3.2), and \(X^{h^{\epsilon}}\) solve equation (3.2) with \(h\) replaced by \(h^{\epsilon}\)._ **Proof:** For every \(\epsilon>0\) and \(M\in\mathbb{N}\), using an argument similar to that proving (4.8) shows that \[\sup_{t\in[0,T]}\sup_{x\in\mathbb{R}^{d}}\frac{|X^{h^{\epsilon}}_{ t}(x)-X^{h}_{t}(x)|}{1+|x|^{1+\delta}} \tag{5.1}\] \[\leq \sup_{t\in[0,T]}\sup_{|x|\geq M}\frac{|X^{h^{\epsilon}}_{t}(x)-X^ {h}_{t}(x)|}{1+|x|^{1+\delta}}\vee\sup_{t\in[0,T]}\sup_{|x|<M}\frac{|X^{h^{ \epsilon}}_{t}(x)-X^{h}_{t}(x)|}{1+|x|^{1+\delta}}\] \[\leq \sup_{\epsilon>0}\sup_{t\in[0,T]}\sup_{|x|\geq M}\frac{|X^{h^{ \epsilon}}_{t}(x)-X^{h}_{t}(x)|}{1+|x|^{1+\delta}}\vee\sup_{t\in[0,T]}\sup_{|x |<M}|X^{h^{\epsilon}}_{t}(x)-X^{h}_{t}(x)|\] \[=: I^{M}_{1}\lor I^{M,\epsilon}_{2}.\] We first verify that \[\lim_{M\to\infty}I^{M}_{1}=0. \tag{5.2}\] By \((ii)\) in Lemma 4.1, we have \[I^{M}_{1} \leq \sup_{|x|\geq M}\frac{C_{L,T,N}(1+|x|)}{1+|x|^{1+\delta}} \tag{5.3}\] \[\leq \frac{C_{L,T,N}}{M^{\delta}}.\] So letting \(M\to\infty\), we obtain (5.2). Next, we verify that for every \(M\in\mathbb{N}\), \[\lim_{\epsilon\to 0}I^{M,\epsilon}_{2}=0. \tag{5.4}\] By \((iv)\) in Lemma 4.1 and using a similar argument as in the proof of Lemma 4.2, we only need to show that for every \(x\in\mathbb{R}^{d}\), \[\lim_{\epsilon\to 0}\sup_{t\in[0,T]}|X^{h^{\epsilon}}_{t}(x)-X^{h}_{t}(x)|=0. \tag{5.5}\] For every \(\epsilon>0\) and \(x\in\mathbb{R}^{d}\), by Assumption 2.2, Holder inequality and the fact that \(\{h^{\epsilon}\}_{\epsilon>0}\subset\mathcal{H}^{N}\), we have \[\sup_{t\in[0,T]}|X^{h^{\epsilon}}_{t}(x)-X^{h}_{t}(x)|^{2}\] \[\leq 3\Big{\{}(\int_{0}^{T}|V(s,X^{h^{\epsilon}}_{s}(x),\mu^{h^{ \epsilon}}_{s})-V(s,X^{h}_{s}(x),\mu^{h}_{s})|\mathrm{d}s)^{2}\] \[+(\int_{0}^{T}|\int_{\Theta}(G(s,X^{h^{\epsilon}}_{s}(x),\mu^{h^{ \epsilon}}_{s},\theta)-G(s,X^{h}_{s}(x),\mu^{h}_{s},\theta))\cdot h^{\epsilon} (\theta,s)\vartheta(\mathrm{d}\theta)|\mathrm{d}s)^{2}\] \[\sup_{t\in[0,T]}|X^{h^{\epsilon}}_{t}(x)-X^{h}_{t}(x)|^{2}\leq C_{L,T,N}(\int_{ \mathbb{R}^{d}}\sup_{t\in[0,T]}F^{2}_{\epsilon}(t,x)\mu_{0}(\mathrm{d}x)+\sup_{t \in[0,T]}F^{2}_{\epsilon}(t,x)).\] Hence, by Gronwall's lemma \[\sup_{t\in[0,T]}\mathcal{W}^{2}_{2}(\mu^{h^{\epsilon}}_{t},\mu^{h}_{t})\leq C _{L,T,N}\int_{\mathbb{R}^{d}}\sup_{t\in[0,T]}F^{2}_{\epsilon}(t,x)\mu_{0}( \mathrm{d}x).\] Combing the inequality above with (5.7), we obtain \[\sup_{t\in[0,T]}|X^{h^{\epsilon}}_{t}(x)-X^{h}_{t}(x)|^{2}\leq C_{L,T,N}(\int_{ \mathbb{R}^{d}}\sup_{t\in[0,T]}F^{2}_{\epsilon}(t,x)\mu_{0}(\mathrm{d}x)+ \sup_{t\in[0,T]}F^{2}_{\epsilon}(t,x)). \tag{5.8}\] Now, we prove that the right-hand side of the above inequality tends to \(0\) as \(\epsilon\) tending to \(0\). By Remark 2.1, the definition of the Wasserstein distance and \((ii)\) in Lemma 4.1, we get that for every \(x\in\mathbb{R}^{d}\), \[\sup_{t\in[0,T]}\left\||G(t,X^{h}_{t}(x),\mu^{h}_{t},\cdot)|\right\|^{2}_{\theta} \leq C_{L}(1+\sup_{t\in[0,T]}|X^{h}_{t}(x)|^{2}+\sup_{t\in[0,T]} \mathcal{W}^{2}_{2}(\mu^{h}_{t},\delta_{0}))\] \[\leq C_{L,T,N}(1+|x|)(t-s)^{\frac{1}{2}},\] where we have used the facts that \(h^{\epsilon}\), \(h\in\mathcal{H}^{N}\) in the last step. 
In combination with the fact that \(h^{\epsilon}\) converges to \(h\) weakly in \(L^{2}([0,T],L^{2}(\Theta,\vartheta))\), it follows that for every \(x\in\mathbb{R}^{d}\), \[\lim_{\epsilon\to 0}\sup_{t\in[0,T]}F_{\epsilon}^{2}(t,x)=0. \tag{5.9}\] Moreover, by the dominated convergence theorem, \[\lim_{\epsilon\to 0}\int_{\mathbb{R}^{d}}\sup_{t\in[0,T]}F_{\epsilon}^{2}(t,x)\mu_{0}(\mathrm{d}x)=0. \tag{5.10}\] Therefore, by (5.9) and (5.10), letting \(\epsilon\to 0\) in (5.8), we get (5.5). Finally, combining (5.2) and (5.4), letting \(\epsilon\to 0\), and then letting \(M\to\infty\) in (5.1), we obtain \[\lim_{\epsilon\to 0}\sup_{t\in[0,T]}\sup_{x\in\mathbb{R}^{d}}\frac{|X_{t}^{h^{\epsilon}}(x)-X_{t}^{h}(x)|}{1+|x|^{1+\delta}}=0,\] completing the proof.
2302.11034
Counterfeit Chip Detection using Scattering Parameter Analysis
The increase in the number of counterfeit and recycled microelectronic chips in recent years has created significant security and safety concerns in various applications. Hence, detecting such counterfeit chips in electronic systems is critical before deployment in the field. Unfortunately, the conventional verification tools using physical inspection and side-channel methods are costly, unscalable, error-prone, and often incompatible with legacy systems. This paper introduces a generic non-invasive and low-cost counterfeit chip detection based on characterizing the impedance of the system's power delivery network (PDN). Our method relies on the fact that the impedance of the counterfeit and recycled chips differs from the genuine ones. To sense such impedance variations confidently, we deploy scattering parameters, frequently used for impedance characterization of RF/microwave circuits. Our proposed approach can directly be applied to soldered chips on the system's PCB and does not require any modifications on the legacy systems. To validate our claims, we perform extensive measurements on genuine and aged samples from two families of STMicroelectronics chips to assess the effectiveness of the proposed approach.
Maryam Saadat Safa, Tahoura Mosavirik, Shahin Tajik
2023-02-21T22:26:18Z
http://arxiv.org/abs/2302.11034v1
# Counterfeit Chip Detection using Scattering Parameter Analysis ###### Abstract The increase in the number of counterfeit and recycled microelectronic chips in recent years has created significant security and safety concerns in various applications. Hence, detecting such counterfeit chips in electronic systems is critical before deployment in the field. Unfortunately, the conventional verification tools using physical inspection and side-channel methods are costly, unscalable, error-prone, and often incompatible with legacy systems. This paper introduces a generic non-invasive and low-cost counterfeit chip detection based on characterizing the impedance of the system's power delivery network (PDN). Our method relies on the fact that the impedance of the counterfeit and recycled chips differs from the genuine ones. To sense such impedance variations confidently, we deploy scattering parameters, frequently used for impedance characterization of RF/microwave circuits. Our proposed approach can directly be applied to soldered chips on the system's PCB and does not require any modifications on the legacy systems. To validate our claims, we perform extensive measurements on genuine and aged samples from two families of STMicroelectronics chips to assess the effectiveness of the proposed approach. Counterfeit Electronics, Power Delivery Network, Scattering Parameters, Impedance Characterization ## I Introduction The globalized semiconductor supply chain, developed over the last three decades, has enabled innovation and kept costs low for consumers. However, there are single points of failure that could disrupt the supply chain [1]. Such disruptions lead to chip shortage for different applications, and consequently, create a surplus of fraudsters and fake parts in the market. The introduction of counterfeit, recycled, or aged components into the supply chain could lead to quality and performance degradation of electronic systems [2, 3]. F-Secure reported a real-world example of such incidents [4], where counterfeit Cisco routers have been added to the market and sold to customers. Counterfeit Xilinx FPGAs, which had ended up on a module used by Boeing for a new U.S. Navy reconnaissance aircraft, is another example of such incidents [3]. Several techniques have been proposed in the literature to detect counterfeit and recycled chips in a system. We can categorize these techniques into two classes, namely package/die inspection and behavior analysis. Infrared (IR) imaging inspection [5, 6], X-ray tomography of [7], and microwave reflectometry [8] are examples of package/die inspections. Unfortunately, there are several drawbacks and challenges with these verification techniques. The main shortcoming of inspection techniques is that they cannot verify the electrical functionality of the chip under test and, hence, might leave the counterfeit chips undetected. Another disadvantage of these inspection methodologies is the lack of scalability due to their customized setup. On the other hand, behavior analysis of counterfeit and recycled chips using side-channel analysis (SCA) methods is easier to perform in practice. The primary assumption in SCA methods is that the counterfeit and recycled chips reveal distinguished behavior in terms of power consumption, EM emanations, and timing, and hence, performing SCA should detect such variations. 
However, the proposed SCA methods for IC counterfeit detection (e.g., [10, 11]) require the execution of test software or activation of pre-designed test circuits to generate traces. Such requirements might not be compatible with legacy systems. Besides, SCA methods usually deliver inaccurate and noisy traces for comparisons. As a result, advanced signal processing and machine learning tools are needed to confidently detect counterfeit chips. To mitigate the shortcomings of SCA techniques, the verifier can rely directly on measuring the root cause of such side-channel signal variations, i.e., the electrical impedance, rather than performing firmware-dependent side-channel measurements. The research question here would be how to sense changes in the impedance of a single chip's die on a populated system's board in a non-invasive fashion without a customized and expensive setup. **Our Contribution:** This paper presents a non-invasive and low-cost counterfeit chip detection using scattering parameters, which are standard RF and microwave tools for impedance characterization. Our approach is based on the fact that the impedance of the chip's die differs in recycled and counterfeit samples compared to the genuine ones. In our proposed method, we inject sine wave signals into the power distribution network (PDN) of the system and measure their reflection, which reveals the changes in the impedance. The impedance of the system's PDN over frequency is affected by various parts of the system [9, 12], from PCB to chip level. Therefore, by finding the relevant frequency bands, _we can actively probe the impedance of the die in a non-invasive manner_. We remove the need for using a customized setup and demonstrate how a common portable Vector Network Analyzer (VNA) can precisely detect counterfeit chips without requiring any operator expertise and expensive equipment. Furthermore, our measurement setup does not require any protected environment or complex system to capture the reflection profile, thus reducing the cost and complexity of the measurements. To validate our claims, we perform aging on several ST microcontrollers to emulate counterfeit and recycled chips and show that our proposed method can detect these samples with high confidence. ## II Background ### _Power Delivery Network (PDN)_ The power delivery network (PDN) of an electronic system consists of components and interconnects from the voltage regulator module (VRM) to the power rails on the chip. Fig. 1 (a) demonstrates the equivalent RLC circuit of the PDN model in a typical electronic board [13]. The PDN functions as a connection between VRM and load circuits of the IC and includes both off-chip (e.g., bulk capacitors, PCB routing, multilayered ceramic capacitors (MLCCs), spreading, and BGA vias) and on-chip (e.g., package and die materials, bonding wires) components [13]. According to Fig. 1 (b), the impedance profile of the PDN has a frequency-dependent behavior. Each component of the PCB has a specific-frequency contribution to the overall impedance profile of the PDN. In [9, 12], it was demonstrated that the characterization of PDN's impedance in the frequency domain enables the detection of PCB-level tamper events. Naturally, it is conceivable that any changes in the impedance of the die should also have an impact on the PDN's impedance. ### _Scattering Parameters_ To characterize the impedance of the PDN, scattering (S) parameters are deployed. 
A complex electronic board can be modeled as a one or multiple-port network, (e.g., see in Fig. 1 (c)). S-parameters are spectrally measured over the frequency domain to obtain the reflection/transmission properties of the circuit to the applied sine wave voltages/currents. In Fig. 1 (c), \(V_{i}^{+}\) and \(V_{i}^{-}\) (\(i=1,2\)) are forward and backward voltage waves through/from the circuit, respectively. In frequency domain analysis, these sine waves are represented by frequency, amplitude, and phase. In our case, we leverage the amplitude response in the frequency domain to accurately sense changes in the impedance of the chip. To measure the transmitted and/or reflected power of a signal going into and coming back from the PDN at different frequency points, one can employ a VNA. The impedance profile of the chip can be easily derived from the reflected signal from the PDN. Eq. 1 represents the relationship between the device under test (DUT) impedance \(Z_{DUT}\) and the reflection coefficient \(S_{11}\): \[Z_{DUT}=Z_{0}\frac{1+S_{11}}{1-S_{11}}, \tag{1}\] where \(Z_{0}\) is the characteristic impedance of the connecting cables to the VNA. We only use \(S_{11}\) in our proposed approach as the VNA can directly measure it. However, based on Eq. 1, it is clear that the reflection coefficient is another representation of the impedance. ### _Impact of Aging on the Impedance_ In this subsection, we elaborate on how the aging process impacts the transistors' physical parameters, and consequently, the chip's impedance. Supply voltage, temperature, stress time, and workload are the key contributors to the aging process leading to the device's performance degradation. The effects causing parameter drifts are negative bias temperature instability (NBTI), hot-carrier injection (HCI), time-dependent dielectric breakdown (TDDB), and electromigration (EM) [14], see Fig. 2. Among these parameters, NBTI and HCI are the dominant factors [14]. HCI is the process that the carriers gain high kinetic energy by an electric field to overcome the potential barrier of the \(Si/SiO_{2}\) interface and leave the channel. Those carriers could damage the gate oxide layer and get trapped in the oxide shown in Fig. 2[14]. NBTI, on the other hand, originates from broken Si-H bonds at the interface between the substrate and the gate oxide. The NBTI effect increases effective series resistance (ESR) and MOSFETs channel "ON resistance" and decreases capacitance [15]. As a result, the recycled devices show a deviated impedance compared to genuine samples. Fig. 1: (a) The equivalent circuit of the PDN of an electronic board [9]. (b) The amplitude of the impedance profile of an electronic board over frequency. (c) Scattering parameters in a two-port network. ## III Methodology ### _Threat Model_ We make the following assumptions in our threat model. We assume that the adversary has replaced a genuine chip on a PCB with a counterfeit or recycled one. The goal of the attacker could be illegal profit or weakening the reliability and lifetime of the system. We assume that the verifier has access to golden samples for obtaining golden S-parameter signatures. No control over parts of the design or specific internal support test circuitry for verification is required. Finally, we assume that the verifier can stop the clock signal and halt the chip in a specific state for frequency response measurements. Otherwise, the impedance of the chip could fluctuate during operation [17]. 
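Before describing how the signatures are constructed, the following minimal sketch shows how a measured reflection trace can be converted into the DUT impedance via Eq. (1) of Sect. II-B. This is an illustrative example rather than the measurement code used here; the array names, the placeholder sweep, and the 50 Ω characteristic impedance are assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' code): convert a measured reflection
# trace S11(f) into the DUT impedance Z_DUT(f) via Eq. (1) of Sect. II-B.
# 'freq_hz' and 's11' are assumed to come from a VNA sweep; Z0 = 50 ohm is the
# usual characteristic impedance of the test cables (an assumption here).

def s11_to_impedance(s11: np.ndarray, z0: float = 50.0) -> np.ndarray:
    """Z_DUT = Z0 * (1 + S11) / (1 - S11), applied point-wise over frequency."""
    s11 = np.asarray(s11, dtype=complex)
    return z0 * (1.0 + s11) / (1.0 - s11)

# Dummy sweep of 5000 equally spaced points within 1 MHz - 1 GHz.
freq_hz = np.linspace(1e6, 1e9, 5000)
s11 = 0.3 * np.exp(-1j * 2 * np.pi * freq_hz * 1e-9)   # placeholder data
z_dut = s11_to_impedance(s11)
print(np.abs(z_dut[:3]))  # |Z_DUT| at the first few frequency points
```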
### _Scattering Parameters as Signatures_ As mentioned in Sect. II-C, the recycled and counterfeit chips have a deviated impedance compared to genuine samples, and this change would result in the scattering parameters variation of the PDN. The off-chip components usually affect the PDN impedance up to a few tens of MHz, and the impedance profile at higher frequencies is dominated by the on-chip PDN [18] (see Fig. 1 (b)). Consequently, the impact of counterfeit chips on the frequency should be observable at higher frequencies. To sense the changes in the PDN impedance, we deploy only the backward-scattered response \(S_{11}\) (See Sect. II-B). Scattering parameters are complex values containing both amplitude and phase; however, to reduce the measurement complexity, we only utilize the amplitude of the reflection scattering parameter (\(|S_{11}|=\left|V_{1}^{-}/V_{1}^{+}\right|\)) of the PDN to be able to conduct single-port reflection measurements, practically. Fig. 3 shows the reflection-based hardware signature extraction method proposed here. As discussed in the previous subsection, we assume that the verifier has access to golden samples to obtain the golden signature. One of the main challenges in obtaining the golden signature is the noise caused by the environment and measurement uncertainties, as well as the manufacturing process variation between genuine samples. To mitigate the first type of noise, we needed to integrate our measurement several times at different ambient noise levels and take an average of the responses to increase the signal-to-noise ratio (SNR) and detection confidence. Process variation, on the other hand, is addressed by taking an average of the \(|S_{11}|\) signatures of all genuine samples at each frequency point and considering it as the reference signature. To verify the genuinity of a chip, we are taking the following steps. First, we measure the scattering parameter (\(|S_{11}|\)) for all genuine chips and take the average of \(|S_{11}|\) in three trials to ensure measurement repeatability and reduce any differences caused by environmental noise. This mean value \(\mu\) provides us with the golden signature. We also calculate the standard deviation \(\sigma\) of the measurements for different genuine samples to capture the impact of process variation. Second, we measure \(|S_{11}|\) of the suspicious chip and then compare it with the mean and response of the golden samples. If the \(|S_{11}|\) deviates from the mean, and its amplitude shows a higher deviation than \(6\sigma\), we consider the sample not genuine. ## IV Experimental Setup ### _Device under Test (DUT)_ For our experiments, we chose the NewAE CW308 UFO boards [19] consisting of base and target boards. The direct access to the PDN on CW308 boards through an SMA connector was the primary reason for the selection of these kits. The target boards are detachable and can be easily mounted on the baseboards. This feature makes it convenient to mount different target boards from the same family on the baseboard and conduct different experiments. We utilized the STM32F target boards [20] that include STMicroelectronics STM32F series of ARM Cortex devices for our experiments. More Fig. 4: The experimental setup for reflection response measurement. Fig. 3: Reflection-based hardware signature extraction. Fig. 2: Cross-section view of a MOSFET with different aging mechanisms [16]. 
precisely, we used NAE-CW308T-STM32F3 (12 samples) and NAE-CW308T-STM32F4HWC (10 samples) target boards containing STM32F303RCT7 and STM32F415RGT6 microcontrollers. The nominal voltage of these chips is 3.3 V. In all accelerated aging experiments, we used the internal clock of the microcontrollers. ### _Measurement Setup_ We utilized Mini-circuits eVNA-63+, which is a portable VNA capable of operating within 300 kHz - 6 GHz bandwidth. The VNA consists of an internal capacitor to filter out the DC voltage, and thus, no external bias tee is required. We used shielded precision test cables CBL-2FT-SMNM+, which has a male SMA connector on the side of the DUT, and thus, enables the direct connection to the boards without any additional adaptors. We precisely calibrated the VNA until the SMA connection plane on the baseboards, using open-short-load (OSL) calibration, which is the standard calibration for the one-port reflection/impedance measurements. We conducted \(|S_{11}|\) measurements within 1 MHz - 1 GHz bandwidth. We set the number of frequency samples to 5000 equally-spaced points to ensure the maximum spectral resolution. We configured the VNA to carry out the measurements with 10 kHz IF bandwidth and 5 dBm output power level. ## V Results ### _Genuine Chips Characterization_ Our first experiment investigates the signature consistency between identical boards (genuine) from the same family. To this end, we performed \(|S_{11}|\) measurements on the 3.3 V voltage rails of the 12 genuine STM32-F3 and 10 STM32-F4HWC samples. It should be noted that in all measurements, the chips were powered on, but no instructions were being executed on the chip. We performed each measurement ten times for each sample and took the average signature of the measured signatures at each frequency sample within 1 MHz - 1 GHz to reduce the environmental variation and noise effects on the experiments. Fig. 5 and 6 show the \(|S_{11}|\) signatures of twelve STM32-F3 and ten STM32-F4HWC samples when the chips are powered-off. The \(|S_{11}|\) results of these genuine samples confirm the signature consistency between them. The slight difference between these signatures results from the chip's manufacturing process variation. In the next experiment, we aim at investigating the effect of the chip's die on the impedance profile. We performed two \(|S_{11}|\) measurements on one of our genuine chips in powered-on and off states. Fig. 7 and 8 depict the reflection profile for the genuine STM32-F3 and STM32-F4HWC Fig. 5: The amplitude of the reflection response for 12 genuine STM32-F3 samples (powered-off) over 1 MHz - 1 GHz bandwidth. Fig. 8: The reflection profile of the genuine STM32-F4HWC sample for powered-on and powered-off cases within 1 MHz - 1 GHz band. Fig. 6: The amplitude of the reflection response for 10 genuine STM32-F4HWC samples (powered-off) over 1 MHz - 1 GHz bandwidth. Fig. 7: The reflection profile of the genuine STM32-F3 sample for powered-on and powered-off cases within 1 MHz to 1 GHz band. samples over the band of 1 MHz - 1 GHz, respectively. One interesting observation is that in specific frequency bands, such as 50 MHz - 120 MHz for STM32-F3 and 70 MHz - 130 MHz for STM32-F4HWC, \(|S_{11}|\) signature has been changed significantly due to the effect of powering up the chip. The main reason for such a change is the addition of the ON-transistors' impedance to the PDN circuit [21]. Therefore, we expect to observe the impact of aging on the PDN's impedance profile in these frequency bands as well. 
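As a companion to the signature-extraction procedure of Sect. III-B, the following minimal sketch illustrates the golden-signature averaging and the 6\(\sigma\) decision rule applied to such \(|S_{11}|\) traces. The array names and placeholder data are illustrative assumptions, not the scripts used for the measurements reported here.

```python
import numpy as np

# Illustrative sketch of the golden-signature comparison in Sect. III-B.
# Each row of 'golden' is the averaged |S11| trace of one genuine sample over
# the 5000 frequency points; 'suspect' is the averaged trace of the chip under
# test. In practice the comparison can be restricted to the chip-dominated
# frequency bands identified in the powered-on/off measurements.

def is_counterfeit(golden: np.ndarray, suspect: np.ndarray, k: float = 6.0) -> bool:
    mu = golden.mean(axis=0)       # golden signature: mean over genuine chips
    sigma = golden.std(axis=0)     # spread attributed to process variation
    # Flag the chip if its trace leaves the mu +/- k*sigma band at any frequency.
    return bool(np.any(np.abs(suspect - mu) > k * sigma))

golden = np.random.normal(-10.0, 0.05, size=(12, 5000))   # placeholder data
suspect = golden[0] + 0.5                                  # placeholder deviation
print(is_counterfeit(golden, suspect))
```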
### _Accelerated Aging_ Next, we performed accelerated aging experiments on the target chips to replicate the effect of recycling on the chip. Accelerated aging includes applying high-level stress conditions, including high-level voltage or temperature (higher than the nominal voltage or temperature), for a short period of time to accelerate the damage due to aging. To perform the aging test, the voltage supplied to the STM32F samples was 5.5 V, which is 2.2 V higher than the nominal operating voltage (3.3 V) of the DUTs. We let the target devices go through the aging process for 216 hours at room temperature. As mentioned in Sect. II, the chip's workload is a crucial parameter in aging. Working under a heavy workload accelerates aging deterioration because a larger area of the chip's circuit is involved. During the aging process, a firmware, containing AES-256 encryption instructions was running on the microcontrollers, and the encrypted output was XORed by four random numbers in four consecutive commands. Finally, the results of XOR operations were monitored by LEDs on the CW308 baseboards for health monitoring of the chips under aging. The voltage was reset to 3.3 V for \(|S_{11}|\) measurements. It should be noted that loading a firmware into the microcontrollers could alter the chip's impedance as well [17]. To eliminate this effect, the chip is subjected to the reference firmware after aging so that any difference in impedance or S-parameter is only due to the aging effect. To reduce the impact of thermal noise, we recorded all measurements under the nominal operating conditions with an hour thermal stabilization period after the aging test. It is worth mentioning that during the aging test, the baseboards were not aged as we directly supplied the voltage to the target boards. Therefore, if a change in the S-parameter of the chip is detected, we can confidently attribute this change to the aging effect. Finally, we measured the aged samples' \(|S_{11}|\) signatures to evaluate the aging repercussion on \(|S_{11}|\) signatures. ### _Counterfeit Chip Detection_ In this section, we deploy the measured \(|S_{11}|\) responses for detecting counterfeit (aged) chips. In the first scenario, where we replicated the aging effect on one chip from each STM32F family, we assessed the efficacy of our proposed detection method to detect these aged samples. We performed \(|S_{11}|\) measurement on the powered-on aged samples ten times and took the average of the measurement responses at each frequency sample. Thereafter, the average \(|S_{11}|\) profile of the genuine chips is compared to the signature of the aged chip. For a better illustration of the differences between genuine and aged chips, the mean and six times standard deviation of \(|S_{11}|\) values for genuine STM32-F3 and STM32-F4HWC chips and their corresponding aged chip responses are given in Fig. 9 and 10. The shaded area (\(6\sigma\)) represents the impact of process variation and encloses the area wherein all genuine chip signatures are expected to occur. According to Fig. 9 and 10, the chip aging has an impact on the portion of the spectrum where we detected the chip's contribution in powered-on and powered-off measurements (cf. Fig. 7 and 8). The zoomed-in view graphs in Fig. 9 and Fig. 10 show how much the aged chip signatures deviate from the mean of the genuine chip's signatures. 
Since the aged chips' responses do not overlap with the shaded blue areas in particular frequency bands (e.g., zoomed-in graphs), we can conclude that the deviations do not result from process variation in these bands This observation confirms that the chip's impedance changes due to the aging effect. Furthermore, the deviation due to aging increases as the aging process time increases. In the second scenario, to assess the effectiveness of the proposed framework, we deployed the proposed method to characterize a damaged chip from the STM32-F3 family. We prepared such a chip by exposing it to a high voltage level (6 V), leading to its breakdown. We measured the \(|S_{11}|\) parameter of the damaged chip and compared it with the average \(|S_{11}|\) signatures of the genuine samples. This effect is visible in Fig. 11, where the damaged chip's signature deviates from the Fig. 10: The mean and \(3\sigma\) of \(|S_{11}|\) response for the genuine STM32-F4HWC samples and \(|S_{11}|\) response for the aged chip (powered-on state). The right-side figure shows the zoomed-in view of the bandwidth (70 MHz - 130 MHz) with a high deviation from the mean graph. Fig. 9: The mean and \(3\sigma\) (from each side of the mean) of \(|S_{11}|\) response for the genuine STM32-F3 samples and \(|S_{11}|\) response for the aged chip (powered-on state). The right-side figure shows the zoomed-in view of the bandwidth (60 MHz - 120 MHz) with a high deviation from the mean graph. mean of the genuine signatures. However, this attack affects a larger bandwidth and mostly lower frequencies compared to the aging effect shown in the zoomed-in view of Fig. 9. A possible explanation could be when the chip is damaged, a larger area of the chip has been exposed to degradation. The effect of the larger parts is observed in lower frequencies, as the wavelength is inversely proportional to the frequency. A slight resonance frequency shift is also observed at a higher frequency, which is due to the change in capacitance and inductance of the PDN, as the resonance frequency is related to the capacitance and inductance of the PDN circuit. Overall, our experimental results show the proposed method is capable of confidently detecting counterfeit and recycled samples at distinct bands. ## VI Conclusion We presented a non-invasive and low-cost counterfeit chip detection approach based on characterizing the reflection frequency response of the chip's PDN. Unlike the conventional SCA detection approaches, this method makes it possible to detect counterfeiting without requiring running any test firmware or integrating additional test circuitry onto the chip, which makes it compatible with legacy systems. We measured the \(|S_{11}|\) values of the chips' PDN using a portable VNA. Afterward, \(|S_{11}|\) signatures of aged and genuine DUTs were compared to demonstrate that chip counterfeiting and recycling attacks would impact the reflection profile of the PDN at certain frequency bands. Based on our acquired results, the counterfeit chips could be detected confidently, without applying any signal processing or machine learning algorithms. The proposed technique can be considered complementary to the _ScatterVerif_ framework, presented in [12], enabling the verification of the entire system, from PCB to chip-level, in a unified manner. ## Acknowledgment This work was sponsored by Electric Power Research Institute (EPRI).
2308.04304
The Model Inversion Eavesdropping Attack in Semantic Communication Systems
In recent years, semantic communication has been a popular research topic for its superiority in communication efficiency. As semantic communication relies on deep learning to extract meaning from raw messages, it is vulnerable to attacks targeting deep learning models. In this paper, we introduce the model inversion eavesdropping attack (MIEA) to reveal the risk of privacy leaks in the semantic communication system. In MIEA, the attacker first eavesdrops the signal being transmitted by the semantic communication system and then performs model inversion attack to reconstruct the raw message, where both the white-box and black-box settings are considered. Evaluation results show that MIEA can successfully reconstruct the raw message with good quality under different channel conditions. We then propose a defense method based on random permutation and substitution to defend against MIEA in order to achieve secure semantic communication. Our experimental results demonstrate the effectiveness of the proposed defense method in preventing MIEA.
Yuhao Chen, Qianqian Yang, Zhiguo Shi, Jiming Chen
2023-08-08T14:50:05Z
http://arxiv.org/abs/2308.04304v1
# The Model Inversion Eavesdropping Attack in Semantic Communication Systems ###### Abstract In recent years, semantic communication has been a popular research topic for its superiority in communication efficiency. As semantic communication relies on deep learning to extract meaning from raw messages, it is vulnerable to attacks targeting deep learning models. In this paper, we introduce the model inversion eavesdropping attack (MIEA) to reveal the risk of privacy leaks in the semantic communication system. In MIEA, the attacker first eavesdrops the signal being transmitted by the semantic communication system and then performs model inversion attack to reconstruct the raw message, where both the white-box and black-box settings are considered. Evaluation results show that MIEA can successfully reconstruct the raw message with good quality under different channel conditions. We then propose a defense method based on random permutation and substitution to defend against MIEA in order to achieve secure semantic communication. Our experimental results demonstrate the effectiveness of the proposed defense method in preventing MIEA. ## I Introduction Recently, semantic communication has been widely believed to be one of the core technologies for the sixth generation (6G) of wireless networks because of its high communication efficiency [1]. Compared with the current research on communication which focuses on transmitting mapped bit sequences of the raw message [2, 3, 4], semantic communication systems transmit compacted semantic features. Existing literature in semantic communication mainly exploits the deep learning (DL) techniques to extract the semantic features from the raw message. For instance, Han _et al_. [5] proposed to extract the text-related features from the speech signal as the semantic features and remove the redundant content. On the receiver's side, the semantic features can be reconstructed by a deep learning model into the original message or directly applied for downstream tasks such as image classification and speech recognition. Although many works have been proposed for semantic communication considering different aspects, few studies have taken into account the security problems [6, 7, 8]. Tung _et al_. [6] proposed to encrypt the transmitted signal in semantic communication, but the encryption algorithm incurs a large computation overhead. Security is crucial in semantic communication for two main reasons. Firstly, semantic communication is more prone to privacy leakage compared to traditional communication. In traditional communication systems, the bit sequences being transmitted contain redundant bits to ensure reliable transmission, which can be used to provide a certain level of privacy protection. However, the semantic communication systems transmit compact and more semantic-related symbols which may reveal more private information. Secondly, deep-learning-based semantic communication may be vulnerable to attacks targeting DL models. Extensive studies have been conducted on attacks on the DL model, a review of which can be referred to [9]. If the semantic features being transmitted are eavesdropped by a malicious attacker, the attacker can reconstruct the raw message by utilizing the DL-based attack techniques. The attacker can also add perturbation to the transmitted data, causing the semantic communication system to make incorrect decisions on downstream tasks. For example, Sagduyu _et al_. 
[7] proposed a multi-domain evasion attack to cause the semantic communication system to make incorrect classifications, which is achieved by introducing noises to input images or the semantic features. Du _et al_. [8] proposed a semantic data poisoning attack, which causes the receiver to receive irrelevant messages from the transmitter. For example, the receiver wants to receive an image with a pear but gets an image with an apple instead. This attack is performed by minimizing the difference between the semantic features of the targeted message and the irrelevant message. In this paper, we consider the security issue in semantic communication systems and introduce the model inversion eavesdropping attack (MIEA) for semantic communication, where an attacker eavesdrops the transmitted symbols and attempts to reconstruct the original message from them by inverting the DL model used at the transmitter. We perform MIEA under both the white-box and the black-box settings. The attacker has knowledge of the DL model in the white-box setting while not in the black-box setting. To defend against MIEA, we also propose a defense method based on random permutation and substitution. Evaluations demonstrate that the MIEA attack works under different channel conditions, i.e., different values of the signal-to-noise ratio (SNR), which reveals the risk of privacy leaks in semantic transmission. Numerical results also validate the effectiveness of our proposed defense method. This paper is organized as follows: Section II introduces the basic ideas of semantic communications. In section III, we present the proposed MIEA under both the white-box and black-box setting, and propose our defense method. In section IV, we evaluate the effectiveness of the proposed MIEA and the proposed defense method. Section V concludes our work. ## II Fundamentals In this section, we provide the fundamentals of semantic communication and the eavesdropping performed by the attacker. We consider a semantic communication system which transmits images over wireless channels. As shown in Fig. 1, the transmitter of the semantic communication system consists of a semantic encoder and a channel encoder. The semantic encoder extracts the semantic features \(\mathbf{z}\) from the raw image \(\mathbf{x}\), while the channel encoder maps \(\mathbf{z}\) into the transmitted features \(\mathbf{y}_{\rm f}\in\mathbb{R}^{h\times w\times c}\), where \(h,w,c\) denote the height, the width and the channel of the transmitted features respectively. Before transmission, \(\mathbf{y}_{\rm f}\) is reshaped into the transmitted symbols \(\mathbf{y}\in\mathbb{R}^{N\times 2}\), where \(N=\frac{h\times w\times c}{2}\) and the two channels are the real parts and imaginary parts of the signal to be transmitted, respectively. \(\mathbf{y}\) is then transmitted over a wireless channel, which we denote as the main channel to distinguish from the channel used by the attacker. The received signal \(\hat{\mathbf{y}}\) at the receiver side can be characterized by \[\hat{\mathbf{y}}=\mathbf{H}_{\rm m}\mathbf{y}+\mathbf{n}_{\rm m}, \tag{1}\] where \(\mathbf{H}_{\rm m}\) is a matrix which reflects the main channel effect such as multi-path propagation, fading and interference, while \(\mathbf{n}_{\rm m}\) is a zero-mean additive white Gaussian noise. The receiver of the semantic communication system consists of a channel decoder and a semantic decoder. The receiver first reshapes \(\hat{\mathbf{y}}\) back to the transmitted features \(\hat{\mathbf{y}}_{\rm f}\). 
Then the channel decoder maps \(\hat{\mathbf{y}}_{\rm f}\) back to the semantic features \(\hat{\mathbf{z}}\). The semantic decoder then reconstructs the image \(\hat{\mathbf{x}}\) from \(\hat{\mathbf{z}}\). We jointly train the semantic encoder, channel encoder, semantic decoder and channel decoder using the following loss function: \[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{x}-\hat{\mathbf{x}}\|^{2}+\lambda T( \hat{\mathbf{x}}), \tag{2}\] where \(N\) is the number of the training data batch and \[T(\hat{\mathbf{x}})=\sum_{i,j}(|\hat{\mathbf{x}}_{i+1,j}-\hat{\mathbf{x}}_{i,j}|^{2}+|\hat {\mathbf{x}}_{i,j+1}-\hat{\mathbf{x}}_{i,j}|^{2})^{\beta/2}. \tag{3}\] The first term in (2) computes the mean square error (MSE) between \(\mathbf{x}\) and \(\hat{\mathbf{x}}\). The second term \(T(\hat{\mathbf{x}})\) is the total variation [10] that measures the smoothness of the reconstructed image \(\hat{\mathbf{x}}\), where \(\hat{\mathbf{x}}_{i,j}\) denotes the pixel value at the position \((i,j)\) and \(\beta\) controls the smoothness of the image, with larger \(\beta\) being more piecewise-smooth. The hyper-parameter \(\lambda\) balances the two terms. In our work, we choose \(\beta=1\) and \(\lambda=1\). Next, we introduce how an attacker eavesdrops the transmitted signal under the semantic communication system. We follow the naming convention in the security research, with Alice, Bob and Eve representing the sender, receiver and attacker respectively. Suppose Alice wants to send an image to Bob. As shown in the lower part of Fig. 1, since the transmitted symbols \(\mathbf{y}\) is transmitted over the wireless channel, it can easily be captured by any unauthorized receiver. Assume that there exists an attacker Eve who intercepts \(\mathbf{y}\) and attempts to reconstruct the raw image from it. The wireless channel between Alice and Eve is referred to as the eavesdropper channel [11]. The received signal at Eve is given by \[\hat{\mathbf{y}}_{\rm e}=\mathbf{H}_{\rm e}\mathbf{y}+\mathbf{n}_{\rm e}, \tag{4}\] Similarly, \(\mathbf{H}_{\rm e}(\cdot)\) represents the eavesdropper channel matrix and \(\mathbf{n}_{\rm e}\) is a zero-mean additive white Gaussian noise. After eavesdropping \(\hat{\mathbf{y}}_{\rm e}\), Eve is able to reconstruct the image, denoted as \(\hat{\mathbf{x}}_{\rm e}\), which will be detailed section III. Note that to avoid confusion, we use the **received image** to denote \(\hat{\mathbf{x}}\) received by Bob and the **eavesdropped image** to denote \(\hat{\mathbf{x}}_{\rm e}\) eavesdropped by Eve. ## III The Proposed MIEA and its Defense In this section, we first elaborate the idea of MIEA. To reconstruct \(\hat{\mathbf{x}}_{\rm e}\), Eve performs MIA [12] using either the white-box attack or the black-box attack, which depends on the knowledge of the semantic encoder and channel encoder that Eve has. Then we propose an effective defense method that defends against both types of attack. ### _White-box Attack_ In the white-box attack, Eve knows the parameters and structure of the semantic encoder and channel encoder. For example, the semantic communication system is publicly available or available through purchase, such as JPEG. In this case, Eve can directly use the semantic encoder and channel encoder to reconstruct the image. We denote the two encoders as a single function \(f(\cdot)\) which maps a given image \(\mathbf{x}\) to the transmitted symbol \(\mathbf{y}\), that is, \(\mathbf{y}=f(\mathbf{x})\). 
The reconstructed image \(\hat{\mathbf{x}}_{\rm e}\) can be obtained by solving the following optimization problem: \[\hat{\mathbf{x}}_{\rm e}=\operatorname*{arg\,min}_{\mathbf{x}}\|\hat{\mathbf{y}}_{\rm e}-f (\mathbf{x})\|^{2}+\lambda T(\mathbf{x}). \tag{5}\] The first term in (5) is the MSE between \(\hat{\mathbf{y}}_{\rm e}\) and \(f(\mathbf{x})\), while the second term \(T(\mathbf{x})\) is the total variation defined in (3) that guarantees the smoothness of \(\mathbf{x}\). Similar to (2), we set \(\beta=1\) and \(\lambda=1\) here. The optimization problem (5) can be solved by gradient descent, which iteratively updates the input \(\mathbf{x}\) to minimize the objective. Fig. 1: Illustration of the semantic communication with MIEA. ### _Black-box Attack_ In the black-box attack, Eve lacks knowledge of the parameters and structures of both encoders. In this case, Eve uses an inverse network of the two encoders, denoted as \(f^{-1}(\cdot)\), to invert \(\hat{\mathbf{y}}_{\rm e}\) back to \(\hat{\mathbf{x}}_{\rm e}\). Specifically, \(f^{-1}\) takes \(\hat{\mathbf{y}}_{\rm e}\) as input and outputs an estimate of \(\mathbf{x}\), i.e., \(f^{-1}(\hat{\mathbf{y}}_{\rm e})\approx\mathbf{x}\). To train \(f^{-1}\), we assume that Eve can feed a batch of samples \(\mathbb{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{m}\}\) into the encoder and capture the corresponding transmitted symbols \(\mathbb{Y}_{\rm e}=\{\hat{\mathbf{y}}_{1},\hat{\mathbf{y}}_{2},...,\hat{\mathbf{y}}_{m}\}\), where \(m\) is the number of samples. Eve then trains \(f^{-1}\) using \(\mathbb{Y}_{\rm e}\) as the input and \(\mathbb{X}\) as the ground truth output. We use the \(l_{2}\) norm as the loss function and employ stochastic gradient descent to train the inverse network: \[f^{-1}=\operatorname*{arg\,min}_{g}\frac{1}{m}\sum_{i=1}^{m}\|g(\hat{\mathbf{y}}_ {i})-\mathbf{x}_{i}\|^{2}, \tag{6}\] where we use \(g\) to represent the inverse network being optimized. Once the inverse network is trained, Eve is able to reconstruct the image from any newly eavesdropped signal. ### _Defense Method against MIEA_ To defend against MIA in deep learning, researchers have proposed various techniques such as differential privacy (DP) [12] or attacker-aware training [13]. The DP technique involves adding Laplacian noise to the raw image, while attacker-aware training adds a regularization term to the loss function during training, which maximizes the MSE between the reconstructed image and the raw image. Both methods prevent Eve from reconstructing high-quality images while maintaining the performance of downstream tasks. However, in the transmission task considered in this paper, both Bob and Eve attempt to reconstruct the image from \(\hat{\mathbf{y}}\) and \(\hat{\mathbf{y}}_{\rm e}\) respectively. Furthermore, we assume that both the main channel and eavesdropper channel are AWGN channels, i.e., \(\mathbf{H}_{\rm m}=\mathbf{H}_{\rm e}\). Then the difference between \(\hat{\mathbf{y}}\) and \(\hat{\mathbf{y}}_{\rm e}\) is \(|\mathbf{n}_{\rm m}-\mathbf{n}_{\rm e}|\), which is relatively small. Therefore, if Eve fails to reconstruct high-quality images under the defense methods above, Bob will also fail, which contradicts the goal of the transmission task. To prevent Eve from reconstructing images while maintaining the image quality received by Bob, an intuitive solution is to encrypt \(\mathbf{y}\) using standard cryptographic algorithms, but this would incur a large computational overhead. 
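Returning to the two attacks, a minimal PyTorch-style sketch of the white-box inversion (5) is given below. The encoder \(f\), the image shape, and the number of iterations are placeholders; the Adam optimizer, the learning rate of \(10^{-3}\), and the all-zero initialization follow the evaluation setup of Section IV, and a small constant is added inside the total-variation term to keep its gradient finite for \(\beta=1\).

```python
import torch

def white_box_invert(f, y_eve, image_shape, steps=2000, lr=1e-3, lam=1.0, beta=1.0):
    """Gradient-descent inversion of (5): optimize the input image so that the
    known encoder f maps it close to the eavesdropped signal y_eve."""
    x = torch.zeros(image_shape, requires_grad=True)      # all-zero initialization
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fit = torch.sum((f(x) - y_eve) ** 2)               # match eavesdropped symbols
        dh = x[..., 1:, :-1] - x[..., :-1, :-1]
        dw = x[..., :-1, 1:] - x[..., :-1, :-1]
        tv = torch.sum((dh ** 2 + dw ** 2 + 1e-12) ** (beta / 2))
        (fit + lam * tv).backward()
        opt.step()
    return x.detach()
```

The black-box attack in (6) replaces this per-signal optimization with a one-off training of the inverse network \(f^{-1}\) on the pairs \((\hat{\mathbf{y}}_{i},\mathbf{x}_{i})\), after which reconstructing an image is a single forward pass.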
To avoid the computational overhead of encryption, we propose a defense method based on random permutation and substitution of the transmitted features \(\mathbf{y}_{\rm f}\), which can simultaneously defend against both types of attack. _Random Permutation and Substitution_. We first introduce the random permutation operation. For the transmitted features \(\mathbf{y}_{\rm f}\), we randomly permute the tensor along the first dimension \(h\). We define the permutation scheme \(P\) as a random permutation of the array \([0,1,...,h-1]\), where each element represents a (0-indexed) row index of \(\mathbf{y}_{\rm f}\). After applying \(P\), we obtain the permuted transmitted features \(\mathbf{y}_{\rm f}^{\rm p}\). Then we perform the substitution operation on \(\mathbf{y}_{\rm f}^{\rm p}\) by swapping some of the \(y_{i}\) with \(y_{i}^{\prime}\) from another feature tensor \(\mathbf{y}_{\rm f}^{\prime}\), where the index \(i\) in \(y_{i}\) and \(y_{i}^{\prime}\) indicates that the substitution is performed at the same position in both feature tensors. Note that we drop the subscript \(\mathsf{f}\) in \(y_{i}\) to keep the notation simple. If Alice sends several images to Bob, \(\mathbf{y}_{\rm f}^{\prime}\) can be the transmitted features of the next image. If Alice only sends one image to Bob, \(\mathbf{y}_{\rm f}^{\prime}\) can be a random-noise tensor, which will also be sent to Bob. Similarly, we define the substitution scheme \(S\) as a sub-array of the array \([0,1,...,h-1]\), where every index \(i\) in \(S\) indicates a row \(y_{i}\) that should be substituted. After substitution, \(\mathbf{y}_{\rm f}^{\rm p}\) becomes \(\mathbf{y}_{\rm f}^{\rm s}\). We give an example to explain the idea of random permutation and substitution. As shown in Fig. 2, assume that \(h=5\); then \(\mathbf{y}_{\rm f}=[y_{0},y_{1},...,y_{4}]\), where \(y_{i}\in\mathbb{R}^{1\times w\times c}\), \(i=0,1,...,4\). We also assume that there is another feature tensor \(\mathbf{y}_{\rm f}^{\prime}=[y_{0}^{\prime},y_{1}^{\prime},...,y_{4}^{\prime}]\). If \(P=[4,2,1,0,3]\) for \(\mathbf{y}_{\rm f}\) and \(P^{\prime}=[0,3,4,1,2]\) for \(\mathbf{y}_{\rm f}^{\prime}\), then \(\mathbf{y}_{\rm f}^{\rm p}=[y_{4},y_{2},y_{1},y_{0},y_{3}]\) and \(\mathbf{y}_{\rm f}^{\prime\rm p}=[y_{0}^{\prime},y_{3}^{\prime},y_{4}^{\prime},y_{1}^{\prime},y_{2}^{\prime}]\). If \(\mathbf{y}_{\rm f}\) and \(\mathbf{y}_{\rm f}^{\prime}\) share a common \(S=[0,2]\), then \(\mathbf{y}_{\rm f}^{\rm s}=[y_{0}^{\prime},y_{2},y_{4}^{\prime},y_{0},y_{3}]\) and \(\mathbf{y}_{\rm f}^{\prime\rm s}=[y_{4},y_{3}^{\prime},y_{1},y_{1}^{\prime},y_{2}^{\prime}]\). Suppose that Bob knows \(P\) and \(S\) before transmission. After reshaping \(\hat{\mathbf{y}}\), Bob will first recover \(\hat{\mathbf{y}}_{\rm f}\) from \(\hat{\mathbf{y}}_{\rm f}^{\rm s}\) and then feed \(\hat{\mathbf{y}}_{\rm f}\) into the channel decoder. However, since Eve does not know \(P\) and \(S\), Eve will try to reconstruct \(\hat{\mathbf{x}}_{\rm e}\) directly from \(\hat{\mathbf{y}}_{\rm f}^{\rm s}\), which is shown to be infeasible in Section IV-C. Moreover, since different \(P\) and \(S\) are used for each transmission, it would be difficult for Eve to determine the correct \(P\) and \(S\) for each eavesdropped signal. A small sketch of the permutation and substitution step and of Bob's inverse operation is given below. 
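The following numpy sketch is purely illustrative: the tensor sizes and schemes mirror the toy example above, and in practice the permuted and substituted features are additionally perturbed by the channel noise before Bob inverts the operation.

```python
import numpy as np

def permute_and_substitute(y_f, y_f_other, P, P_other, S):
    """Alice's side: permute the rows (first dimension h) of both feature
    tensors according to P and P', then swap the rows indexed by S."""
    y_p, y_p_other = y_f[P], y_f_other[P_other]
    y_s, y_s_other = y_p.copy(), y_p_other.copy()
    y_s[S], y_s_other[S] = y_p_other[S], y_p[S]
    return y_s, y_s_other

def recover(y_s, y_s_other, P, P_other, S):
    """Bob's side: undo the substitution, then invert the permutations."""
    y_p, y_p_other = y_s.copy(), y_s_other.copy()
    y_p[S], y_p_other[S] = y_s_other[S], y_s[S]
    return y_p[np.argsort(P)], y_p_other[np.argsort(P_other)]

# toy check with h = 5 rows, mirroring the example in the text
h, w, c = 5, 4, 2
y_f, y_f_other = np.random.randn(h, w, c), np.random.randn(h, w, c)
P, P_other = np.array([4, 2, 1, 0, 3]), np.array([0, 3, 4, 1, 2])
S = np.array([0, 2])
y_s, y_s_other = permute_and_substitute(y_f, y_f_other, P, P_other, S)
rec, rec_other = recover(y_s, y_s_other, P, P_other, S)
assert np.allclose(rec, y_f) and np.allclose(rec_other, y_f_other)
```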
_Scheme Selection_. The proposed method relies on Bob knowing \(P\) and \(S\) before transmission. Hence it is necessary for Alice and Bob to share two common sets of schemes, namely the permutation scheme set \(\mathbb{P}\) and the substitution scheme set \(\mathbb{S}\), which are kept secret from Eve. Both sets comprise multiple schemes that can be employed for permutation and substitution. Before each image transmission, Alice generates a value pair \(V=\{p,s\}\), which is used to select the corresponding \(P\) and \(S\) from \(\mathbb{P}\) and \(\mathbb{S}\), respectively. \(V\) is first encrypted using a secret key \(K\) shared between Alice and Bob. The encrypted \(V\) is then transmitted to Bob and must not be corrupted by the main channel; hence error-free techniques such as error correction and retransmission are utilized to transmit \(V\). After receiving \(\hat{\mathbf{y}}_{\rm f}^{\rm s}\) and the encrypted \(V\), Bob decrypts \(V\) using \(K\), determines \(P\) and \(S\), and recovers \(\hat{\mathbf{y}}_{\rm f}\). Fig. 2: An example of the proposed defense method. ## IV Evaluations In this section, we present our experiments to evaluate MIEA and the proposed defense method. We first evaluate MIEA's performance for the white-box and black-box attacks. We then show the effectiveness of the proposed defense method. We use the semantic communication model DeepJSCC [14] to transmit images from the CelebA dataset [15], which we crop and resize to \(180\times 180\) in the evaluation. The semantic encoder and decoder each have four convolutional layers, while the channel encoder and decoder have one convolutional layer. We assume both the main channel and eavesdropper channel to be AWGN channels and denote the channel condition as the combination of the main channel's SNR and the eavesdropper channel's SNR. Although the main channel is not considered in MIEA, we still perform the evaluation under different SNRs of the main channel, as we use different DeepJSCC models for each SNR value. For each evaluation, we consider the SNR of both channels to be 0dB, 10dB and 20dB, resulting in nine different channel conditions. ### _Evaluation Setup_ Before evaluating the performance of both attacks, we train the DeepJSCC model on the CelebA dataset using three different SNR values for the main channel (0dB, 10dB, and 20dB), resulting in three distinct DeepJSCC models. As stated in [14], the SNR value determines the standard deviation of \(\mathbf{n}_{\rm m}\) when the transmission power is normalized to 1. We train on the CelebA dataset with a batch size of 128, using Adam [16] as the optimizer with a learning rate of \(10^{-3}\). To measure the image quality, we use two metrics, i.e., the structural similarity index measure (SSIM) and the peak signal-to-noise ratio (PSNR) [14], where higher values of SSIM and PSNR indicate better quality. ### _Evaluation of MIEA_ We first evaluate MIEA for the two types of attack. For the white-box attack, where Eve reconstructs the image by minimizing (5), we employ Adam [16] as the optimizer with a learning rate of \(10^{-3}\) and initialize \(\hat{\mathbf{x}}_{\rm e}\) to an all-zero tensor. For the black-box attack, we use an inverse network \(f^{-1}(\cdot)\) consisting of an upsampling layer and two convolution layers. Then we train \(f^{-1}(\cdot)\) by solving the optimization problem in (6), where we choose the CelebA test dataset as \(\mathbb{X}\) and obtain its corresponding transmitted symbols \(\mathbb{Y}\). Similarly, we use Adam as the optimizer and set the learning rate to \(10^{-3}\). 
Fig. 3 shows the performance of MIEA on the DeepJSCC model under different channel conditions, with the SSIM and PSNR given below each image. The first two columns in Fig. 3 are baselines for comparison with the images eavesdropped by MIEA: the first column displays the original images \(\mathbf{x}\) transmitted by Alice, and the second column shows the images received by Bob under different SNRs of the main channel. As shown in the first two columns, increasing the SNR improves image quality, as indicated by higher average SSIM and PSNR values. Additionally, higher SNRs reveal more details in the images, such as the woman's hair. The remaining columns in Fig. 3 display the eavesdropped images obtained by MIEA. In the following evaluations in this paper, for each channel condition, we show the eavesdropped image obtained by the white-box attack on the left and the one obtained by the black-box attack on the right. Additionally, Table I lists the average SSIM and PSNR of the eavesdropped images of individuals selected from the CelebA training set. The first row in the table shows the quality of the images received by Bob, and for each channel condition, the values on the top show the quality of eavesdropped images obtained by the white-box attack, and the values on the bottom show the quality of the eavesdropped images obtained by the black-box attack. We note that we choose the CelebA training set for evaluation because the CelebA test set is used for training \(f^{-1}(\cdot)\) in the black-box attack. It can be seen from Fig. 3 and Table I that the quality of eavesdropped images improves as the SNR of the eavesdropper channel increases for a given SNR of the main channel. Moreover, for a given SNR of the eavesdropper channel, the quality of eavesdropped images is similar under different SNRs of the main channel. Fig. 3: Visualization of MIEA for the white-box attack and the black-box attack under different channel conditions. For each channel condition, the eavesdropped image by the white-box attack is displayed on the left and the one obtained by the black-box attack is on the right. It can also be observed that the SSIM and PSNR values in the black-box attack are generally larger than those in the white-box attack. This is because the black-box attack requires training \(f^{-1}(\cdot)\) before reconstructing any image from the eavesdropped signal, which needs many samples from \(\mathbb{X}\) and \(\mathbb{Y}\). In contrast, the white-box attack directly reconstructs the image from the eavesdropped signal without any training in advance. Although the SSIM and the PSNR of the eavesdropped images in both attacks are lower than those of the images received by Bob, the eavesdropped images are visually recognizable and their privacy is compromised, which confirms the effectiveness of MIEA and reveals the risk of privacy leaks in current semantic communication. ### _Evaluation of the Proposed Defense Method_ Next, we evaluate the proposed defense method by repeating the evaluation of MIEA in Section IV-B with the defense method applied to \(\mathbf{y}_{\rm f}\). Fig. 4 visualizes the eavesdropped images obtained by the two attack types after applying the proposed defense method, using the same individual as in Fig. 3. It can be observed that the eavesdropped images are visually unrecognizable, demonstrating the effectiveness of the proposed defense method in preventing Eve from eavesdropping on raw images. 
We can also see that the contour of the woman in the white-box attack is less obvious than that in the black-box attack, suggesting that the defense against the white-box attack is superior to that against the black-box attack. This is because Eve has no prior knowledge of the defense method when performing the white-box attack, whereas \(f^{-1}(\cdot)\) used in the black-box attack has learned some knowledge of the defense method from the training samples. In addition, we provide the average SSIM and PSNR of the eavesdropped images for both attacks in Table II. For a given SNR of the main channel, the SSIM and PSNR do not increase as the SNR of the eavesdropper channel increases because different \(P\) and \(S\) are used for different transmitted features. The average SSIM and PSNR of the eavesdropped images by the black-box attack are larger than those by the white-box attack, which is consistent with the observation from Fig. 3. Overall, the average SSIM and PSNR are relatively small, which indicates the effectiveness of the proposed defense method in preventing Eve from obtaining meaningful information from the eavesdropped signal. Next, we conduct an ablation study to further validate our proposed method. Fig. 5 and Table III demonstrate the eavesdropped images and the average SSIM and PSNR for both attacks when applying only the random permutation. As shown in Fig. 5, when only the random permutation is applied, the white-box attack can be effectively defended against, while the black-box attack can still reconstruct visually recognizable images for some \(P\) and \(S\). Moreover, the average SSIM and PSNR for the black-box attack in Table III are larger than those in Table II, which demonstrates that random permutation alone is insufficient for defending against MIEA. Fig. 4: Visualization of MIEA for both attacks under different channel conditions after applying the proposed method. Fig. 5: Visualization of MIEA for both attacks under different channel conditions after applying only the random permutation. Fig. 6 and Table IV show the corresponding results for both attacks when applying only the random substitution. For both attacks, most of the eavesdropped images are visually recognizable, indicating that the attacker can still obtain sensitive information from the transmitted symbols, even though some of the semantic features have been substituted. The average SSIM and PSNR in Table IV are larger than those in Table III, which means that random permutation is more effective than random substitution in defending against MIEA. From the ablation study, we can observe that the proposed defense method outperforms both the random-permutation-based and random-substitution-based defense methods, demonstrating that both permutation and substitution are essential for the effectiveness of the proposed defense method. ## V Conclusion In this paper, we propose MIEA to expose privacy risks in semantic communication. MIEA enables an attacker to eavesdrop on the transmitted symbols through an eavesdropper channel and reconstruct the raw message by inverting the DL model employed in the semantic communication system. We consider MIEA under both the white-box and black-box attacks and propose a novel defense method based on random permutation and substitution to defend against both types of attack. In our evaluation, we first examine MIEA for both attacks under various channel conditions. 
We then conduct experiments and an ablation study to demonstrate the effectiveness of our proposed defense method.
2310.11225
Sparse grid approximation of stochastic parabolic PDEs: The Landau--Lifshitz--Gilbert equation
We show convergence rates for a sparse grid approximation of the distribution of solutions of the stochastic Landau-Lifshitz-Gilbert equation. Beyond being a frequently studied equation in engineering and physics, the stochastic Landau-Lifshitz-Gilbert equation poses many interesting challenges that do not appear simultaneously in previous works on uncertainty quantification: The equation is strongly non-linear, time-dependent, and has a non-convex side constraint. Moreover, the parametrization of the stochastic noise features countably many unbounded parameters and low regularity compared to other elliptic and parabolic problems studied in uncertainty quantification. We use a novel technique to establish uniform holomorphic regularity of the parameter-to-solution map based on a Gronwall-type estimate and the implicit function theorem. This method is very general and based on a set of abstract assumptions. Thus, it can be applied beyond the Landau-Lifshitz-Gilbert equation as well. We demonstrate numerically the feasibility of the sparse grid approximation and show a clear advantage of a multi-level sparse grid scheme.
Xin An, Josef Dick, Michael Feischl, Andrea Scaglioni, Thanh Tran
2023-10-17T12:57:28Z
http://arxiv.org/abs/2310.11225v2
# Sparse grid approximation of the stochastic Landau-Lifshitz-Gilbert equation ###### Abstract. We show convergence rates for a sparse grid approximation of the distribution of solutions of the stochastic Landau-Lifshitz-Gilbert equation. Beyond being a frequently studied equation in engineering and physics, the stochastic Landau-Lifshitz-Gilbert equation poses many interesting challenges that do not appear simultaneously in previous works on uncertainty quantification: The equation is strongly nonlinear, time-dependent, and has a non-convex side constraint. Moreover, the parametrization of the stochastic noise features countably many unbounded parameters and low regularity compared to other elliptic and parabolic problems studied in uncertainty quantification. We use a novel technique to establish uniform holomorphic regularity of the parameter-to-solution map based on a Gronwall-type estimate combined with previously known methods that use the implicit function theorem. We demonstrate numerically the feasibility of the stochastic collocation method and show a clear advantage of a multi-level stochastic collocation scheme for the stochastic Landau-Lifshitz-Gilbert equation. * Corresponding author, [email protected] ## 1. Introduction ### The stochastic Landau-Lifshitz-Gilbert equation The Landau-Lifshitz-Gilbert equation is a phenomenological model for the dynamic evolution of magnetization in ferromagnetic materials. In order to capture heat fluctuations of the magnetization, one considers a stochastic extension of the LLG equation driven by stochastic noise, see e.g., [10, 39] for some of the first works devoted to the modeling of magnetic materials under thermal agitation. Following these early works, great interest in the physics community led to extensive research, see e.g., [31, 37, 52, 9, 42] to name a few examples. The nonlinear nature of LLG combined with the stochastic noise attracted a lot of interest in numerical analysis: For approximating the deterministic version of LLG, weak convergence of the approximations has been known since at least 2008 (see, e.g., [2, 7]), while strong a priori convergence of uniform time stepping schemes that obey physical energy bounds was first proved recently in [28] and then extended to higher order in [1]. The latter two works build on the tangent plane idea first introduced in [2] in order to remove the nonlinear solver required in [7]. This is achieved by solving for the time derivative of the magnetization instead of the magnetization itself. To study the stochastic version of LLG (SLLG), [12, 13] formulate a rigorous definition of _weak martingale solution_ to the SLLG problem, prove existence by means of the Faedo-Galerkin method, and discuss regularity even with anisotropy in the effective field and for finite multi-dimensional noise in space. In [11, 14], the authors study the 1D (in space) SLLG problem, which has applications in the manufacturing of nanowires. They prove existence of weak martingale solutions for the problem for a larger class of coefficients compared to previous works in 3D. They prove pathwise existence and uniqueness of strong solutions, a large deviation principle and use it to analyze the transitions between equilibria. The space and time approximation of the SLLG problem was considered in [8]. The authors consider an implicit midpoint scheme that preserves the unit modulus constraint on the magnetization and satisfies relevant discrete energy estimates. 
Then they prove, by a compactness argument, that the method converges, up to extraction of a subsequence, weakly and almost surely to the exact solution. In the follow-up work [6], the scheme is applied to reproduce numerically relevant phenomena such as finite-time blow-up of the solution and thermally-activated switching. A different approach is followed in [34], where the authors propose to discretize SLLG in space and time by first applying the Doss-Sussmann transform [24, 54] to the SLLG problem to obtain a random coefficient LLG problem. They then discretize this problem using the tangent-plane scheme [2] and prove convergence (again in the sense of weak convergence of a subsequence), which in particular proves the well-posedness of the random coefficient LLG problem. A tangent plane scheme is also considered in [3], where the sample paths of the SPDE in Ito form are approximated and stability and convergence results are derived. For the approximation of multi-dimensional (finite) noise, [33] generalizes the approach based on the Doss-Sussmann transform. ### Uncertainty quantification for PDEs with random coefficients Dimension-independent approximation of PDEs with random coefficients was first proposed in [20], and the idea of using a holomorphic extension of the exact solution in order to obtain convergence rates for the parametric approximation goes back to [21]. The works [57] and [4] begin the mathematical study of collocation-type schemes for random PDEs. Several extensions and improvements of certain aspects of the theory can be found in, e.g., [48], which uses sparse grids to improve the dependence on the number of parametric dimensions, and [47], which employs _anisotropic_ sparse grids to achieve dimension-independent convergence (under tractability assumptions on the problem). In [46], the authors select the sparse grids with a profit-maximization principle, effectively recasting the sparse grid selection problem into a Knapsack problem. They also prove an error bound with explicit dependence on the number of approximated dimensions. In the present work, we use the same principle to build sparse grids and apply the framework to prove dimension-independent convergence. In [58], the authors extend the methodology developed in [4] and the related papers to a linear parabolic problem with random coefficients under the finite-dimensional noise assumption. They prove existence of a holomorphic extension and, based on the ideas in [4], show that this leads to convergence of stochastic collocation schemes for both space semi-discrete and fully discrete approximations. In [49], the authors study a linear parabolic problem with an uncertain diffusion coefficient under the finite-dimensional noise assumption. They prove existence of a holomorphic extension by extending the problem to complex parameters and verifying the Cauchy-Riemann equations. They study convergence of stochastic Galerkin and stochastic collocation approximation. In [30], the authors consider a coupled Navier-Stokes and heat equation problem with uncertainty and develop a heuristic adaptive sparse grid scheme based on [32]. In order to discretize the Wiener process in the stochastic LLG equation, one needs to deal with unbounded parameter spaces. This has been done in, e.g., [5], where the authors study the Poisson problem with lognormal diffusion and establish summability results for Hermite coefficients based on _local-in-space_ summability of the basis used to expand the logarithm of the diffusion. 
In [27], the authors approximate functions with this property by means of sparse grid interpolation built using global polynomials with Gauss-Hermite interpolation nodes. They prove algebraic and dimension-independent convergence rates. In the preprint [26], the authors study the regularity of a large class of problems depending on Gaussian random field inputs as well as the convergence of several numerical schemes. Several examples of PDEs with Gaussian random coefficients are given, e.g., elliptic and parabolic PDEs with lognormal diffusion. The regularity result implies estimates on the Hermite coefficients of the parameter-to-solution map. These, in turn, can be used to study the convergence of Smolyak-Hermite interpolation and quadrature among other numerical methods. Beyond linear problems, [18] deals with infinite-dimensional parametric problems with compact coefficient spaces, but goes beyond the setting of affine parametric dependence. The authors prove the existence of a holomorphic extension of the coefficient-to-solution map without extending the problem to the complex domain (as is usually done for the random Poisson problem). Rather, they employ the implicit function theorem. In [22], the authors use similar techniques in the setting of the stationary Navier-Stokes equation with random domain. ### New results for the stochastic LLG equation The present work gives a first efficient approximation of the probability distribution of the solution of the stochastic LLG equation. To that end, we will employ the Doss-Sussmann transform and discretize the resulting Wiener process via a Levy-Ciesielski expansion. This leads to a parameterized non-linear time-dependent PDE with an infinite-dimensional and unbounded parameter space. We derive the following main results: * The first rigorous convergence result for a nonlinear and time-dependent parametrized PDE with unbounded parameter space. Precisely, we show convergence of piecewise quadratic sparse grids for the stochastic LLG equation with order \(1/2\) and a dimension-dependent constant (see Theorem 25). The result assumes that the stochastic LLG equation has uniformly Holder regular (in time and space) solutions, which is the case for regular initial conditions that are sufficiently close to constant. Under some reasonable assumptions and simplifications of the stochastic input, we show dimension-independent convergence with order \(1/2\) (see Theorem 29). * The first result on uniform holomorphic regularity of the parameter-to-solution map for the Landau-Lifshitz-Gilbert equation. To the best knowledge of the authors, this is also the first uniform holomorphic regularity result for unbounded parameter spaces and strongly non-linear and time-dependent problems. * Improved convergence rate of a multi-level version of the stochastic collocation algorithm under natural assumptions on the underlying finite element method. To achieve the mentioned results, we have to overcome several challenges posed by the nonlinear nature of the problem: * Holomorphic parameter-to-solution map: This is well-understood for linear problems but turns out to be technically challenging for non-linear problems. While we apply the implicit function theorem as in [18], in our case the parameter space is not compact. To overcome this problem, we control the growth of the extension by means of a Gronwall-like estimate for small imaginary parts. The main challenge here is that there is no canonical complex version of LLG which supports holomorphy. 
The main reason for this is that any extension of the cross product is either not complex differentiable or loses orthogonality properties which normally ensure \(L^{\infty}\)-boundedness of solutions of the LLG equation. * Lack of parametric regularity: All mentioned works on uncertainty quantification require strong summability of the coefficients which arise in the expansion of the stochastic noise. Typically, \(\ell^{p}\)-summability with \(p<1\) is required. Even with the holomorphic regularity established, the present problem only provides summability in \(\ell^{p}\) for \(p>2\). We propose a simplification of the stochastic input which allows us to consider the problem in an \(L^{1}\)-setting in time. This increases the parametric regularity and results in dimension independent estimates. * Lack of sample path regularity: Regularity results for LLG are sparse even in the deterministic setting. We refer to [17, 44, 45, 16, 41, 43, 19] for partial results in 2D and 3D. Sample path regularity directly influences holomorphic regularity via the implicit function theorem. To that end, we rely on Holder space regularity results for the stochastic LLG equation (Theorem 2). ### Structure of the work We introduce the stochastic version of the LLG equation in Section 2, and transform it into a parametrized nonlinear and time-dependent PDE in Section 3 and Section 4. In Section 5, we prove that the parameter-to-solution map is holomorphic under the assumptions that sample paths of random coefficients and solutions are Holder continuous. In Section 6, we do the same for a simplified version of the problem obtained with additional modeling assumptions. This time, sample paths are assumed to be Lebesgue integrable in time. The sparsity properties of the parameter-to-solution map in the Holder setting are weaker than in the Lebesgue setting. This is reflected by the convergence of sparse grid interpolation discussed in Section 7. The results are confirmed by numerical experiments. The final section 8 derives the multi-level version of the stochastic collocation method and provides numerical tests. ## 2. The Stochastic Landau-Lifshitz-Gilbert equation We consider a bounded Lipschitz domain \(D\subset\mathbb{R}^{3}\) representing the magnetic material in the time interval \([0,T]\). By \(D_{T}\coloneqq[0,T]\times D\) we denote the space-time cylinder and by \(\partial_{n}\) the outward pointing normal derivative on \(\partial D\). Given \(\mathbf{M}_{0}:D\to\mathbb{S}^{2}\coloneqq\big{\{}\mathbf{x}\in\mathbb{R}^{3}\,:\,x_{ 1}^{2}+x_{2}^{2}+x_{3}^{2}=1\big{\}}\), \(\lambda>0\) (called the Gilbert damping parameter), the deterministic version of the problem (the _LLG equation_) reads: Find \(\mathbf{M}(t,\mathbf{x}):D_{T}\to\mathbb{S}^{2}\) such that \[\begin{cases}\partial_{t}\mathbf{M}=\lambda_{1}\mathbf{M}\times\Delta\mathbf{M}-\lambda_{ 2}\mathbf{M}\times(\mathbf{M}\times\Delta\mathbf{M})&\text{ in }D_{T},\\ \partial_{n}\mathbf{M}=0&\text{ on }\partial D\times[0,T],\\ \mathbf{M}(0)=\mathbf{M}_{0}&\text{ on }D,\end{cases} \tag{1}\] where \(\lambda_{1}=\frac{1}{1+\lambda^{2}}\), \(\lambda_{2}=\frac{\lambda}{1+\lambda^{2}}\). The solution has constant magnitude in space and time (this follows immediately from scalar multiplication of (1) with \(\mathbf{M}\)). 
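For completeness, the scalar-multiplication argument can be spelled out: taking the Euclidean inner product of (1) with \(\mathbf{M}\) gives \[\frac{1}{2}\,\partial_{t}|\mathbf{M}|^{2}=\mathbf{M}\cdot\partial_{t}\mathbf{M}=\lambda_{1}\,\mathbf{M}\cdot\left(\mathbf{M}\times\Delta\mathbf{M}\right)-\lambda_{2}\,\mathbf{M}\cdot\left(\mathbf{M}\times(\mathbf{M}\times\Delta\mathbf{M})\right)=0,\] since both cross products are orthogonal to \(\mathbf{M}\).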
This implies, assuming a normalized initial condition \(|\mathbf{M}_{0}|\equiv 1\) on \(D\), that \[|\mathbf{M}(t,\mathbf{x})|=1\qquad\text{for all }(t,\mathbf{x})\in D_{T}.\] In (1), the exchange term \(\Delta\mathbf{M}\) can be substituted by a more general _effective field_\(\mathbf{H}_{\mathrm{eff}}(\mathbf{M})\) containing \(\Delta\mathbf{M}\) and additional lower-order contributions modelling further physical effects like material anisotropy, magnetostatic energy, external magnetic fields or the more involved Dzyaloshinskii-Moriya interaction (DMI) (see e.g. [50, Section 1.2]). The effect of heat fluctuations on the system is described with a random model. Denote by \((\Omega,\mathcal{F},\mathbb{P})\) a probability space and let \(\mathrm{d}\mathbf{W}(\omega,t,\mathbf{x}):\Omega\times D_{T}\to\mathbb{R}^{3}\) be a suitable space-time noise (note that the exact form of this noise is still a subject of research and we will consider a simple one-dimensional model in the following). Consider the following formal equation for \(\mathbf{M}(\omega,t,\mathbf{x}):\Omega\times D_{T}\to\mathbb{S}^{2}\): \[\partial_{t}\mathbf{M}=\lambda_{1}\mathbf{M}\times(\Delta\mathbf{M}+\mathrm{d}\mathbf{W})- \lambda_{2}\mathbf{M}\times(\mathbf{M}\times\Delta\mathbf{M})\quad\text{in }D_{T},\mathbb{P}\text{-a.s.}\] with the same initial and boundary conditions as in (1). It is customary not to include white noise in the second term on the right-hand side. This is a widely accepted approximation because of the smallness of \(\lambda_{2}\) compared to \(\lambda_{1}\) (see, e.g., [12, page 3]). For simplicity, we additionally assume one-dimensional noise \(\mathbf{W}(\omega,t,\mathbf{x})=\mathbf{g}(\mathbf{x})W(\omega,t)\), where \(\mathbf{g}:D\to\mathbb{R}^{3}\) is given and \(W:\Omega\times[0,T]\to\mathbb{R}\) denotes a (scalar) Wiener process. The previous formal equation corresponds to the following _stochastic_ partial differential equation called the _stochastic LLG problem_: Find \(\mathbf{M}(\omega,t,\mathbf{x}):\Omega\times D_{T}\to\mathbb{S}^{2}\) such that \[\mathrm{d}\mathbf{M}=\left(\lambda_{1}\mathbf{M}\times\Delta\mathbf{M}-\lambda_{2}\mathbf{M} \times(\mathbf{M}\times\Delta\mathbf{M})\right)\mathrm{d}t+\left(\lambda_{1}\mathbf{M} \times\mathbf{g}\right)\circ\mathrm{d}W\quad\text{in }D_{T},\ \mathbb{P}\text{-a.s.} \tag{2}\] again with initial and boundary conditions as in (1). The symbol \(\circ\,\mathrm{d}W\) denotes the Stratonovich differential. We define a weak solution of this problem following [34]. 
**Definition 1**.: _A weak martingale solution of (2) is a tuple \(\left(\Omega,\mathcal{F},\left(\mathcal{F}_{t}\right)_{t\in[0,T]},\mathbb{P},W,\mathbf{M}\right)\) where_ * \(\left(\Omega,\mathcal{F},\left(\mathcal{F}_{t}\right)_{t\in[0,T]},\mathbb{P}\right)\) _is a filtered probability space,_ * \(W:\Omega\times[0,T]\to\mathbb{R}\) _is a scalar Wiener process adapted to_ \(\left(\mathcal{F}_{t}\right)_{t\in[0,T]}\)_, and_ * \(\mathbf{M}:\Omega\times[0,T]\to L^{2}(D)^{3}\) _is a progressively measurable stochastic process_ _such that the following properties hold:_ * \(\mathbf{M}(\omega,\cdot)\in C^{0}([0,T],H^{-1}(D))\) \(\mathbb{P}\)_-a.s.;_ * \(\mathbb{E}\left(\operatorname{esssup}_{t\in[0,T]}\left\|\nabla\mathbf{M}\right\|_{L^{2}(D)}^{2}\right)<\infty\)_;_ * \(|\mathbf{M}(\omega,t,\mathbf{x})|=1\) _for all_ \(t\in[0,T],\) _for a.e._ \(\mathbf{x}\in D\) _and_ \(\mathbb{P}\)_-a.s.;_ * _For all_ \(t\in[0,T]\) _and all_ \(\mathbf{\phi}\in C_{0}^{\infty}(D)^{3}\)_,_ \(\mathbb{P}\)_-a.s. there holds_ \[\left\langle\mathbf{M}(t),\mathbf{\phi}\right\rangle-\left\langle\mathbf{M}_{0},\mathbf{\phi}\right\rangle =-\lambda_{1}\int_{0}^{t}\left\langle\mathbf{M}\times\nabla\mathbf{M},\nabla\mathbf{\phi}\right\rangle\mathrm{d}s-\lambda_{2}\int_{0}^{t}\left\langle\mathbf{M}\times\nabla\mathbf{M},\nabla\left(\mathbf{M}\times\mathbf{\phi}\right)\right\rangle\mathrm{d}s\] \[+\int_{0}^{t}\left\langle\mathbf{M}\times\mathbf{g},\mathbf{\phi}\right\rangle\circ\mathrm{d}W(s),\] _where_ \(\left\langle\cdot,\cdot\right\rangle\) _denotes the_ \(L^{2}(D)^{3}\) _scalar product._ Existence of solutions to (2) in this sense was first established in [12], while uniqueness of weak solutions is still an open question. An alternative existence proof was given in [34]. Here the authors use the Doss-Sussmann transform to obtain a PDE with random coefficients instead of the stochastic differential. We will follow this approach in the next section. ## 3. Random LLG equation by Doss-Sussmann transform The goal of this section is to replace the stochastic differential in (2) with an appropriate random coefficient. While this was done in [34] for technical reasons, we are mainly interested in obtaining an equivalent problem that is more amenable to collocation-type approximation. Another advantage is (formally) gaining a full order of differentiability of the solution. Given \(\mathbf{g}:D\to\mathbb{R}^{3}\), \(s\in\mathbb{R}\) and \(\mathbf{v}:D\to\mathbb{R}^{3}\) with suitable regularity, consider the following operators: \[\begin{split}& G\mathbf{v}=\mathbf{v}\times\mathbf{g}\\ &\mathcal{C}\mathbf{v}=\mathbf{v}\times\Delta\mathbf{g}+2\nabla\mathbf{v}\times \nabla\mathbf{g}\\ & e^{sG}\mathbf{v}=\mathbf{v}+\sin(s)G\mathbf{v}+(1-\cos(s))G^{(2)}\mathbf{v}\\ &\mathcal{E}(s,\mathbf{v})=\sin(s)\mathcal{C}\mathbf{v}+(1-\cos(s))( \mathcal{C}G+G\mathcal{C})\mathbf{v}\\ &\hat{\mathcal{C}}(s,\mathbf{v})=e^{-sG}\mathcal{E}(s,\mathbf{v})= \mathcal{E}(s,\mathbf{v})-\sin(s)G\mathcal{E}(s,\mathbf{v})+(1-\cos(s))G^{(2)}\mathcal{E}(s,\mathbf{v}),\end{split} \tag{3}\] where we define \(\nabla\mathbf{v}\times\nabla\mathbf{g}\coloneqq\sum_{j=1}^{3}\frac{\partial\mathbf{v}}{ \partial x_{j}}\times\frac{\partial\mathbf{g}}{\partial x_{j}}\). Note that \(e^{sG}\) is the usual matrix exponential, where \(G^{(i)}\) denotes the composition of \(G\) with itself \(i\) times. The fact \(G^{(3)}\mathbf{v}=-G\mathbf{v}\) simplifies the expression. 
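The closed form of \(e^{sG}\) and the identity \(G^{(3)}\mathbf{v}=-G\mathbf{v}\) are easy to check numerically; the following small numpy sketch does so for a unit vector \(\mathbf{g}\) (the unit-length normalization of \(\mathbf{g}\) is an assumption made only for this illustration).

```python
import numpy as np

def G(v, g):
    """The operator G v = v x g."""
    return np.cross(v, g)

def exp_sG(s, v, g):
    """Closed form e^{sG} v = v + sin(s) G v + (1 - cos(s)) G^{(2)} v."""
    Gv = G(v, g)
    return v + np.sin(s) * Gv + (1.0 - np.cos(s)) * G(Gv, g)

rng = np.random.default_rng(0)
g = rng.normal(size=3); g /= np.linalg.norm(g)    # unit g, assumed for this check
v = rng.normal(size=3)
s = 0.7

# G^{(3)} v = -G v, which is what makes the closed form above work
assert np.allclose(G(G(G(v, g), g), g), -G(v, g))
# e^{sG} preserves the norm and e^{-sG} inverts it
w = exp_sG(s, v, g)
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))
assert np.allclose(exp_sG(-s, w, g), v)
```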
Expanding some of the definitions, the last operator can be written as \[\hat{\mathcal{C}}(s,\mathbf{v}) =\sin(s)\mathcal{C}\mathbf{v}+(1-\cos(s))\left(\mathcal{C}G+G\mathcal{C}\right)\mathbf{v}-\sin(s)^{2}G\mathcal{C}\mathbf{v}-\sin(s)(1-\cos(s))G\left(\mathcal{C}G+G\mathcal{C}\right)\mathbf{v}\] \[+(1-\cos(s))\sin(s)G^{2}\mathcal{C}\mathbf{v}+(1-\cos(s))^{2}G^{2}\left(\mathcal{C}G+G\mathcal{C}\right)\mathbf{v}\] or, in compact form, as \[\hat{\mathcal{C}}(s,\mathbf{v})=\sum_{i=1}^{6}b_{i}(s)F_{i}(\mathbf{v}), \tag{4}\] where the \(b_{i}\) are uniformly bounded with bounded derivatives and the \(F_{i}\) are linear and globally Lipschitz, with constants \(\beta\) and \(L\) depending only on \(\mathbf{g}\), i.e., \[\left\|b_{i}(W)\right\|_{L^{\infty}(\mathbb{R})}\leq\beta,\quad \left\|b_{i}^{\prime}(W)\right\|_{L^{\infty}(\mathbb{R})}\leq\beta \qquad\forall W\in C^{0}([0,T])\] \[\left\|F_{i}(\mathbf{u})-F_{i}(\mathbf{v})\right\|_{L^{2}(D)}\leq L \left\|\mathbf{u}-\mathbf{v}\right\|_{H^{1}(D)} \qquad\forall\mathbf{u},\mathbf{v}\in H^{1}(D)^{3}.\] With the following transformation, called the _Doss-Sussmann transform_, \[\mathbf{m}(t,x)=e^{-W(t)G}\mathbf{M}(t,x),\] we obtain the _random coefficients LLG problem_: Given \(\mathbf{M}_{0}:D\to\mathbb{S}^{2}\), find \(\mathbf{m}:\Omega\times D_{T}\to\mathbb{S}^{2}\) such that for \(\mathbb{P}\)-a.e. \(\omega\in\Omega\) \[\begin{cases}\partial_{t}\mathbf{m}(\omega)=\lambda_{1}\mathbf{m}(\omega)\times\left(\Delta\mathbf{m}(\omega)+\hat{\mathcal{C}}(W(\omega),\mathbf{m}(\omega))\right)&\\ \qquad\qquad\qquad-\lambda_{2}\mathbf{m}(\omega)\times\left(\mathbf{m}(\omega)\times\left(\Delta\mathbf{m}(\omega)+\hat{\mathcal{C}}(W(\omega),\mathbf{m}(\omega))\right)\right)&\text{in }D_{T},\\ \partial_{n}\mathbf{m}(\omega)=0&\text{on }[0,T]\times\partial D,\\ \mathbf{m}(\omega,0,\cdot)=\mathbf{M}_{0}&\text{on }D,\end{cases} \tag{5}\] It is shown in [34, Lemma 4.6] that any weak solution \(\mathbf{m}\) of (5) corresponds to a weak martingale solution \(\mathbf{M}=e^{W(t)G}\mathbf{m}\) of (2) through the inverse Doss-Sussmann transform. Existence of solutions to (5) is shown in [34], but again uniqueness is open. ### Space and time Holder regularity of sample paths of the random LLG problem In the present section, we prove that the sample paths of solutions of the random LLG problem (5) are Holder regular. We recall basic definitions and important facts about Holder spaces. Let \(n\in\mathbb{N}\) and \(D\subset\mathbb{R}^{n}\), \(\alpha\in(0,1)\), \(v:D\to\mathbb{C}\). The Holder-seminorm reads \(|v|_{\alpha}\coloneqq\sup_{x,y\in D,x\neq y}\frac{|v(x)-v(y)|}{|x-y|^{\alpha}}\) and by \(C^{\alpha}(D)\), we denote the Banach space of functions with finite Holder-norm \(\left\|v\right\|_{C^{\alpha}(D)}\coloneqq\left\|v\right\|_{C^{0}(D)}+|v|_{\alpha}\). Clearly, \(u,v\in C^{\alpha}(D)\) implies \(uv\in C^{\alpha}(D)\). Higher Holder regularity of order \(k\in\mathbb{N}\) is characterized via the seminorm \(|v|_{k,\alpha}\coloneqq\sum_{j\leq k}|D^{j}v|_{\alpha}\) and the corresponding Banach space \(C^{k+\alpha}(D)\coloneqq\left\{v:D\to\mathbb{C}:D^{j}v\in C^{\alpha}(D)\ \forall j\leq k\right\}\) with the norm \(\left\|v\right\|_{C^{k+\alpha}(D)}\coloneqq\sum_{j\leq k}\left\|D^{j}v\right\|_{C^{0}(D)}+\left|D^{k}v\right|_{C^{\alpha}(D)}\). Again \(u,v\in C^{k+\alpha}(D)\) immediately implies \(uv\in C^{k+\alpha}(D)\). 
In the parabolic setting, it is useful to consider the _parabolic distance_ of \((t,\mathbf{x})\), \((s,\mathbf{y})\in D_{T}\), i.e., \[d((t,\mathbf{x}),(s,\mathbf{y}))\coloneqq\left(|t-s|+|\mathbf{x}-\mathbf{y}|^{2}\right)^{1/2}.\] For a function \(v:D_{T}\to\mathbb{C}\), define the seminorm \[|v|_{\alpha/2,\alpha}\coloneqq\sup_{P,Q\in D_{T},P\neq Q}\frac{|v(P)-v(Q)|}{d(P,Q)^{\alpha}}.\] Define the Banach space \[C^{\alpha/2,\alpha}(D_{T})\coloneqq\left\{v:D_{T}\to\mathbb{R}:|v|_{\alpha/2,\alpha}<\infty\right\},\] with the norm (see [56, Section 1.2.3] for details) \[\left\|v\right\|_{C^{\alpha/2,\alpha}(D_{T})}\coloneqq\left\|v\right\|_{C^{0}(D_{T})}+|v|_{\alpha/2,\alpha}.\] Finally, \[C^{1+\alpha/2,2+\alpha}(D_{T})\coloneqq\left\{v:D_{T}\to\mathbb{R}:v,\nabla v,D^{2}v,\partial_{t}v\in C^{\alpha/2,\alpha}(D_{T})\right\}\] is a Banach space when endowed with the norm \[\left\|v\right\|_{C^{1+\alpha/2,2+\alpha}(D_{T})}\coloneqq\sum_{0\leq\beta\leq 2}\left\|D^{\beta}v\right\|_{\alpha/2,\alpha}+\left\|\partial_{t}v\right\|_{\alpha/2,\alpha}.\] In what follows, we work with the Holder seminorm \[\left|v\right|_{C^{1+\alpha/2,2+\alpha}(D_{T})}\coloneqq\left|v\right|_{C^{\alpha/2,\alpha}(D_{T})}+\sum_{1\leq\beta\leq 2}\left\|D^{\beta}v\right\|_{C^{\alpha/2,\alpha}(D_{T})}+\left\|\partial_{t}v\right\|_{C^{\alpha/2,\alpha}(D_{T})}. \tag{6}\] As above, if \(u,v\in C^{1+\alpha/2,2+\alpha}(D_{T})\) then also \(uv\in C^{1+\alpha/2,2+\alpha}(D_{T})\). In the rest of this section, we adopt the short notation \(\left\|\cdot\right\|_{\alpha}=\left\|\cdot\right\|_{C^{\alpha}(D)}\), \(\left\|\cdot\right\|_{1+\alpha/2,2+\alpha}=\left\|\cdot\right\|_{C^{1+\alpha/2,2+\alpha}(D_{T})}\), and analogously for all other norms and seminorms. To prove Holder regularity of sample paths, we work with the following equivalent form of (5), obtained using algebraic manipulations including the triple product expansion and the fact that \(\left|\mathbf{m}\right|=1\) for all \(t\in[0,T]\) and a.a. \(\mathbf{x}\in D\): \[\lambda\partial_{t}\mathbf{m}+\mathbf{m}\times\partial_{t}\mathbf{m}=\Delta\mathbf{m}+|\nabla\mathbf{m}|^{2}\mathbf{m}-\mathbf{m}\times\left(\mathbf{m}\times\hat{\mathcal{C}}(W,\mathbf{m})\right), \tag{7}\] where we recall that \(\lambda>0\) is the Gilbert damping parameter and \(\hat{\mathcal{C}}\) was defined in (3). The main result of this section is summarized in the following theorem. **Theorem 2**.: _Let \(0<\alpha<1\). Assume that \(W\in C^{\alpha/2}([0,T])\), \(\mathbf{m}^{0}\in C^{2+\alpha}(D)\) and \(\mathbf{g}\in C^{2+\alpha}(D)\). There exists \(\epsilon>0\) such that if \(\left\|\mathbf{m}^{0}\right\|_{2+\alpha}\leq\epsilon\), \(\left\|\Delta\mathbf{g}\right\|_{\alpha}\leq\epsilon\), and \(\left\|\nabla\mathbf{g}\right\|_{\alpha}\leq\epsilon\), then the solution \(\mathbf{m}\) of equation (7) with initial condition \(\mathbf{m}(0)=\mathbf{m}^{0}\) and homogeneous Neumann boundary conditions belongs to \(C^{1+\alpha/2,2+\alpha}(D_{T})\). Moreover,_ \[\left\|\mathbf{m}\right\|_{1+\alpha/2,2+\alpha}\leq C_{r}, \tag{8}\] _where \(C_{r}>0\) depends on \(\left\|\mathbf{g}\right\|_{2+\alpha}\), \(\left\|\mathbf{m}^{0}\right\|_{2+\alpha}\), \(\lambda\), \(D\) and \(T\) but is independent of \(W\)._ The proof of the theorem is inspired by [29]. However, the proofs in the mentioned work require higher temporal regularity than is available for stochastic LLG, which we circumvent by the use of Holder spaces instead of Sobolev spaces. 
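As a concrete illustration of the quantities just defined, the parabolic Holder seminorm of a function known only at finitely many space-time samples can be estimated (from below) by maximizing the difference quotient over all sampled pairs; the sample points and the test function in the following numpy sketch are arbitrary choices.

```python
import numpy as np

def parabolic_holder_seminorm(v, t, x, alpha=0.5):
    """Lower bound for |v|_{alpha/2,alpha} from samples v[k] at points (t[k], x[k]),
    using the parabolic distance d(P,Q) = (|t_P - t_Q| + |x_P - x_Q|^2)^{1/2}."""
    best = 0.0
    for i in range(len(v)):
        for j in range(i + 1, len(v)):
            d = np.sqrt(abs(t[i] - t[j]) + np.sum((x[i] - x[j]) ** 2))
            best = max(best, abs(v[i] - v[j]) / d ** alpha)
    return best

# toy usage with 50 random points in [0,1] x [0,1]^3
rng = np.random.default_rng(1)
t, x = rng.random(50), rng.random((50, 3))
v = np.sin(2 * np.pi * t) + x[:, 0]        # a smooth scalar test function
print(parabolic_holder_seminorm(v, t, x, alpha=0.5))
```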
In the following, we will require some notation: Let \(\mathbf{x}_{0}\in D\). For any \(\mathbf{v}\in C^{1+\alpha/2,2+\alpha}(D_{T})\), \(\mathbf{a}\in\mathbb{R}^{3}\), we define \[H(\mathbf{v},\mathbf{v},\mathbf{v}) \coloneqq-\mathbf{v}\times(\mathbf{v}\times\hat{\mathcal{C}}(W,\mathbf{v})),\] \[\mathcal{R}_{a}(\mathbf{v}) \coloneqq\lambda\partial_{t}\mathbf{v}+\mathbf{v}\times\partial_{t}\mathbf{v} -|\mathbf{v}|^{2}\Delta\mathbf{v}-|\nabla\mathbf{v}|^{2}\mathbf{v}+H(\mathbf{v},\mathbf{v},\mathbf{v}),\] \[L\mathbf{a} \coloneqq\lambda\mathbf{a}+\mathbf{m}^{0}(x_{0})\times\mathbf{a}.\] We will require a couple of technical results. **Lemma 3** (Continuity of the LLG residual \(\mathcal{R}_{a}\)).: _If \(\mathbf{v}\in C^{1+\alpha/2,2+\alpha}(D_{T})\), then_ \[\left\|\mathcal{R}_{a}(\mathbf{v})\right\|_{\alpha/2,\alpha}\leq \left(\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}+\left|\mathbf{v} \right|_{1+\alpha/2,2+\alpha}^{2}\right)\left(\left(1+\left\|\mathbf{g}\right\|_{ \alpha/2,\alpha}\right)^{3}\left\|\nabla\mathbf{g}\right\|_{\alpha/2,\alpha} \left\|\mathbf{v}\right\|_{\alpha/2,\alpha}^{2}+\left(\lambda+\left\|\mathbf{v} \right\|_{1+\alpha/2,2+\alpha}\right)^{2}\right)\] \[+\left(\left\|\nabla\mathbf{g}\right\|_{\alpha/2,\alpha}+\left\|\nabla \mathbf{g}\right\|_{\alpha/2,\alpha}^{2}\right)\left(1+\left\|\mathbf{g}\right\|_{ \alpha/2,\alpha}\right)^{2}\left\|\mathbf{v}\right\|_{\alpha/2,\alpha}^{3}\] \[+\left\|\Delta\mathbf{g}\right\|_{\alpha/2,\alpha}\left(1+\left\|\bm {g}\right\|_{\alpha/2,\alpha}\right)^{3}\left\|\mathbf{v}\right\|_{\alpha/2, \alpha}^{3}.\] _In particular, \(\left\|\mathcal{R}_{a}(\mathbf{v})\right\|_{\alpha/2,\alpha}\) vanishes when \(\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}\), \(\left\|\nabla\mathbf{g}\right\|_{\alpha/2,\alpha}\) and \(\left\|\Delta\mathbf{g}\right\|_{\alpha/2,\alpha}\) all vanish._ Proof.: For the first terms, simple estimates give \[\left\|\lambda\partial_{t}\mathbf{v}+\mathbf{v}\times\partial_{t}\mathbf{v}- \left|\mathbf{v}\right|^{2}\Delta\mathbf{v}-\left|\nabla\mathbf{v}\right|^{2}\mathbf{v}\right\| _{\alpha/2,\alpha}\leq \lambda\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}+\left\|\mathbf{v} \right\|_{1+\alpha/2,2+\alpha}\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}\] \[+\left\|\mathbf{v}\right\|_{1+\alpha/2,2+\alpha}^{2}\left|\mathbf{v} \right|_{1+\alpha/2,2+\alpha}+\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}^{2} \left\|\mathbf{v}\right\|_{1+\alpha/2,2+\alpha}\] \[\leq \left(\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}+\left|\mathbf{v} \right|_{1+\alpha/2,2+\alpha}^{2}\right)\left(\lambda+\left\|\mathbf{v}\right\|_{ 1+\alpha/2,2+\alpha}\right)^{2}.\] It remains to estimate the last term of \(\mathcal{R}_{a}\) in detail: To that end, we first establish \[\left\|\mathcal{C}\mathbf{v}\right\|_{\alpha/2,\alpha}\leq\left|\mathbf{v}\right|_{1 +\alpha/2,2+\alpha}\left\|\nabla\mathbf{g}\right\|_{\alpha}+\left\|\mathbf{v}\right\| _{\alpha/2,\alpha}\left\|\Delta\mathbf{g}\right\|_{\alpha}\] as well as \[\left\|\mathcal{C}\mathbf{v}\right\|_{\alpha/2,\alpha} =\left\|\nabla(G\mathbf{v})\times\nabla\mathbf{g}+G\mathbf{v}\times\Delta\mathbf{ g}\right\|_{\alpha/2,\alpha}\] \[\leq\left\|\nabla G\mathbf{v}\right\|_{\alpha/2,\alpha}\left\|\nabla \mathbf{g}\right\|_{\alpha}+\left\|G\mathbf{v}\right\|_{\alpha/2,\alpha}\left\|\Delta \mathbf{g}\right\|_{\alpha}\] \[\leq\left(\left\|\nabla\mathbf{v}\right\|_{\alpha/2,\alpha}\left\|\bm {g}\right\|_{\alpha}+\left\|\mathbf{v}\right\|_{\alpha/2,\alpha}\left\|\nabla\mathbf{ 
g}\right\|_{\alpha}\right)\left\|\nabla\mathbf{g}\right\|_{\alpha}+\left\|\mathbf{v} \right\|_{\alpha/2,\alpha}\left\|\mathbf{g}\right\|_{\alpha}\left\|\Delta\mathbf{g} \right\|_{\alpha}\] \[\leq\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}\left\|\mathbf{g} \right\|_{\alpha}\left\|\nabla\mathbf{g}\right\|_{\alpha}+\left\|\mathbf{v}\right\|_{1 +\alpha/2,2+\alpha}\left(\left\left\|\nabla\mathbf{g}\right\|_{\alpha}^{2}+\left\| \mathbf{g}\right\|_{\alpha}\left\|\Delta\mathbf{g}\right\|_{\alpha}\right)\] Furthermore, we have \[\left\|\mathcal{E}(s,\mathbf{v})\right\|_{\alpha/2,\alpha}\leq\left\|\mathcal{C} \mathbf{v}\right\|_{\alpha/2,\alpha}+\left\|\mathcal{C}G\mathbf{v}\right\|_{\alpha/2, \alpha}+\left\|\mathbf{g}\right\|_{\alpha}\left\|\mathcal{C}\mathbf{v}\right\|_{\alpha/2,\alpha}\] and \[\left\|\hat{\mathcal{C}}(s,\mathbf{v})\right\|_{\alpha/2,\alpha} \leq\left(1+\left\|\mathbf{g}\right\|_{\alpha}+\left\|\mathbf{g}\right\|_{ \alpha}^{2}\right)\left\|\mathcal{E}(s,\mathbf{v})\right\|_{\alpha/2,\alpha}\] \[\left\|H(\mathbf{v},\mathbf{v},\mathbf{v})\right\|_{\alpha/2,\alpha} \leq\left\|\mathbf{v}\right\|_{\alpha/2,\alpha}^{2}\left\|\hat{ \mathcal{C}}(s,\mathbf{v})\right\|_{\alpha/2,\alpha}.\] Combining the previous estimates concludes the proof. Additionally, we need some finer control over the boundedness of \(\mathcal{R}_{a}\). The point of the following result is that all terms apart from the first one on the right-hand side of the estimate below are either at least quadratic in \(\mathbf{w}\) or can be made small by choosing \(\mathbf{v}\) close to a constant function. This will allow us to treat the nonlinear parts as perturbations of the heat equation. **Lemma 4**.: _For \(\mathbf{v},\mathbf{w}\in C^{1+\alpha/2,2+\alpha}(D_{T})\), there holds_ \[\left\|\mathcal{R}_{a}(\mathbf{v}-\mathbf{w})\right\|_{\alpha/2,\alpha} \leq\left\|\mathcal{R}_{a}(\mathbf{v})-(L\partial_{t}-\Delta)\mathbf{w} \right\|_{\alpha/2,\alpha}+\left\|\mathbf{v}-\mathbf{m}^{0}(x_{0})\right\|_{\alpha/2, \alpha}\left\|\mathbf{w}\right\|_{1+\alpha/2,2+\alpha}\] \[+\left\|1-\left|\mathbf{v}\right|^{2}\right\|_{\alpha/2,\alpha}\left\| \mathbf{w}\right\|_{1+\alpha/2,2+\alpha}+\left\|\mathbf{w}\right\|_{1+\alpha/2,2+ \alpha}\left|\mathbf{v}\right|_{1+\alpha/2,2+\alpha}\left(1+\left\|\mathbf{v}\right\|_{ 1+\alpha/2,2+\alpha}\right)\] \[+\left\|\mathbf{w}\right\|_{1+\alpha/2,2+\alpha}^{2}\left(1+\left|\mathbf{v }\right|_{1+\alpha/2,2+\alpha}\right)+\left\|\mathbf{w}\right\|_{1+\alpha/2,2+ \alpha}^{3}+\left\|\mathbf{w}\right\|_{1+\alpha/2,2+\alpha}^{3}C_{\mathbf{g}},\] _where \(C_{\mathbf{g}}>0\) depends only on \(\mathbf{g}\)._ Proof.: All but the last terms are estimated as in [29]. As for the last term, observe that \(H(\mathbf{v}-\mathbf{w},\mathbf{v}-\mathbf{w},\mathbf{v}-\mathbf{w})=H(\mathbf{v},\mathbf{v},\mathbf{v})-H(\mathbf{w}, \mathbf{w},\mathbf{w})\) because of Jacobi identity. The norm of \(H(\mathbf{w},\mathbf{w},\mathbf{w})\) is estimated as in the proof of Lemma 3. To prove Theorem 2, we use a fixed point iteration. Proof of Theorem 2.: Consider the initial guess \(\mathbf{m}_{0}(t,x)=\mathbf{m}_{0}(x)\) for all \(t\in[0,T]\). Define the sequence \(\left(\mathbf{m}_{\ell}\right)_{\ell}\) as follows: For \(\ell=0,1,\ldots\) 1. Define \(\mathbf{r}_{\ell}:=\mathcal{R}_{a}(\mathbf{m}_{\ell})\) 2. 
Solve \[\begin{cases}L\partial_{t}\mathbf{R}_{\ell}-\Delta\mathbf{R}_{\ell}=\mathbf{r}_{\ell}& \text{in }D_{T}\\ \partial_{n}\mathbf{R}_{\ell}=0&\text{on }\partial D\times[0,T]\\ \mathbf{R}_{\ell}(0)=0&\text{on }D.\end{cases}\] 3. Update \(\mathbf{m}_{\ell+1}:=\mathbf{m}_{\ell}-\mathbf{R}_{\ell}\). _Step 1 (well-posedness):_ By definition, we have \(\mathbf{m}_{0}\in C^{1+\alpha/2,2+\alpha}(D_{T})\) as well as \(\partial_{n}\mathbf{m}_{0}=0\). Assume that \(\mathbf{m}_{\ell}\in C^{1+\alpha/2,2+2\alpha}(D_{T})\) and \(\partial_{n}\mathbf{m}_{\ell}=0\). Then, Lemma 3 implies that \(\mathbf{r}_{\ell}\in C^{\alpha/2,\alpha}(D_{T})\). The parabolic regularity result [40, Theorem 10.4,SS10, VII] shows \(\mathbf{R}_{\ell}\in C^{1+\alpha/2,2+\alpha}(D_{T})\). _Step 2 (convergence):_ We show the Cauchy property of the sequence \((\mathbf{m}_{\ell})_{\ell}\): Fix \(0\leq\ell^{\prime}\leq\ell<\infty\) and observe \[\left\|\mathbf{m}_{\ell}-\mathbf{m}_{\ell^{\prime}}\right\|_{1+\alpha/2,2+\alpha}\leq \sum_{j=\ell^{\prime}}^{\ell-1}\left\|\mathbf{R}_{j}\right\|_{1+\alpha/2,2+\alpha}.\] By the previous lemmata, we have \[\left\|\mathbf{R}_{j+1}\right\|_{1+\alpha/2,2+\alpha}\leq C_{s}\left\|\mathbf{r}_{j+1 }\right\|_{\alpha/2,\alpha}=C_{s}\left\|\mathcal{R}_{a}(\mathbf{m}_{j+1})\right\|_ {\alpha/2,\alpha}=C_{s}\left\|\mathcal{R}_{a}(\mathbf{m}_{j}-\mathbf{R}_{j})\right\|_{ \alpha/2,\alpha}, \tag{9}\] where \(C_{s}>0\) is the stability constant from [40, Theorem 10.4,SS10, VII], which only depends on \(D_{T}\) and \(L\) (particularly, it is independent of \(\ell\)). We invoke Lemma 4 with \(\mathbf{v}=\mathbf{m}_{j}\) and \(\mathbf{w}=\mathbf{R}_{j}\). By construction, \(\mathcal{R}_{a}(\mathbf{m}_{j})-\left(L\partial_{t}-\Delta\right)\mathbf{R}_{j}=0\). What remains is estimated as \[\left\|\mathcal{R}_{a} (\mathbf{m}_{j}-\mathbf{R}_{j})\right\|_{\alpha/2,\alpha}\] \[\leq\left\|\mathbf{m}_{j}-\mathbf{m}^{0}(x_{0})\right\|_{\alpha/2,\alpha} \left\|\mathbf{R}_{j}\right\|_{1+\alpha/2,2+\alpha}+\left\|1-\left|\mathbf{m}_{j} \right|^{2}\right\|_{\alpha/2,\alpha}\left\|\mathbf{R}_{j}\right\|_{1+\alpha/2,2+ \alpha}\] \[\quad+\left\|\mathbf{R}_{j}\right\|_{1+\alpha/2,2+\alpha}^{3}+\left\| \mathbf{R}_{j}\right\|_{1+\alpha/2,2+\alpha}^{3}C_{\mathbf{g}}. \tag{10}\] Let us estimate the first term in (10). For any \((t,\mathbf{x})\in D_{T}\), the fundamental theorem of calculus yields \[\left|\mathbf{m}_{j}(\mathbf{x},t)-\mathbf{m}^{0}(\mathbf{x}_{0})\right|\lesssim\left\|( \partial_{t},\nabla)\mathbf{m}_{j}\right\|_{C^{0}(D_{T})}\leq\left|\mathbf{m}_{j} \right|_{1+\alpha/2,2+\alpha}.\] Analogously, we get \[\left\|\mathbf{m}_{j}-\mathbf{m}^{0}(x_{0})\right\|_{\alpha/2,\alpha}\leq 2\left|\mathbf{m}_ {j}\right|_{1+\alpha/2,2+\alpha}.\] Let us estimate the second term in (10). 
Since \(\mathbf{m}_{j}=\mathbf{m}^{0}+\sum_{i=0}^{j-1}\mathbf{R}_{i}\) and \(\left|\mathbf{m}^{0}\right|=1\) a.e., we have \[|\mathbf{m}_{j}|^{2}=1+2\mathbf{m}^{0}\cdot\sum_{i=0}^{j-1}\mathbf{R}_{i}+\left(\sum_{i=0 }^{j-1}\mathbf{R}_{i}\right)^{2}.\] Thus, the fact that Holder spaces are closed under multiplication and the triangle inequality imply \[\left\|1-|\mathbf{m}_{j}|^{2}\right\|_{\alpha/2,\alpha}\leq 2\left\|\mathbf{m}^{0} \right\|_{\alpha/2,\alpha}\left\|\sum_{i=0}^{j-1}\mathbf{R}_{i}\right\|_{\alpha/2, \alpha}+\left(\sum_{i=0}^{j-1}\left\|\mathbf{R}_{i}\right\|_{\alpha/2,\alpha} \right)^{2}.\] All in all, we obtain \[\left\|\mathbf{R}_{j+1}\right\|_{1+\alpha/2,2+\alpha}\leq\tilde{C}Q_{j}\left\|\mathbf{R }_{j}\right\|_{1+\alpha/2,2+\alpha} \tag{11}\] where \(\tilde{C}>0\) is independent of \(j\) and \[Q_{j} \coloneqq\left|\mathbf{m}_{j}\right|_{1+\alpha/2,2+\alpha}+\left\|\mathbf{m} ^{0}\right\|_{\alpha/2,\alpha}\left\|\sum_{i=0}^{j-1}\mathbf{R}_{i}\right\|_{ \alpha/2,\alpha}+\left(\sum_{i=0}^{j-1}\left\|\mathbf{R}_{i}\right\|_{\alpha/2, \alpha}\right)^{2}+\left|\mathbf{m}_{j}\right|_{1+\alpha/2,2+\alpha}\left(1+\left\| \mathbf{m}_{j}\right\|_{1+\alpha/2,2+\alpha}\right)\] \[+\left\|\mathbf{R}_{j}\right\|_{1+\alpha/2,2+\alpha}\left(1+\left| \mathbf{m}_{j}\right|_{1+\alpha/2,2+\alpha}\right)+\left\|\mathbf{R}_{j}\right\|_{1+ \alpha/2,2+\alpha}^{2}\left(1+C_{\mathbf{g}}\right)\!.\] It can be proved just as in [29] that for any \(q\in(0,1)\) there exists \(\epsilon>0\) such that \(\tilde{C}Q_{j}<q\) for all \(j\in\mathbb{N}\), so that \(\left\|\mathbf{R}_{j+1}\right\|_{1+\alpha/2,2+\alpha}\leq q\left\|\mathbf{R}_{j} \right\|_{1+\alpha/2,2+\alpha}\). This implies that \((\mathbf{m}_{\ell})_{\ell}\) is a Cauchy sequence in \(C^{1+\alpha/2,2+\alpha}(D_{T})\). Hence, we find a limit \(\mathbf{m}\in C^{1+\alpha/2,2+\alpha}(D_{T})\) and the arguments above already imply the estimate in Theorem 2. _Step 3 (\(\mathbf{m}\) solves (7)):_ Since \(\mathbf{m}\) fulfils the initial condition \(\mathbf{m}(0)=\mathbf{m}^{0}\) (and thus \(\left|\mathbf{m}(0)\right|=1\)) and boundary condition \(\partial_{n}\mathbf{m}=0\) on \([0,T]\times\partial D\) by the continuity of the trace operator, the continuity of \(\mathcal{R}_{a}\) and the contraction (11) imply \[\left\|\mathcal{R}_{a}(\mathbf{m})\right\|_{\alpha/2,\alpha}=\lim_{\ell}\left\| \mathcal{R}_{a}(\mathbf{m}_{\ell})\right\|_{\alpha/2,\alpha}\lesssim\lim_{\ell} \left\|\mathbf{R}_{\ell}\right\|_{1+\alpha/2,2+\alpha}\leq q^{\ell}\left\|\mathbf{R}_ {0}\right\|_{1+\alpha/2,2+\alpha}=0\] The arguments of the proof of [29, Lemma 4.8] show that \(\mathcal{R}_{a}(\mathbf{m})=0\) implies that \(\mathbf{m}\) solves (7) and hence concludes the proof. ## 4. Parametric LLG problem by Levy-Ciesielski expansion In order to make the distribution of \(\mathbf{m}\) amenable to numerical approximation, we need to parametrize the Brownian motion. It turns out that a local wavelet-type expansion of \(W\) is very beneficial as it reduces the number of active basis function at any given moment in time. Given the probability space \((\Omega,\mathcal{E},\mathbb{P})\), the Levy-Ciesielski (LC) expansion (see e.g. 
[35, Section 4.2]) of the Brownian motion \(W:\Omega\times[0,1]\to\mathbb{R}\) reads \[W(\omega,t)=\sum_{\ell=0}^{\infty}\sum_{j=1}^{\lceil 2^{\ell-1}\rceil}Y_{ \ell,j}(\omega)\eta_{\ell,j}(t),\] where \(Y_{\ell,j}\) are independent standard normal random variables and \(\left\{\eta_{\ell,j}\,:\,\ell\in\mathbb{N}_{0},j=1,\ldots,\lceil 2^{\ell-1} \rceil\right\}\) is the Faber-Schauder hat-function basis on \([0,1]\), i.e. \[\eta_{0,1}(t)=t,\] \[\eta(t)\coloneqq\begin{cases}t\qquad t\in[0,\frac{1}{2}]\\ 1-t\qquad t\in[\frac{1}{2},1]\\ 0\qquad\text{otherwise}\end{cases},\] \[\eta_{\ell,j}(t)=2^{-\frac{\ell-1}{2}}\eta\left(2^{\ell-1}t-j+1 \right)\qquad\forall\ell\in\mathbb{N},\ j=1,\ldots,\lceil 2^{\ell-1}\rceil. \tag{12}\] See figure 1. Observe that \(\left\|\eta_{0,1}\right\|_{L^{\infty}([0,1])}=1\) and \(\left\|\eta_{\ell,j}\right\|_{L^{\infty}([0,1])}=2^{-(\ell+1)/2}\) for all \(\ell\in\mathbb{N},j=1,\ldots,2^{\ell-1}\). Figure 1. The first eight Faber-Shauder basis functions on \([0,1]\). The LC expansion converges uniformly in \(t\), almost surely to a continuous function which coincides with the Brownian motion everywhere (see [53, Section 3.4]): \[\lim_{L\to+\infty}\sup_{t\in[0,1]}\left|W(\omega,t)-\sum_{\ell=0}^{L}\sum_{j=1}^{ \lceil 2^{\ell-1}\rceil}Y_{\ell,j}(\omega)\eta_{\ell,j}(t)\right|=0\qquad\mathbb{P} \text{-a.s.}\] By equipping \(\mathbb{R}\) with the Gaussian measure, we may parametrize \(W\) as \(W:\mathbb{R}^{\mathbb{N}}\times[0,1]\to\mathbb{R}\) such that \[W(\boldsymbol{y},t)=\sum_{\ell=0}^{\infty}\sum_{j=1}^{\lceil 2^{\ell-1} \rceil}y_{\ell,j}\eta_{\ell,j}(t), \tag{13}\] where \(y_{\ell,j}\in\mathbb{R}\) for all \(\ell\in\mathbb{N}_{0},j=1,\ldots,\lceil 2^{\ell-1}\rceil\). For \(L\in\mathbb{N}\), we call a \(L\)_levels truncation of \(W\)_ \[W_{L}(\boldsymbol{y},t)=\sum_{\ell=0}^{L}\sum_{j=1}^{\lceil 2^{\ell-1} \rceil}y_{\ell,j}\eta_{\ell,j}(t).\] We will sometimes also index the same sum as \(W_{L}(\boldsymbol{y},t)=\sum_{n=0}^{N}y_{n}\eta_{n}(t)\). The two indexing systems, hierarchical and linear, are related via \[\eta_{\ell,j}=\eta_{n}\quad\Longleftrightarrow\quad\ell=\ell(n)\coloneqq \lceil\log_{2}(n)\rceil\text{ and }j=n-\ell(n). \tag{14}\] We note that the total number of parameters is \(N=\sum_{\ell=0}^{L}\lceil 2^{\ell-1}\rceil=1+\sum_{\ell=1}^{L}2^{\ell-1}=2^{L}\). The fact that the parameter domain is unbounded requires the use of appropriate collocation nodes, a topic we treat in Section 7. Consider the Banach space of sequences: \[\mathcal{X}\coloneqq\left\{\boldsymbol{y}=(y_{i})_{i\in\mathbb{N}}\subset \mathbb{R}\,:\,\left\|\boldsymbol{y}\right\|_{\mathcal{X}}<\infty\right\},\] where the norm reads: \[\left\|\boldsymbol{y}\right\|_{\mathcal{X}}\coloneqq\left|y_{0,1}\right|+ \sum_{\ell\in\mathbb{N}}\max_{j=1,\ldots,2^{\ell-1}}\left|y_{\ell,j}\right|2^ {-(\ell+1)/2}\] The Levy-Ciesielski expansion implies that if \(\boldsymbol{y}\in\mathcal{X}\), then \(W(\boldsymbol{y},\cdot)\in L^{\infty}([0,T])\) and \(\left\|W(\boldsymbol{y})\right\|_{L^{\infty}([0,T])}\leq\left\|\boldsymbol{y} \right\|_{\mathcal{X}}\). Finally, the _parametric LLG problem_ reads: Given \(\boldsymbol{M}_{0}:D\to\mathbb{S}^{2}\), find \(\boldsymbol{m}:\mathcal{X}\times D_{T}\to\mathbb{S}^{2}\) such that for a.a. 
\(\boldsymbol{y}\in\mathcal{X}\) \[\begin{cases}\partial_{t}\boldsymbol{m}(\boldsymbol{y})=\boldsymbol{m}( \boldsymbol{y})\times\left(\Delta\boldsymbol{m}(\boldsymbol{y})+\hat{\mathcal{ C}}(W(\boldsymbol{y}),\boldsymbol{m}(\boldsymbol{y}))\right)\\ \qquad\qquad\qquad-\boldsymbol{m}(\boldsymbol{y})\times\left(\boldsymbol{m}( \boldsymbol{y})\times\left(\Delta\boldsymbol{m}(\boldsymbol{y})+\hat{ \mathcal{C}}(W(\boldsymbol{y}),\boldsymbol{m}(\boldsymbol{y}))\right)\right) \text{in }D_{T},\\ \partial_{n}\boldsymbol{m}(\boldsymbol{y})=0&\text{on }[0,T]\times\partial D,\\ \boldsymbol{m}(\boldsymbol{y},0,\cdot)=\boldsymbol{M}_{0}&\text{on }D,\end{cases} \tag{15}\] where we set \(\lambda_{1}=\lambda_{2}=1\) for simplicity. Applying the triple cross-product formula \(\boldsymbol{a}\times(\boldsymbol{b}\times\boldsymbol{c})=\boldsymbol{b}( \boldsymbol{a}\cdot\boldsymbol{c})-\boldsymbol{c}(\boldsymbol{a}\cdot \boldsymbol{b})\) on \(\boldsymbol{m}(\boldsymbol{y})\times(\boldsymbol{m}(\boldsymbol{y})\times( \Delta\boldsymbol{m}(\boldsymbol{y})))\), together with the fact that \(|\boldsymbol{m}|\equiv 1\), gives an equivalent problem: \[\partial_{t}\boldsymbol{m}=\Delta\boldsymbol{m}+\boldsymbol{m}\times\Delta \boldsymbol{m}-\left(\nabla\boldsymbol{m}:\nabla\boldsymbol{m}\right) \boldsymbol{m}+\boldsymbol{m}\times\hat{\mathcal{C}}(W,\boldsymbol{m})- \boldsymbol{m}\times\left(\boldsymbol{m}\times\hat{\mathcal{C}}(W, \boldsymbol{m})\right)\qquad\text{in }D_{T}. \tag{16}\] ## 5. Holomorphic regularity of parameter-to-solution map with Holder sample paths While holomorphic parameter regularity of random elliptic equations is well-known by now (see, e.g., [4, Section 3], for the case of bounded or unbounded parameter spaces under the finite dimensional noise assumption, [21] for countably-many parameters taking values on tensor product of bounded intervals, [5], for a discussion of the Poisson problem with lognormal coefficients, in which the authors study countably many unbounded parameters), the literature is much sparser for non-linear and time-dependent problems. In this section, we follow an approach from [18] which uses the implicit function theorem to obtain analyticity. While the authors in [18] can rely on a compact parameter domain to ensure a non-trivial domain of extension, we have to use intricate bounds on the parametric gradient of the solution. A recent result on the implicit function theorem for Gevrey regularity [36] could also be used to achieve similar results but lacks the explicit control of the analytic extension. In this section we frequently work with complex-valued functions. If not mentioned otherwise, Banach spaces of functions such as \(L^{2}(D)\) are understood to contain complex valued functions. To denote the codomain explicitly, we write e.g. \(L^{2}(D;\mathbb{C})\) or \(L^{2}(D;\mathbb{R})\). **Remark 5**.: _In the regularity results used below, we have to work in Holder spaces with \(\alpha\in(0,1)\). For the Faber-Schauder basis functions on \([0,1]\) (see Section 4) we have_ \[\left\|\eta_{\ell,j}\right\|_{L^{\infty}([0,1])}\leq 2^{-\ell/2},\quad\left| \eta_{\ell,j}\right|_{C^{1}([0,1])}\leq 2^{\ell/2}\quad\text{and}\quad\left\| \eta_{\ell,j}\right\|_{C^{\alpha}([0,1])}\leq 2\cdot 2^{-(1/2-\alpha)\ell}.\] _Only for \(\alpha\ll 1\) we obtain a decay of \(\left\|\eta_{\ell,j}\right\|_{C^{\alpha}([0,1])}\) close to \(2^{-\ell/2}\), which is what we expect for a truncated Brownian motion. 
Hence, in the following we will assume that \(\alpha>0\) is arbitrarily small._ Fix \(0<\alpha<1\). Let us consider sample paths of the Wiener processes \(W\in C^{\alpha/2}([0,T])\) (note that the sample paths of the Wiener process belong to \(C^{1/2-\varepsilon}([0,T])\) almost surely for any \(\varepsilon>0\)) and regular magnetization in the form \[\mathbf{M}_{0}(x)+\mathbf{m}(t,x),\] where we recall \(\mathbf{M}_{0}\) is the given initial condition, which we assume to belong to \(C^{2+\alpha}(D)\). Consider moreover \[\mathbf{m}\in C_{0}^{1+\alpha/2,2+\alpha}(D_{T})\coloneqq\left\{\mathbf{v}\in C^{1+ \alpha/2,2+\alpha}(D_{T}):\mathbf{v}(0)=\mathbf{0}\text{ in }D,\ \partial_{n}\mathbf{v}=\mathbf{0}\text{ on } \partial D\right\}.\] Given a noise coefficient \(\mathbf{g}\in C^{2+\alpha}(D)\), we may consider the residual of the (complex) LLG problem based on \[\begin{split}\mathcal{R}(W,\mathbf{m})&\coloneqq \widetilde{\mathcal{R}}(W,\mathbf{M}_{0}+\mathbf{m}),\qquad\text{where}\\ \widetilde{\mathcal{R}}(W,\mathbf{v})&\coloneqq\partial_ {t}\mathbf{v}-\Delta\mathbf{v}-\mathbf{v}\times\Delta\mathbf{v}+\left(\nabla\mathbf{v}:\nabla \mathbf{v}\right)\mathbf{v}-\mathbf{v}\times\hat{\mathcal{C}}(W,\mathbf{v})+\mathbf{v}\times \left(\mathbf{v}\times\hat{\mathcal{C}}(W,\mathbf{v})\right).\end{split} \tag{17}\] Here, the cross product \(\times\) is defined as in the real setting by \[\mathbf{a}\times\mathbf{b}=(a_{2}b_{3}-a_{3}b_{2},a_{3}b_{1}-a_{1}b_{3},a_{1}b_{2}-a_{ 2}b_{1})\qquad\forall\mathbf{a},\mathbf{b}\in\mathbb{C}^{3}.\] Note that due to the sesquilinear complex scalar product this implies that \(\left\langle\mathbf{a}\times\mathbf{b},\mathbf{a}\right\rangle\) might not vanish for complex valued vector fields \(\mathbf{a},\mathbf{b}\). The residual \(\mathcal{R}\) is understood as a function between Banach spaces: \[\mathcal{R}(W,\mathbf{m}):C^{\alpha/2}([0,T])\times C_{0}^{1+\alpha/2,2+\alpha}(D_ {T})\to C^{\alpha/2,\alpha}(D_{T}). \tag{18}\] ### Holomorphic extension to a neighborhood of a real Wiener process We start by proving some properties of the map \(\mathcal{R}(W,\mathbf{m})\) defined in (17). To this end, we will apply the following lemma found in much more general form, e.g., in [40, Chapter VII, SS 10, Theorem 10.3]. **Lemma 6** (Well posedness of linear parabolic systems with Holder coefficients).: _Consider \(d\in\mathbb{N}\), \(D\subset\mathbb{R}^{d}\) bounded, \(T>0\) and define \(D_{T}\coloneqq[0,T]\times D\). Let \(\mathcal{L}\) denote a vector-valued, linear second-order operator. Consider \(0<\alpha<1\), \(\partial D\in C^{2+\alpha/2}\) and denote \(a_{ij}\), \(a_{i}\), \(a\) for \(i,j=1,\ldots,d\) real scalar functions in \(C^{1+\alpha/2,2+\alpha}(D_{T})\). Assume moreover that the system \(\partial_{t}+\mathcal{L}\) is strongly parabolic in the usual sense (see, e.g., [40, Chapter VII, SS 8, Definition 7]). Consider \(f\in C^{\alpha/2,\alpha}(D_{T})\), \(u^{0}\in C^{2+\alpha}(D_{T})\). Then, the problem_ \[\begin{cases}\partial_{t}u(t,\mathbf{x})+\mathcal{L}u(t,\mathbf{x})=f(t,\mathbf{x})& \text{in }D_{T}\\ u(0,\mathbf{x})=0&\text{on }D\\ \partial_{n}u(t,\mathbf{x})=0&\text{on }[0,T]\times\partial D\end{cases}\] _has a unique solution \(u\in C^{1+\alpha/2,2+\alpha}(D_{T})\) with \(\left\|u\right\|_{1+\alpha/2,2+\alpha}\leq C_{\mathrm{stab}}\left\|f\right\|_{ \alpha/2,\alpha}\). 
The constant \(C_{\mathrm{stab}}\) depends on the respective norms of the coefficients \(a_{ij},a_{i},a\) as well as on the ellipticity constant._ **Remark 7**.: _Note that the compatibility conditions in [40, Chapter VII, SS 10, Theorem 10.3] of order zero (\(\alpha<1\)) are automatically satisfied in our case. This also takes care of the fact that [40, Chapter VII, SS 10, Theorem 10.3] only works for small end times \(0<\widetilde{T}\leq T\) as we can restart the estimate at any time \(\widetilde{T}\) and get the estimate for the full time interval. Moreover, while not stated explicitly, analyzing the proof of [40, Chapter VII, SS 10, Theorem 10.3] gives the dependence of \(C_{\mathrm{stab}}\) on the coefficients of the problem._ **Lemma 8**.: _Consider the function \(\mathcal{R}\) defined above by (17) and (18) with coefficients \(\alpha\in(0,1)\), \(\mathbf{g}\in C^{2+\alpha}(D)\) and \(\mathbf{M}_{0}\in C^{2+\alpha}(D)\)._ 1. \(\mathcal{R}(W,\mathbf{m})\) _is a well-defined, continuous function;_ 2. \(\mathcal{R}(W,\mathbf{m})\) _is continuously differentiable;_ 3. _Let_ \(W_{*}\in C^{\alpha/2}([0,T];\mathbb{C})\) _and_ \(\mathbf{m}_{*}\in C_{0}^{1+\alpha/2,2+\alpha}(D_{T};\mathbb{C}^{3})\) _such that_ \(\left\|\mathfrak{Im}\left(\mathbf{m}_{*}\right)\right\|_{L^{\infty}(D_{T})}\leq \frac{1}{4}\)_. Then, the differential_ \(\partial_{2}\mathcal{R}(W_{*},\mathbf{m}_{*}):C_{0}^{1+\alpha/2,2+\alpha}(D_{T}) \to C^{\alpha/2,\alpha}(D_{T})\) _is a homeomorphism._ Proof of i.: Let us first show that the residual \(\mathcal{R}(W,\mathbf{m})\) defined by (17) and (18) is a well-defined function. Clearly, \(\mathbf{M}_{0}+\mathbf{m}\in C^{1+\alpha/2,2+\alpha}(D_{T})\) if \(\mathbf{m}\in C_{0}^{1+\alpha/2,2+\alpha}(D_{T})\). Observe that \[G\colon C^{1+\alpha/2,2+\alpha}(D_{T})\to C^{1+\alpha/2,2+\alpha}(D_{T})\quad \text{and}\quad\mathcal{C}\colon C^{\alpha/2,1+\alpha}(D_{T})\to C^{\alpha/2, \alpha}(D_{T}),\] so \(\hat{\mathcal{C}}(W,\mathbf{m})\in C^{\alpha/2,\alpha}(D_{T})\). Thus, \(\mathcal{R}(W,\mathbf{m})\) is a sum of functions belonging to \(C^{\alpha/2,\alpha}(D_{T})\). The fact that \(\mathcal{R}(W,\mathbf{m})\) is continuous can be easily verified by checking that each term (cf. (17)) is continuous. Proof of ii.: The residual \(\mathcal{R}(W,\mathbf{m})\) is differentiable because it is a linear combination of differentiable functions. We now prove that each partial derivative is continuous. For \(\omega\in C^{\alpha/2}([0,T])\), \[\partial_{1}\mathcal{E}(W,\mathbf{m})[\omega] =\left(\cos(W)\mathcal{C}\mathbf{m}+\sin(W)(G\mathcal{C}+\mathcal{C} G)\mathbf{m}\right)\omega,\] \[\partial_{1}\hat{\mathcal{C}}(W,\mathbf{m})[\omega] =e^{WG}\partial_{1}\mathcal{E}(W,\mathbf{m})[\omega]+\left(\cos(W)G \mathcal{E}(W,\mathbf{m})+\sin(W)G^{2}\mathcal{E}(W,\mathbf{m})\right)\omega,\] \[\partial_{1}\widetilde{\mathcal{R}}(W,\mathbf{m})[\omega] =-\mathbf{m}\times\partial\hat{\mathcal{C}}(W,\mathbf{m})[\omega]+\mathbf{m} \times\left(\mathbf{m}\times\partial\hat{\mathcal{C}}(W,\mathbf{m})[\omega]\right).\] The linear operator \(\partial_{1}\mathcal{R}(W,\mathbf{m})\) is continuous because it is bounded. 
Indeed, simple estimates reveal: \[\left\|\partial_{1}\mathcal{R}(W,\mathbf{m})[\omega]\right\|_{C^{ \alpha/2,\alpha}(D_{T})}\leq \left(1+\left\|e^{\mathfrak{Im}(W)}\right\|_{C^{\alpha/2}([0,T])} \right)^{2}\left(1+\left\|\mathbf{g}\right\|_{C^{2+\alpha}(D)}\right)^{4}\] \[\qquad\left(1+\left\|\mathbf{M}_{0}+\mathbf{m}\right\|_{C^{\alpha/2,1+ \alpha}(D_{T})}\right)^{3}\left\|\omega\right\|_{C^{\alpha/2}([0,T])}\qquad \forall\omega\in C^{\alpha/2}([0,T]). \tag{19}\] The exponential dependence on \(\mathfrak{Im}\left(W\right)\) is justified by the following formulas: \(\forall x,y\in\mathbb{R}\) \[\sin(x+iy) =\sin(x)\cosh(y)+i\cos(x)\sinh(y)\] \[\cos(x+iy) =\cos(x)\cosh(y)-i\sin(x)\sinh(y).\] For \(\mathbf{v}\in C^{1+\alpha/2,2+\alpha}(D_{T})\), we get \[\partial_{2}\widetilde{\mathcal{R}}(W,\mathbf{m})[\mathbf{v}] =\partial_{t}\mathbf{v}-\Delta\mathbf{v}-\mathbf{v}\times\Delta\mathbf{m}-\mathbf{m} \times\Delta\mathbf{v}+2(\nabla\mathbf{v}:\nabla\mathbf{m})\mathbf{m}+\left(\nabla\mathbf{m}: \nabla\mathbf{m}\right)\mathbf{v} \tag{21}\] \[-\left(\mathbf{v}\times\hat{\mathcal{C}}(W,\mathbf{m})+\mathbf{m}\times\hat{ \mathcal{C}}(W,\mathbf{v})\right)\] (22) \[-\left(\mathbf{v}\times\left(\mathbf{m}\times\hat{\mathcal{C}}(W,\mathbf{m}) \right)+\mathbf{m}\times\left(\mathbf{v}\times\hat{\mathcal{C}}(W,\mathbf{m})+\mathbf{m} \times\hat{\mathcal{C}}(W,\mathbf{v})\right)\right), \tag{20}\] and continuity of \(\partial_{2}\mathcal{R}(W,\mathbf{m})=\partial_{2}\widetilde{\mathcal{R}}(W,\mathbf{M}_ {0}+\mathbf{m})\) follows by the same arguments used for \(\partial_{1}\mathcal{R}(W,\mathbf{m})\). Proof of iii.: Consider \(\mathbf{f}\in C^{\alpha/2,\alpha}(D_{T})\) and the problem \[\begin{cases}\partial_{2}\mathcal{R}(W_{*},\mathbf{m}_{*})[\mathbf{v}]=\mathbf{f}&\text{in }D_{T},\\ \partial_{n}\mathbf{v}=0&\text{on }[0,T]\times\partial D,\\ \mathbf{v}(0,\cdot)=\mathbf{0}&\text{on }D.\end{cases}\] With the aim of applying Lemma 6, we note that the principal part of \(\partial_{2}\mathcal{R}(W_{*},\mathbf{m}_{*})[\mathbf{v}]\) is \(-\Delta\mathbf{v}-\mathbf{m}_{*}\times\Delta\mathbf{v}\). We show that for any \((t,\mathbf{x})\in D_{T}\) and \(\mathbf{w}\in\mathbb{C}^{3}\), \[\mathfrak{Re}\left(\left\langle\mathbf{w}+\mathbf{m}_{*}(t,\mathbf{x})\times\mathbf{w},\mathbf{w} \right\rangle\right)\geq\frac{1}{2}\left\|\mathbf{w}\right\|^{2},\] where here \(\left\|\cdot\right\|\) and \(\left\langle\cdot,\cdot\right\rangle\) denote respectively the standard norm and scalar product on \(\mathbb{C}^{3}\). 
Indeed, \[\mathfrak{Re}\left(\mathbf{w}+\mathbf{m}_{*}(t,\mathbf{x})\times\mathbf{w},\mathbf{w}\right)=\left\| \mathbf{w}\right\|^{2}+\mathfrak{Re}\left(\left\langle\mathbf{m}_{*}(t,\mathbf{x})\times \mathbf{w}\right\rangle,\mathbf{w}\right)\] and algebraic manipulations lead to the identity \[\mathfrak{Re}\left(\left\langle\mathbf{m}_{*}(t,\mathbf{x})\times\mathbf{w},\mathbf{w}\right\rangle \right)=2\left\langle\mathfrak{Im}\left(\mathbf{w}\right)\times\mathfrak{Re} \left(\mathbf{w}\right),\mathfrak{Im}\left(\mathbf{m}_{*}(t,\mathbf{x})\right)\right\rangle,\] which implies the estimate \[\left|\mathfrak{Re}\left(\left\langle\mathbf{m}_{*}(t,\mathbf{x})\times\mathbf{w},\mathbf{w} \right\rangle\right)\right|\leq 2\left\|\mathfrak{Im}\left(\mathbf{m}_{*}(t,x) \right)\right\|_{L^{\infty}(D_{T})}\left\|\mathbf{w}\right\|^{2}.\] Thus, by virtue of the smallness assumption on \(\mathfrak{Im}\left(\mathbf{m}^{*}\right)\), \[\mathfrak{Re}\left(\mathbf{w}+\mathbf{m}_{*}(t,\mathbf{x})\times\mathbf{w},\mathbf{w}\right)\geq \frac{1}{2}\left\|\mathbf{w}\right\|^{2}.\] This shows that \(\partial_{2}\mathcal{R}(W_{*},\mathbf{m}_{*})\) is parabolic in the sense of Lemma 6 and hence, we obtain that \(\partial_{2}\mathcal{R}(W_{*},\mathbf{m}_{*})\) admits a continuous inverse. Together with its continuity, this implies that it is a homeomorphism. The norm of the inverse can be estimated as \[\left\|\partial_{2}\mathcal{R}(W_{*},\mathbf{m}_{*})^{-1}\left[\mathbf{f}\right]\right\| _{C^{1+\alpha/2,2+\alpha}(D_{T})}\leq C_{\mathrm{stab}}\left\|\mathbf{f}\right\| _{C^{\alpha/2,\alpha}(D_{T})}, \tag{23}\] where \(C_{\mathrm{stab}}>0\) is independent of \(\mathbf{f}\) (but depends on \(W_{*}\) and \(\mathbf{m}_{*}\)). Let us recall the implicit function theorem in the case of maps between Banach spaces (see, e.g., [23, Theorem 10.2.1]). **Theorem 9** (Implicit function).: _Let \(E,F,G\) be Banach spaces, \(A\subset E\times F\) and \(f:A\to G\) a continuously differentiable function. Let \((x_{*},y_{*})\in A\) such that \(f(x_{*},y_{*})=0\) and the partial derivative \(\partial_{2}f(x_{*},y_{*})\) be a linear homeomorphism from \(F\) onto \(G\). Then, there exists a neighborhood \(U_{*}\) of \(x_{*}\) in \(E\) such that, for every open connected neighborhood \(U\) of \(x_{*}\) in \(U_{*}\), there exists a unique continuous mapping \(u:U\to F\) such that \(u(x_{*})=y_{*}\), \((x,u(x))\in A\) and \(f(x,u(x))=0\) for any \(x\) in \(U\). Moreover, \(u\) is continuously differentiable in \(U\) and its derivative is given by_ \[u^{\prime}(x)=-\left(D_{2}f(x,u(x))\right)^{-1}\circ\left(D_{1}f(x,u(x)) \right).\] In Lemma 8, we verified all the assumptions needed to apply the implicit function theorem to the residual of the (complex) LLG residual \(\mathcal{R}\) (cf. (17) and (18)) and \(W^{*}\in C^{\alpha/2}([0,T],\mathbb{R})\), \(\mathbf{m}^{*}\in C_{0}^{1+\alpha/2,2+\alpha}(D_{T})\) such that \(\mathcal{R}(W_{*},\mathbf{m}_{*})=0\). Therefore, there exists an open connected neighbourhood \(U\) of \(W_{*}\) in \(C^{\alpha/2}([0,T];\mathbb{C})\) and a continuously differentiable map \(\mathcal{M}:U\to C^{1+\alpha/2,2+\alpha}(D_{T})\) with \(\mathcal{M}(W_{*})=\mathbf{m}_{*}\). Since complex differentiability is equivalent to holomorphy, \(\mathcal{M}\) is also holomorphic. 
Moreover, we can estimate the norm of its derivative evaluated at arbitrary \(W\in U\), i.e., \[\left\|\mathcal{M}^{\prime}(W)[\omega]\right\|_{C^{1+\alpha/2,2+\alpha}(D_{T })}\leq\left\|\partial_{2}\mathcal{R}(W,\mathcal{M}(W))^{-1}\right\|\left\| \partial_{1}\mathcal{R}(W,\mathcal{M}(W))\right\|\left\|\omega\right\|_{C^{ \alpha/2}([0,T])}\qquad\omega\in C^{\alpha/2}([0,T]).\] The first operator norm was estimated in (23). The second operator norm was estimated in (19). We finally obtain that, for any \(\omega\in C^{\alpha/2}([0,T])\), \[\left\|\mathcal{M}^{\prime}(W)[\omega]\right\|_{C^{1+\alpha/2,2+ \alpha}(D_{T})}\lesssim C_{\mathrm{stab}}\left(\left\|\mathfrak{Im}\left(W\right)\right\|_{C^{ \alpha/2}([0,T])},\left\|\mathcal{M}(W)\right\|_{C^{\alpha/2,2+\alpha}(D_{T})}\right)\] \[\left(1+\left\|e^{\mathfrak{Im}(W)}\right\|_{C^{\alpha/2}([0,T])} \right)^{2}\left(1+\left\|\mathcal{M}(W)\right\|_{C^{\alpha/2,1+\alpha}(D_{T} )}\right)^{3}\left\|\omega\right\|_{C^{\alpha/2}([0,T])}, \tag{24}\] where the hidden constant is independent of \(W\) or \(\omega\). ### Uniform holomorphic extension In the previous section we proved that for any \(W\in C^{\alpha/2}([0,T],\mathbb{R})\) there exists an open ball \(B(W,\epsilon_{W})\subset C^{\alpha/2}([0,T];\mathbb{C})\) of radius \(\epsilon_{W}>0\) where the solution operator \(\mathcal{M}\) can be extended as a holomorphic, uniformly bounded function. Let us define the set \[\mathcal{U}\coloneqq\left\{W\in C^{\alpha/2}([0,T],\mathbb{C}): \left\{\begin{array}{l}\mathcal{M}\text{ is holomorphic in a neighbourhood of }W,\\ \left\|\mathfrak{Im}\left(\mathcal{M}(\sigma W+(1-\sigma)\mathfrak{Re}\left(W \right))\right)\right\|_{L^{\infty}(D_{T})}\leq\frac{1}{4}\ \forall\sigma\in(0,1)\right\}\end{array}\right\} \tag{25}\] We already verified that \[\bigcup_{W\in C^{\alpha/2}([0,T];\mathbb{R})}B(W,\epsilon_{W})\subset\mathcal{ U}.\] Consider the holomorphic function \(\mathcal{M}:\mathcal{U}\to C^{1+\alpha/2,2+\alpha}(D_{T})\). Note that, as far as we know now, \(\mathcal{U}\) could become arbitrarily _thin_ along the space of real samples of \(W\). The goal of this section is to show that \(\mathcal{U}\) contains a uniformly large complex domain of the form: \[\mathcal{W}_{\epsilon}\coloneqq\bigcup_{W\in C^{\alpha/2}([0,T];\mathbb{R})}B (W,\epsilon), \tag{26}\] where \(\varepsilon>0\) is independent of \(W\). We can equivalently write \(\mathcal{W}_{\epsilon}=\left\{W\in C^{\alpha/2}([0,T])\,:\,\left\|\mathfrak{Im }\left(W\right)\right\|_{C^{\alpha/2}([0,T])}<\epsilon\right\}\). Moreover, we want \(\mathcal{M}\) to be uniformly bounded on this set. In contrast to [18], the domain of complex extension is not compact. Instead, we must rely on estimate (24). To that end, we will use the following nonlinear generalization of Gronwall's lemma: **Lemma 10** ([25], Theorem 27).: _Let \(u(t)\) and \(k(t)\) be positive continuous functions on \([c,d]\) and let \(a,b\) be nonnegative constants. Further, let \(g(z)\) be a positive nondecreasing function for \(z\geq 0\). If_ \[u(t)\leq a+b\int_{c}^{t}k(s)g(u(s))\mathrm{d}s,\quad t\in[c,d],\] _then_ \[u(t)\leq G^{-1}\left(G(a)+b\int_{c}^{t}k(s)\mathrm{d}s\right), \quad c\leq t\leq d_{1}\leq d\] _where_ \[G(\lambda)=\int_{\xi}^{\lambda}\frac{\mathrm{d}s}{g(s)}\quad(0< \xi<\lambda)\] _and \(d_{1}\) is defined such that_ \[G(A)+b\int_{c}^{t}k(s)\mathrm{d}s\] _belongs to the domain of \(G^{-1}\) for \(t\in[c,d_{1}]\)._ We restrict our discussion to Wiener processes with bounded imaginary part, i.e. 
\(\left\|\mathfrak{Im}\left(W\right)\right\|_{C^{\alpha/2}([0,T])}<M\) for some positive real \(M\). Then, if additionally \(\left\|\mathfrak{Im}\left(\mathcal{M}\right)(W)\right\|_{L^{\infty}(D_{T})} \leq\frac{1}{4}\), the bound (24) implies \[\left\|\mathcal{M}^{\prime}(W)\right\|\leq\mathcal{G}\left(\left\|\mathcal{M }(W)\right\|_{C^{\alpha/2,2+\alpha}(D_{T})}\right), \tag{27}\] where \(\mathcal{G}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is a positive increasing function and the operator norm is understood as in (24) (\(\mathcal{G}\) also depends on the uniform bound \(M\)). Denote \(G(s)\coloneqq\int_{0}^{s}\frac{1}{\mathcal{G}(\lambda)}\mathrm{d}\lambda\), which, by construction, is an increasing (in particular, injective) function. **Theorem 11**.: _Under the assumptions of Theorem 2, consider \(0<\epsilon\leq M\) such that_ * \(G\left(C_{r}\right)+\epsilon\) _belongs to the domain of_ \(G^{-1}\)_,_ * \(\epsilon\mathcal{G}\left(C_{\epsilon}\right)<\frac{1}{4}\)_,_ _where \(G\) was defined just above, \(C_{r}\) is from Theorem 2 and \(C_{\epsilon}\coloneqq G^{-1}\left(G(C_{r})+\epsilon\right)\). Then, \(\mathcal{M}:\mathcal{W}_{\epsilon}\to C^{1+\alpha/2,2+\alpha}(D_{T})\) is holomorphic and uniformly bounded on its domain by \(C_{\epsilon}\). \(\mathcal{M}\) extend the LLG parameter-to-solution map in the sense that \(\forall\mathbf{y}\in\mathbb{R}^{\mathbb{N}}\) such that \(W(\mathbf{y},\cdot)\in C^{\alpha/2}([0,T])\), \(\mathbf{M}_{0}+\mathcal{M}(W(\mathbf{y}))=\mathbf{m}(\mathbf{y})\)._ Proof.: For the sake of brevity, denote \(\left\|\cdot\right\|_{D_{T}}=\left\|\cdot\right\|_{C^{1+\alpha/2,2+\alpha}(D_{T})}\). We split the proof into several steps: _Step 1:_ We prove that for any \(W\in\mathcal{U}\cap\mathcal{W}_{\epsilon}\), \(\left\|\mathcal{M}(W)\right\|_{D_{T}}\leq C_{\epsilon}\). Fix \(\overline{W}\in\mathcal{U}\cap\mathcal{W}_{\epsilon}\). Since \(\mathcal{M}\) is continuously differentiable and \(\mathcal{U}\) is convex, we have \[\mathcal{M}(\overline{W})=\mathcal{M}(\mathfrak{Re}\left(\overline{W}\right))+ \int_{0}^{1}\frac{\mathrm{d}}{\mathrm{d}\sigma}\mathcal{M}\left(\sigma \mathfrak{Re}\left(\overline{W}\right)+(1-\sigma)\overline{W}\right)\mathrm{d}\sigma.\] Take the norm on both sides and estimate \[\left\|\mathcal{M}(\overline{W})\right\|_{D_{T}}\leq\left\|\mathcal{M}( \mathfrak{Re}\left(\overline{W}\right))\right\|_{D_{T}}+\int_{0}^{1}\left\| \frac{\mathrm{d}}{\mathrm{d}\sigma}\mathcal{M}(\sigma\overline{W}+(1-\sigma )\mathfrak{Re}\left(\overline{W}\right))\right\|_{D_{T}}\mathrm{d}\sigma.\] We estimate the first term with Theorem 2. As for the second term, we apply the chain rule and estimate, denoting \(W_{\sigma}=\sigma\overline{W}+(1-\sigma)\mathfrak{Re}\left(\overline{W}\right)\), \[\int_{0}^{1}\left\|\frac{\mathrm{d}}{\mathrm{d}\sigma}\mathcal{M}(W_{\sigma}) \right\|_{D_{T}}\mathrm{d}\sigma\leq\int_{0}^{1}\left\|\mathcal{M}^{\prime}(W _{\sigma})\right\|\left\|\overline{W}-\mathfrak{Re}\left(\overline{W}\right) \right\|_{C^{\alpha}([0,T])}\mathrm{d}\sigma.\] The estimate (27) applied to \(W_{\sigma}\in\mathcal{U}\) gives \[\left\|\mathcal{M}^{\prime}(W_{\sigma})\right\|\leq\mathcal{G}\left(\left\| \mathcal{M}(W_{\sigma})\right\|_{D_{T}}\right)\qquad\forall\sigma\in[0,1]. 
\tag{28}\] Finally, recalling that \(\overline{W}\in\mathcal{W}_{\epsilon}\), we get \[\left\|\mathcal{M}(\overline{W})\right\|_{D_{T}}\leq C_{r}+\epsilon\int_{0}^{ 1}\mathcal{G}\left(\left\|\mathcal{M}(W_{\sigma})\right\|_{D_{T}}\right) \mathrm{d}\sigma\] as well as \[\left\|\mathfrak{Im}\left(\mathcal{M}\right)(\overline{W})\right\|_{D_{T}} \leq\epsilon\int_{0}^{1}\mathcal{G}\left(\left\|\mathcal{M}(W_{\sigma})\right\| _{D_{T}}\right)\mathrm{d}\sigma.\] Apply the Gronwall type estimate from Lemma 10 to obtain \[\left\|\mathcal{M}(\overline{W})\right\|_{D_{T}}\leq G^{-1}\left(G(C_{r})+ \epsilon\right)=C_{\epsilon},\] as well as \[\left\|\mathfrak{Im}\left(\mathcal{M}\right)(\overline{W})\right\|_{D_{T}} \leq\epsilon\mathcal{G}\left(C_{\epsilon}\right)<\frac{1}{4}.\] _Step 2:_ We then prove that for all \(\overline{W}\in\partial\mathcal{U}\cap\mathcal{W}_{\epsilon}\), the following function is uniformly Lipschitz: \[f:[0,1)\to C^{1+\alpha/2,2+\alpha}(D_{T}),\quad f(\sigma)=\mathcal{M}( \sigma\overline{W}+(1-\sigma)\mathfrak{Re}\left(\overline{W}\right)).\] Let \(s,t\in[0,1)\) and observe that \[f(s)-f(t)=\int_{s}^{t}\frac{\mathrm{d}}{\mathrm{d}\sigma}\mathcal{M}(W_{ \sigma})\mathrm{d}\sigma.\] Estimate \[\left\|f(s)-f(t)\right\|_{D_{T}}\leq\int_{s}^{t}\left\|\mathcal{M}^{\prime}(W_ {\sigma})\right\|\left\|\overline{W}-\mathfrak{Re}\left(\overline{W}\right) \right\|_{C^{\alpha/2}(D_{T})}\mathrm{d}\sigma.\] We proceed as in Step 1 and conclude the Lipschitz continuity of \(f\). _Step 3:_ Finally, we prove that any _maximal_ set \(\mathcal{U}\) as defined in (25) must contain \(\mathcal{W}_{\epsilon}\). Assume by contradiction that \(\overline{W}\in\partial\mathcal{U}\cap\mathcal{W}_{\epsilon}\neq\emptyset\). The map \(f:[0,1)\to C^{1+\alpha/2,2+\alpha}(D_{T})\) from Step 2 is uniformly Lipschitz. In particular, \(f\) admits a continuous extension to \([0,1]\). Consequently, \(\mathcal{M}(\overline{W})\) is well-defined and equals \(\lim_{\sigma\to 1^{-}}\mathcal{M}(\sigma\overline{W}+(1-\sigma)\mathfrak{Re} \left(\overline{W}\right))\in C^{1+\alpha/2,2+\alpha}(D_{T})\). Since \(\mathcal{R}\) is continuous, \(\mathcal{R}(\overline{W},\mathcal{M}(\overline{W}))=0\) and \(\left\|\mathfrak{Im}\left(\mathcal{M}\right)(\overline{W})\right\|_{L^{\infty} (D_{T})}\leq\frac{1}{4}\). Thus, Lemma 8 shows that \[\partial_{2}\mathcal{R}(\overline{W},\mathcal{M}(\overline{W})):C_{0}^{1+ \alpha/2,2+\alpha}(D_{T})\to C^{\alpha/2,\alpha}(D_{T})\] is a homeomorphism and we find that the domain of holomorphy of \(\mathcal{M}\) can be further extended by an open set \(U(\overline{W})\not\subset\mathcal{U}\). Moreover, since \(\overline{W}\in\mathcal{W}_{\epsilon}\), the argument in Step 1 can be repeated to obtain \[\left\|\mathfrak{Im}\left(\mathcal{M}\right)(\overline{W})\right\|_{D_{T}} \leq\epsilon\mathcal{G}\left(C_{\epsilon}\right)<\frac{1}{4}.\] Thus, there exists an open neighbourhood \(\overline{W}\in\hat{U}\subset U_{\overline{W}}\) such that, for any \(W\in\hat{U}\), \(\left\|\mathfrak{Im}\left(\mathcal{M}\right)\left(W\right)\right\|_{D_{\mathcal{ T}}}<\frac{1}{4}\). This contradicts the maximality of \(\mathcal{U}\). Thus, we proved that \(\mathcal{M}\) is holomorphic on \(\mathcal{W}_{\varepsilon}\). The argument used in Step 1 immediately implies the uniform boundedness. ### Sparsity of the parameter-to-solution map and estimates of its derivatives In this section, we use the uniform holomorphy of \(\mathcal{M}\) to establish estimates of its derivatives. 
While this is a standard technique established already in [21], it turns out this will not be quite sharp enough to obtain dimension-independent convergence of the sparse grid approximation. In Section 6, we present a possible way to resolve this in the future. The LC expansion (13) can be formally extended to complex parameters \(\boldsymbol{z}\in\mathbb{C}^{\mathbb{N}}\) as \(W(\boldsymbol{z},t)\coloneqq\sum_{n\in\mathbb{N}}z_{n}\eta_{n}(t)\) for all \(t\in[0,T]\). Since \(\boldsymbol{z}\mapsto W(\boldsymbol{z},\cdot)\) is affine, the parameter-to-solution map \(\boldsymbol{z}\mapsto\boldsymbol{M}_{0}+\mathcal{M}(W(\boldsymbol{z}))\) is holomorphic for all \(\boldsymbol{z}\) such that \(W(\boldsymbol{z},\cdot)\) belongs to the domain of holomorphy of \(\mathcal{M}\), which in Theorem 11 we proved contains \(\mathcal{W}_{\epsilon}\) (cf. (26)). The domain of suitable parameters is characterized in the next lemma. To that end, define the Banach space \[\mathcal{X}(\alpha)\coloneqq\left\{\boldsymbol{z}\in\mathbb{C}^{\mathbb{N}}\,:\,\left\|\boldsymbol{z}\right\|_{\mathcal{X},\alpha}<\infty\right\},\] where the norm reads (recall the hierarchical indexing (14)) \[\left\|\boldsymbol{z}\right\|_{\mathcal{X},\alpha}:=\sum_{\ell\in\mathbb{N}}\max_{j=1,\ldots,\lceil 2^{\ell-1}\rceil}\left|z_{\ell,j}\right|2^{-(1-\alpha)\ell/2}.\] **Lemma 12**.: _Fix \(\epsilon>0\). Consider a positive sequence \(\boldsymbol{\rho}=\left(\rho_{j}\right)_{j\in\mathbb{N}}\) such that \(\left\|\boldsymbol{\rho}\right\|_{\mathcal{X},\alpha}<\frac{\epsilon}{2}\). Define_ \[\mathcal{S}_{\boldsymbol{\rho}}\coloneqq\left\{\left(z_{j}\right)_{j\in\mathbb{N}}\subset\mathbb{C}:\left|\mathfrak{Im}\left(z_{j}\right)\right|\leq\rho_{j}\,\,\forall j\in\mathbb{N}\right\}. \tag{29}\] _Then, it holds that_ \[W(\mathcal{S}_{\boldsymbol{\rho}})\subset\mathcal{W}_{\epsilon}.\] **Remark 13**.: _Note that the lemma essentially states "\(\left(\boldsymbol{b},\xi,\delta,X\right)\)-holomorphy" [26, Definition 4.1] for the stochastic LLG problem in the case of a Holder-valued parameter-to-solution map. However, this regularity is not sufficient to apply the theory in [26], as the summability coefficient is \(p=2\), which lies outside the range \((0,\frac{2}{3})\) considered in [26]. This fact is analogous to what happens in our analysis._ Proof of Lemma 12.: Fix \(\boldsymbol{z}\in\mathcal{S}_{\boldsymbol{\rho}}\) and let us prove that \(W(\boldsymbol{z})\in\mathcal{W}_{\epsilon}\). Denote \(\mathfrak{Re}\left(\boldsymbol{z}\right)=\left(\mathfrak{Re}\left(z_{n}\right)\right)_{n\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}}\). 
By linearity, \[W(\boldsymbol{z},\cdot)-W(\mathfrak{Re}\left(\boldsymbol{z}\right),\cdot)=i \sum_{n\in\mathbb{N}}\mathfrak{Im}\left(z_{n}\right)\eta_{n}(\cdot).\] Recalling the hierarchical indexing (14) and by a triangle inequality, we obtain \[\left\|W(\boldsymbol{z},\cdot)-W(\mathfrak{Re}\left(\boldsymbol{z}\right), \cdot)\right\|_{C^{\alpha/2}([0,T])}\leq\sum_{\ell\in\mathbb{N}_{0}}\left\| \sum_{j=1}^{\left[2^{\ell-1}\right]}\mathfrak{Im}\left(z_{\ell,j}\right)\eta_ {\ell,j}\right\|_{C^{\alpha/2}([0,T])}.\] The terms on the right-hand side can be estimated by Banach space interpolation and the fact that all basis functions \(\eta_{\ell,j}\) on the same level have disjoint supports, i.e., \[\left\|\sum_{j}\mathfrak{Im}\left(z_{\ell,j}\right)\eta_{\ell,j} \right\|_{C^{\alpha/2}([0,T])}\leq\left\|\sum_{j}\mathfrak{Im}\left(z_{\ell,j }\right)\eta_{\ell,j}\right\|_{C^{0}([0,T])}^{1-\alpha/2}\left\|\sum_{j} \mathfrak{Im}\left(z_{\ell,j}\right)\eta_{\ell,j}\right\|_{C^{1}([0,T])}^{ \alpha/2}\] \[\leq \big{(}\max_{j}\left|\mathfrak{Im}\left(z_{\ell,j}\right)\right| \left\|\eta_{\ell,j}\right\|_{C^{0}([0,T])}\big{)}^{1-\alpha/2}\big{(}\max_{j} \left|\mathfrak{Im}\left(z_{\ell,j}\right)\right|\left\|\eta_{\ell,j}\right\|_{C ^{0}([0,T])}+\max_{j}\left|\mathfrak{Im}\left(z_{\ell,j}\right)\right|\left| \eta_{\ell,j}\right|_{C^{1}([0,T])}\big{)}^{\alpha/2}.\] Recalling that \(\left\|\eta_{i(\ell)}\right\|_{C^{0}([0,T])}\leq 2^{-\ell/2}\) and \(\left|\eta_{i(\ell)}\right|_{C^{1}([0,T])}\leq 2^{\ell/2}\) (see Remark 5), we find \[\left\|\sum_{j}\mathfrak{Im}\left(z_{\ell,j}\right)\eta_{\ell,j}\right\|_{C^{ \alpha/2}([0,T])}\leq\max_{j}\left|\mathfrak{Im}\left(z_{\ell,j}\right) \right|(2^{-\ell/2}+2^{-(1-\alpha)\ell/2}).\] With \(\mathbf{z}\in\mathcal{S}_{\mathbf{\rho}}\), we obtain \(\left\|W(\mathbf{z},\cdot)-W(\mathfrak{Re}\left(\mathbf{z}\right),\cdot)\right\|_{C^{ \alpha/2}([0,T])}<\epsilon\), which gives the statement. In practice, we define the sequence \(\mathbf{\rho}=(\rho_{n})_{n\in\mathbb{N}}\) as \[\rho_{n}=\epsilon 2^{\frac{(1-\alpha)\left\lceil\log(n)\right\rceil}{2}} \qquad\forall n\in\mathbb{N}. \tag{30}\] In conclusion, for fixed \(\epsilon>0\), \(\mathcal{M}\circ W:\mathcal{S}_{\mathbf{\rho}}\to C^{1+\alpha/2,2+\alpha}(D_{T})\) is holomorphic because it is a composition of holomorphic functions. Moreover, \(\mathcal{M}\circ W\) is uniformly bounded by \(C_{\epsilon}\) (see Theorem 11). Consider a multi-index \(\mathbf{\nu}=(\nu_{1},\ldots,\nu_{n})\subset\mathbb{N}_{0}\) and denote by \(\partial^{\mathbf{\nu}}\) the mixed derivative \(\partial_{1}^{\nu_{1}}\ldots\partial_{n}^{\nu_{n}}\) (if \(\nu_{j}=0\), the \(j\)-th partial derivative is omitted). In the following proposition, we use Cauchy's integral theorem to estimate derivatives of the parameter-to-solution map. **Proposition 14**.: _Consider \(\mathbf{m}:\mathcal{X}(\alpha)\to C^{1+\alpha/2,2+\alpha}(D_{T})\), the parameter-to-solution map of the parametric LLG problem. Fix \(\epsilon>0\) as in Proposition (11) and let \(\mathbf{\rho}=(\rho_{n})_{n\in\mathbb{N}}\) a positive sequence such that \(\left\|\mathbf{\rho}\right\|_{\mathcal{X},\alpha}<\frac{\epsilon}{2}\). 
Then, for any \(n\in\mathbb{N}\), \(\mathbf{\nu}=\left(\nu_{i}\right)_{i=1}^{n}\subset\mathbb{N}^{n}\), it holds that_ \[\left\|\partial^{\mathbf{\nu}}\mathbf{m}(\mathbf{y})\right\|_{C^{1+\alpha/2,2+\alpha}(D_{T})}\leq\prod_{j=1}^{n}\nu_{j}!\rho_{j}^{-\nu_{j}}C_{\epsilon}\qquad\forall\mathbf{y}\in\mathcal{X}, \tag{31}\] _where \(C_{\epsilon}>0\) from Theorem 11 is independent of \(\mathbf{\nu}\) or \(\mathbf{y}\)._ Proof.: Fix \(\mathbf{y}\in\mathcal{X}\). Cauchy's integral theorem applied to \(\mathcal{M}\circ W\) (as a function of \(y_{1},\ldots,y_{n}\) only) shows \[\partial^{\mathbf{\nu}}\mathbf{m}(\mathbf{y})=\prod_{j=1}^{n}\frac{\nu_{j}!}{2\pi i}\int_{|z_{1}-y_{1}|=\rho_{1}}\cdots\int_{|z_{n}-y_{n}|=\rho_{n}}\frac{\mathcal{M}(W(\mathbf{z};\mathbf{y}))}{(z_{1}-y_{1})^{\nu_{1}+1}\ldots(z_{n}-y_{n})^{\nu_{n}+1}}\mathrm{d}\mathbf{z},\] where \(W(\mathbf{z};\mathbf{y})=W(z_{1},\cdots,z_{n},y_{n+1},y_{n+2},\ldots)\). A straightforward estimate yields \[\left\|\partial^{\mathbf{\nu}}\mathbf{m}(\mathbf{y})\right\|_{C^{1+\alpha/2,2+\alpha}(D_{T})}\leq\prod_{j=1}^{n}\nu_{j}!\rho_{j}^{-\nu_{j}}\max_{\mathbf{z}\in\mathbf{B}(\mathbf{y}_{\mathbf{\nu}},\mathbf{\rho})}\left\|\mathcal{M}(W(\mathbf{z};\mathbf{y}))\right\|_{C^{1+\alpha/2,2+\alpha}(D_{T})},\] where \(\mathbf{B}(\mathbf{y}_{\mathbf{\nu}},\mathbf{\rho})=\bigotimes_{j=1}^{n}B(y_{j},\rho_{j})\) is a tensor product of balls with centers given by \((y_{1},\ldots,y_{n})\) and radii given by \(\mathbf{\rho}\). Observe that the estimate depends on \(\mathbf{y}\) only through the last factor. However, for fixed \(\epsilon\), this is uniformly bounded from above as proved in Theorem 11. ## 6. Holomorphy of a simplified parameter-to-solution map with Lebesgue sample paths In the present section, we aim at proving stronger regularity and sparsity properties of the random LLG parameter-to-solution map. A key observation is that these properties depend on the Banach spaces chosen for the sample paths of the random coefficients (in our case, the Wiener process) and the sample paths of the solutions (in our case, the magnetizations). In this case, we show that using _Lebesgue_ spaces for the time variable is superior to using Holder spaces. Because of the nonlinear nature of the random LLG problem, the results hold only for a simplified version of the stochastic input. We make the following modelling assumptions: * The sample paths of the Wiener process \(W\) are "small". This is justified e.g. for small final times \(T\ll 1\); * The gradient \(\nabla\mathbf{g}\) is "small", meaning that the stochastic noise is spatially uniform. This is justified for small domain sizes (samples in real-world applications are often in the nano- and micrometer range). This leads to the following simplifications in the random LLG residual (defined in (17)): \[\nabla\mathbf{m}\times\nabla\mathbf{g}\approx 0,\qquad\sin(W)\approx W,\qquad 1-\cos(W)\approx\frac{W^{2}}{2}\approx 0.\] Consequently, we approximate \(\hat{\mathcal{C}}(W,\mathbf{m})\) defined in (3) with the first-order expansion \[\tilde{\mathcal{C}}(W,\mathbf{m})\coloneqq W\mathbf{m}\times\Delta\mathbf{g},\] where \(\mathbf{g}\in C^{2+\alpha}(D)\). 
This term appears in the _simplified random LLG residual_ \[\begin{split}\mathcal{R}_{s}(W,\mathbf{m})&\coloneqq\widetilde{\mathcal{R}}_{s}(W,\mathbf{M}_{0}+\mathbf{m}),\qquad\text{where}\\ \widetilde{\mathcal{R}}_{s}(W,\mathbf{v})&\coloneqq\partial_{t}\mathbf{v}-\Delta\mathbf{v}-\mathbf{v}\times\Delta\mathbf{v}+\left(\nabla\mathbf{v}:\nabla\mathbf{v}\right)\mathbf{v}-\mathbf{v}\times\tilde{\mathcal{C}}(W,\mathbf{v})+\mathbf{v}\times\left(\mathbf{v}\times\tilde{\mathcal{C}}(W,\mathbf{v})\right).\end{split} \tag{32}\] Observe that the magnetization corresponding to \(W\) is \(\mathbf{M}_{0}(\mathbf{x})+\mathbf{m}(t,\mathbf{x})\). The map \(\widetilde{\mathcal{R}}_{s}\) is understood as a function between Banach spaces: \[\widetilde{\mathcal{R}}_{s}:\mathbb{W}\times\mathbb{M}\to V, \tag{33}\] where, for \(1<q<\infty\), \[\begin{split}\mathbb{W}&\coloneqq L^{q}([0,T]), \tag{34}\\ \mathbb{M}&\coloneqq\left\{\mathbf{m}\in L^{q}([0,T],C^{2+\alpha}(D))\,:\,\partial_{t}\mathbf{m}\in L^{q}([0,T],C^{\alpha}(D)),\ \mathbf{m}(0,\cdot)=\mathbf{0}\text{ on }D,\ \partial_{n}\mathbf{m}=0\text{ on }\partial D\right\},\\ V&\coloneqq L^{q}([0,T],C^{\alpha}(D)).\end{split}\] Observe that since \(\partial_{t}\mathbf{m}\in L^{q}([0,T];C^{\alpha}(D))\) for \(q>1\), \(\left\lVert\mathbf{m}(t)\right\rVert_{C^{\alpha}(D)}\leq\left\lVert\partial_{t}\mathbf{m}\right\rVert_{L^{1}([0,T],C^{\alpha}(D))}\) for all \(t\in[0,T]\). This implies that \(\mathbf{m}\in C^{0}([0,T],C^{\alpha}(D))\) and \(\left\lVert\mathbf{m}\right\rVert_{C^{0}([0,T],C^{\alpha}(D))}\leq\left\lVert\mathbf{m}\right\rVert_{\mathbb{M}}\). In particular, interpolation shows \[\left\lVert\mathbf{m}\right\rVert_{L^{\infty}(D_{T})}+\left\lVert\mathbf{m}\right\rVert_{L^{2}(0,T;C^{1}(D))}\leq\left\lVert\mathbf{m}\right\rVert_{\mathbb{M}}.\] Note that \(\tilde{\mathcal{C}}(\cdot,\cdot)\) is bounded and linear in both arguments, i.e., \[\left\lVert\tilde{\mathcal{C}}(W,\mathbf{m})\right\rVert_{L^{q}([0,T],C^{\alpha}(D))}\leq\left\lVert W\right\rVert_{L^{q}([0,T])}\left\lVert\mathbf{m}\right\rVert_{C^{0}([0,T],C^{\alpha}(D))}\left\lVert\mathbf{g}\right\rVert_{C^{2+\alpha}(D)}\qquad\forall W\in\mathbb{W},\ \mathbf{m}\in C^{0}([0,T],C^{\alpha}(D)). \tag{35}\] For some \(\theta>0\), we consider the solution map \(\mathcal{M}\) on the set \(\mathbb{W}_{\theta}:=\left\{W\in\mathbb{W}\cap\mathbb{R}^{[0,T]}\,:\,\left\lVert W\right\rVert_{L^{\infty}([0,T])}\leq\theta\right\}\) of "small" real-valued Wiener processes. The goal of the following sections is to construct a uniform (with respect to \(L^{q}\)) holomorphic extension. ### Holomorphic extension to a neighborhood of a real Wiener process Analogously to Lemma 8, we get the following result: **Lemma 15**.: _Consider the function \(\mathcal{R}_{s}\) defined above by (32) and (33) with coefficients \(\alpha\in(0,1)\), \(\mathbf{g}\in C^{2+\alpha}(D)\), \(\mathbf{M}_{0}\in C^{2+\alpha}(D)\)._ 1. \(\mathcal{R}_{s}(W,\mathbf{m})\) _is a well-defined, continuous function;_ 2. \(\mathcal{R}_{s}(W,\mathbf{m})\) _is continuously differentiable;_ 3. _Let_ \(W^{*}\in\mathbb{W}\) _and_ \(\mathbf{m}^{*}\in\mathbb{M}\) _such that_ \(\left\lVert\mathfrak{Im}\left(\mathbf{m}^{*}\right)\right\rVert_{L^{\infty}(D_{T})}\leq\frac{1}{4}\)_. 
Then, the differential_ \(\partial_{2}\mathcal{R}_{s}(W^{*},\mathbf{m}^{*}):\mathbb{M}\to V\) _is a homeomorphism with_ \(\left\lVert\partial_{2}\mathcal{R}_{s}(W^{*},\mathbf{m}^{*})^{-1}\right\rVert\leq C_{\mathrm{stab}}(\left\lVert W^{*}\right\rVert_{\mathbb{W}},\left\lVert\mathbf{m}^{*}\right\rVert_{\mathbb{M}})\)_._ **Remark 16**.: _The proof of iii. requires the use of an \(L^{q}\)-regularity result for the linear and elliptic parabolic problem given by the operator \(\partial_{2}\mathcal{R}_{s}(W^{*},\mathbf{m}^{*})\), which coincides with (20) but with \(\hat{\mathcal{C}}\) replaced by \(\tilde{\mathcal{C}}\). For scalar problems, this can be found in [51, Section 4]. Strictly speaking, however, Lemma 15 only holds under the assumption that [51] can be generalized to the vector-valued case._ Consider \(W_{*}\in\mathbb{W}_{\theta}\) and \(\mathbf{m}_{*}\in\mathbb{M}\) taking values respectively in \(\mathbb{R}\) and \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\) and such that \(\mathcal{R}_{s}(W_{*},\mathbf{m}_{*})=0\). By the implicit function theorem (see Theorem 9), there exists an open connected neighborhood \(U\) of \(W_{*}\) in \(\mathbb{W}\) (in particular, \(U\) is a set of complex-valued functions) and a holomorphic map \(\mathcal{M}:U\to\mathbb{M}\) such that \(\mathcal{M}(W_{*})=\mathbf{m}_{*}\). As for the differential of \(\mathcal{M}\), we have \[\mathcal{M}^{\prime}(W)=-\partial_{2}\mathcal{R}_{s}(W,\mathcal{M}(W))^{-1}\circ\partial_{1}\mathcal{R}_{s}(W,\mathcal{M}(W)).\] The proof of _ii._ in the previous lemma gives \[\left\lVert\partial_{1}\mathcal{R}_{s}(W,\mathcal{M}(W))\right\rVert\lesssim\left\lVert\mathbf{g}\right\rVert_{C^{2+\alpha}(D)}\left(1+\left\lVert\mathbf{M}_{0}+\mathcal{M}(W)\right\rVert_{\mathbb{M}}\right)^{3}\qquad\forall W\in\mathbb{W}.\] All in all, we have \[\left\lVert\mathcal{M}^{\prime}(W)\right\rVert\leq C_{\mathrm{stab}}\left(\left\lVert W\right\rVert_{\mathbb{W}},\left\lVert\mathcal{M}(W)\right\rVert_{\mathbb{M}}\right)\left\lVert\mathbf{g}\right\rVert_{C^{2+\alpha}(D)}\left(1+\left\lVert\mathbf{M}_{0}+\mathcal{M}(W)\right\rVert_{\mathbb{M}}\right)^{3}\qquad\forall W\in\mathbb{W}. \tag{36}\] ### Uniform holomorphic extension Consider \[\mathcal{U}\coloneqq\left\{W:[0,T]\to\mathbb{C}\,:\,\begin{array}{l}\mathcal{M}\text{ is holomorphic in a neighbourhood of }W,\\ \left\|\mathfrak{Im}\left(\mathcal{M}\right)(\sigma W+(1-\sigma)\mathfrak{Re}\left(W\right))\right\|_{L^{\infty}(D_{T})}<\frac{1}{4}\quad\forall\sigma\in[0,1]\end{array}\right\}.\] Observe that this set differs from the set \(\mathcal{U}\) defined in Section 5.2 because of the different domains and codomains of the solution operator \(\mathcal{M}\). In the previous section we verified that \[\bigcup_{W\in\mathbb{W}_{\theta}}B(W,\epsilon_{W})\subset\mathcal{U},\] where \(\epsilon_{W}>0\) depends on \(W\) (see Section 5) and the ball is understood with respect to the \(L^{q}\)-norm. As in Section 5, we bootstrap this result to get a holomorphic extension on \[\mathcal{W}_{\epsilon}\coloneqq\left\{W\in\mathbb{W}\,:\,\|W\|_{\mathbb{W}}<\epsilon\right\}. \tag{37}\] Compared to the set \(\mathcal{W}_{\epsilon}\) (cf. (26)) defined in the setting with Holder regular sample paths, here we additionally impose a constraint on the real part of the coefficients. The proof of Theorem 2 can be transferred to this simplified version of LLG and hence we have \[\left\|\mathcal{M}(W)\right\|_{\mathbb{M}}\leq\overline{C}_{r}\qquad\forall W\in\mathbb{W}_{\theta}. 
\tag{38}\] If \(W\in\mathcal{W}_{\epsilon}\cap\mathcal{U}\), the estimate (36) on the operator norm of \(\mathcal{M}^{\prime}\) can be written as \[\|\mathcal{M}^{\prime}(W)\|\leq\mathcal{G}(\|\mathcal{M}(W)\|_{\mathbb{M}}),\] where the function \(\mathcal{G}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is increasing and depends on \(\epsilon\). The following theorem can be proved like Theorem 11 with minimal modifications. **Theorem 17**.: _Consider \(\epsilon>0\) such that_ * \(G(\overline{C}_{r})+\epsilon\) _belong to the domain of_ \(G^{-1}\)_,_ * \(\epsilon\mathcal{G}(\overline{C}_{\epsilon})<\frac{1}{4}\)_,_ _where \(\overline{C}_{r}\) was defined in (38), \(G(s)=\int_{\xi}^{s}\frac{ds}{\mathcal{G}(s)}\) for a fixed \(\xi>0\) and \(\overline{C}_{\epsilon}\coloneqq G^{-1}(G(\overline{C}_{r})+\epsilon)\)._ _Then, \(\mathcal{M}:\mathcal{W}_{\epsilon}\to\mathbb{M}\) is holomorphic and uniformly bounded by \(\overline{C}_{\epsilon}\). \(\mathcal{M}\) extends the random LLG parameter-to-solution map in the sense that for any \(\boldsymbol{y}\in\mathbb{R}^{\mathbb{N}}\) such that \(W(\boldsymbol{y})\in\mathbb{W}\), \(\boldsymbol{M}_{0}+\mathcal{M}(W(\boldsymbol{y}))=\boldsymbol{m}(\boldsymbol {y})\)._ ### Sparsity of the parameter-to-solution map and estimate of its derivatives In this section, we show how the result from the previous section leads to estimates of the derivatives of the parameter-to-solution map \(\boldsymbol{m}(\boldsymbol{y})=\boldsymbol{M}_{0}+\mathcal{M}(W(\boldsymbol {y},\cdot))\). Contrary to Section 5.3, the domain of holomorphy of the parameter-to-solution map now _depends on which mixed derivative \(\partial^{\boldsymbol{\nu}}\) is considered_. **Definition 18**.: _Given a multi-index \(\boldsymbol{\nu}=(\nu_{1},\dots,\nu_{n})\subset\mathbb{N}_{0}\), \(0<\delta<\frac{1}{2}\) and \(0<\gamma<1\) consider a sequence of positive numbers \(\boldsymbol{\rho}=\boldsymbol{\rho}(\boldsymbol{\nu},\delta,\gamma)\) defined as follows:_ \[\rho_{t,j}\coloneqq\gamma\begin{cases}1&\text{if }\nu_{\ell,j}=0\\ 2^{\left(\frac{3}{2}-\delta\right)\ell}\frac{1}{r_{\ell}(\boldsymbol{\nu})}& \text{if }\nu_{\ell,j}=1\\ 2^{\left(\frac{1}{2}-\delta\right)\ell}&\text{otherwise}\end{cases}\quad\forall \ell\in\mathbb{N}_{0},j=1,\dots,\lceil 2^{\ell-1}\rceil, \tag{39}\] _where we used the hierarchical indexing (14) and \(r_{\ell}(\boldsymbol{\nu})\coloneqq\#\left\{j\in 1,\dots,\lceil 2^{\ell-1} \rceil:\nu_{\ell,j}=1\right\}\). Define then the set_ \[\mathcal{S}(\boldsymbol{\nu},\delta,\gamma)\coloneqq\left\{\boldsymbol{z}= \left(z_{\ell,j}\right)_{\ell,j}\in\mathbb{C}^{\mathbb{N}}:|z_{\ell,j}|<\rho_{ \ell,j}\quad\forall\ell\in\mathbb{N}_{0},j=1,\dots,\lceil 2^{\ell-1}\rceil \right\}.\] Recall the definition of \(\mathcal{W}_{\epsilon}\) given in (37). In particular, the definition depends on the parameters \(\epsilon>0\) and on \(0<q<\infty\) through the definition of the space \(\mathbb{W}\) (cf. (34)). 
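For orientation, the radii (39) of Definition 18 are straightforward to tabulate level by level. The following short sketch (Python, with hypothetical function and variable names; an illustration, not part of the analysis) evaluates \(\boldsymbol{\rho}(\boldsymbol{\nu},\delta,\gamma)\) from the hierarchical indexing (14).

```python
import numpy as np

def hierarchical_radii(nu_by_level, delta=0.25, gamma=0.5):
    """Radii rho_{l,j} of Definition 18 (cf. (39)).

    `nu_by_level[l]` holds the entries (nu_{l,1}, ..., nu_{l,ceil(2^{l-1})}) of the
    multi-index on level l; `delta` in (0, 1/2) and `gamma` in (0, 1) are the
    parameters of the definition.  Returns one array of radii per level."""
    rho = []
    for ell, nu_level in enumerate(nu_by_level):
        nu_level = np.asarray(nu_level)
        r_ell = np.count_nonzero(nu_level == 1)            # r_l(nu) in (39)
        rho_level = np.ones(nu_level.shape)                # entries with nu_{l,j} = 0
        rho_level[nu_level == 1] = 2.0 ** ((1.5 - delta) * ell) / max(r_ell, 1)
        rho_level[nu_level >= 2] = 2.0 ** ((0.5 - delta) * ell)
        rho.append(gamma * rho_level)
    return rho

# Example: a multi-index supported on levels 0, 1 and 2 (of sizes 1, 1 and 2).
print(hierarchical_radii([[1], [0], [2, 1]]))
```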
**Lemma 19**.: _For any \(\epsilon>0\), \(\boldsymbol{\nu}=(\nu_{1},\dots,\nu_{n})\subset\mathbb{N}_{0}\) and \(0<\delta<\frac{1}{2}\), there exists \(q>1\) and \(\gamma>0\) such that_ \[\boldsymbol{z}\in\mathcal{S}(\boldsymbol{\nu},\delta,\gamma)\ \Rightarrow\ W( \boldsymbol{z},\cdot)\in\mathcal{W}_{\epsilon}\] Proof.: A triangle inequality yields \[\left\|W(\mathbf{z})\right\|_{L^{q}([0,T])}=\left\|\sum_{\ell\in\mathbb{N}_{0}}\sum_{ j=1,\ldots,[2^{\ell-1}]}z_{\ell,j}\eta_{\ell,j}\right\|_{L^{q}([0,T])}\leq\sum_{ \ell\in\mathbb{N}_{0}}\sum_{j=1,\ldots,[2^{\ell-1}]}\left|z_{\ell,j}\right| \left\|\eta_{\ell,j}\right\|_{L^{q}([0,T])}.\] For the Faber-Shauder basis functions (cf. (12)), \(\left\|\eta_{\ell,j}\right\|_{L^{q}([0,T])}\leq 2^{-(1/q+1/2)\ell}\). Together with the fact that \(\mathbf{z}\in\mathcal{S}(\mathbf{\nu},\delta,\gamma)\), it gives \[\left\|W(\mathbf{z})\right\|_{L^{q}([0,T])}\leq\sum_{\ell\in\mathbb{N}_{0}}2^{-(1/ q+1/2)\ell}\sum_{j=1}^{[2^{\ell-1}]}\rho_{\ell,j}.\] By the definition of \(\mathbf{\rho}\), we may write \[\sum_{\ell\in\mathbb{N}_{0}}2^{-(1/q+1/2)\ell}\sum_{j=1}^{[2^{\ell-1}]}\rho_{ \ell,j}=\gamma\sum_{\ell}2^{-(1/q+1/2)\ell}\left(\#\left\{i:\nu_{\ell,i}=0 \right\}+2^{\left(\frac{3}{2}-\delta\right)\ell}\frac{1}{r_{\ell}(\mathbf{\nu})}r _{\ell}(\mathbf{\nu})+2^{\left(\frac{1}{2}-\delta\right)\ell}\#\left\{i:\nu_{\ell,i}>1\right\}\right).\] Trivially, \(\#\left\{i:\nu_{\ell,i}=0\right\}<2^{\ell}\) and \(\#\left\{i:\nu_{\ell,i}>1\right\}<2^{\ell}\). Finally, we get \[\sum_{\ell\in\mathbb{N}_{0}}2^{-(1/q+1/2)\ell}\left\|\mathbf{\rho}_{\ell}\right\| _{\ell^{1}}\lesssim\gamma\sum_{\ell\in\mathbb{N}_{0}}\left(2^{-(1/q-1/2)\ell} +2^{-\delta\ell/2}+2^{-\delta\ell/2}\right)<\infty,\] as long as \(1<q<\frac{1}{1-\delta/2}\). We introduce the set of parameters \(\mathcal{S}_{\theta}:=\left\{\mathbf{y}\in\mathbb{R}^{\mathbb{N}}\,:\,W(\mathbf{y})\in \mathbb{W}_{\theta}\right\}\) leading to "small" Wiener processes. Application of Cauchy's integral theorem leads, as in Proposition 14, to the following result. **Proposition 20**.: _Fix \(\epsilon>0\) as in Theorem 17 and \(0<\delta<\frac{1}{2}\), \(\gamma>0\) as in Lemma 19. Fix a multi-index \(\mathbf{\nu}=(\nu_{1},\ldots,\nu_{n})\subset\mathbb{N}\) and define the sequence \(\mathbf{\rho}\) as in Definition 18. Then, it holds that_ \[\left\|\partial^{\mathbf{\nu}}\mathbf{m}(\mathbf{y})\right\|_{\mathbb{M}}\leq\mathbf{\nu}!\prod _{\ell\in\mathbb{N}_{0}}\prod_{j=1}^{[2^{\ell-1}]}\rho_{\ell,j}^{-\nu_{\ell,j} }\overline{C}_{\epsilon}\qquad\forall\mathbf{y}\in\mathcal{S}_{\theta}, \tag{40}\] _where we used the hierarchical indexing of sequences (14) and \(\overline{C}_{\epsilon}\) (which depends on \(\epsilon\)) defined in Theorem 17._ ## 7. Sparse grid approximation of the parameter-to-solution map We briefly recall the sparse grid construction. A more complete discussion can be found e.g. in [15] or [47]. Consider the nodes family \(\mathcal{Y}^{m}=(y_{i}^{m})_{i=1}^{m}\subset\mathbb{R}\) for any \(m\in\mathbb{N}\) such that \(y_{1}^{1}=0\) (the reason for this requirement will be clarified below). We shall write \(y_{i}\) rather than \(y_{i}^{m}\) when the context makes it unambiguous. Let \(I_{m}:C^{0}(\mathbb{R})\to P_{m}\) denote an _interpolation operator over \(\mathcal{Y}^{m}\)_ into a suitable \(m\) dimensional space \(P_{m}\), i.e., \(I_{m}[u](y)=u(y)\) for all \(y\in\mathcal{Y}^{m}\) and any \(u\in C^{0}(\mathbb{R})\). Consider a _level-to-knot function_\(m:\mathbb{N}_{0}\to\mathbb{N}\), i.e. 
a strictly increasing function such that \(m(0)=1\). For any \(i\in\mathbb{N}_{0}\), the _detail operator_ is by definition \[\Delta_{i}:C^{0}(\mathbb{R})\to P_{m(i)}\] \[\Delta_{i}u=I_{m(i)}u-I_{m(i-1)}u\qquad\forall u\in C^{0}(\mathbb{ R}),\] where \(I_{m(-1)}\equiv 0\) so that \(\Delta^{0}u=I_{1}u\equiv u(0)\), the constant interpolant in one point. In order to discuss interpolation schemes in more than one dimensions, denote by \(\mathcal{F}\) the set of integer-valued sequences (also called multi-indices) with finite support, i.e. \(\mathbf{\nu}\in\mathcal{F}\) if and only if \(\operatorname{supp}(\mathbf{\nu})\coloneqq\left\{i\in\mathbb{N}\,:\,\nu_{i}\neq 0\right\}\) is finite. Given \(\mathbf{\nu}\in\mathcal{F}\), the corresponding _hierarchical surplus_ operator is by definition: \[\Delta_{\mathbf{\nu}}\coloneqq\bigotimes_{n\in\operatorname{supp}(\mathbf{\nu})}\Delta _{\nu_{n}},\] where the tensor product is effectively finite because \(\mathbf{\nu}\in\mathcal{F}\) and \(\Delta_{0}u\equiv u(0)\). The hierarchical surplus operator can be applied to any function \(u\in C^{0}(\mathbb{R}^{\mathbb{N}})\), i.e. a continuous function defined on the space of real valued sequences, by considering only the components of the independent variable in \(\operatorname{supp}\left(\boldsymbol{\nu}\right)\) and fixing the remaining ones to \(0\). Clearly, \(\Delta_{\boldsymbol{\nu}}u\in P_{\boldsymbol{\nu}}\coloneqq\bigotimes_{n\in \operatorname{supp}(\boldsymbol{\nu})}P_{m(\nu_{n})}\). Consider now \(\Lambda\subset\mathcal{F}\)_downward-closed_, i.e. \[\boldsymbol{\nu}\in\Lambda\ \Rightarrow\ \boldsymbol{\nu}-\boldsymbol{e}_{n} \in\Lambda\quad\forall n\in\operatorname{supp}(\boldsymbol{\nu}),\] where \(\boldsymbol{e}_{n}\) is the \(n\)-th coordinate unit vector. The _sparse grid interpolant_ is by definition \[\mathcal{I}_{\Lambda}\coloneqq\sum_{\boldsymbol{\nu}\in\Lambda}\Delta_{ \boldsymbol{\nu}}:C^{0}(\mathbb{R}^{\mathbb{N}})\to P_{\Lambda} \tag{41}\] where \(P_{\Lambda}\coloneqq\bigoplus_{\boldsymbol{\nu}\in\Lambda}P_{\boldsymbol{\nu}}\). It can be proved that there exists a finite set \(\mathcal{H}_{\Lambda}\subset\mathbb{R}^{\mathbb{N}}\), the _sparse grid_, such that for any \(u\in C^{0}(\mathbb{R}^{\mathbb{N}})\) \[\mathcal{I}_{\Lambda}u(\boldsymbol{y})=u(\boldsymbol{y})\qquad\forall \boldsymbol{y}\in\mathcal{H}_{\Lambda}\] and \(\mathcal{I}_{\Lambda}u\) is the unique element of \(P_{\Lambda}\) with this property. Equivalently, there exists a _Lagrange basis_\(\left(L_{\boldsymbol{y}}\right)_{\boldsymbol{y}\in\mathcal{H}}\) of \(P_{\Lambda}\), i.e. \(L_{\boldsymbol{y}}(\boldsymbol{z})=\delta_{\boldsymbol{y},\boldsymbol{z}}\) for any \(\boldsymbol{y},\boldsymbol{z}\in\mathcal{H}_{\Lambda}\). 
As a consequence, for any \(u\in C^{0}(\mathbb{R}^{\mathbb{N}})\) \[\mathcal{I}_{\Lambda}u(\boldsymbol{z})=\sum_{\boldsymbol{y}\in\mathcal{H}_{ \Lambda}}u(\boldsymbol{y})L_{\boldsymbol{y}}(\boldsymbol{z})\qquad\forall \boldsymbol{z}\in\mathbb{R}^{\mathbb{N}}.\] An important question is the one of (quasi) optimal approximation: Given that \(u\in C^{0}(\mathbb{R}^{\mathbb{N}})\) belongs to a given function class and given a _computational budget_\(Q\in\mathbb{N}\), we look for \(\Lambda\subset\mathcal{F}\) with \(\#\mathcal{H}_{\Lambda}\leq Q\) such that \[\|u-\mathcal{I}_{\Lambda}u\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{N}})}\lesssim \min\left\{\|u-\mathcal{I}_{\tilde{\Lambda}}u\|_{L^{2}_{\mu}(\mathbb{R}^{N})} \,:\,\tilde{\Lambda}\subset\mathcal{F}\text{ downward closed such that }\#\mathcal{H}_{\tilde{\Lambda}}\leq Q\right\}, \tag{42}\] where the hidden constant is independent of \(\Lambda\). Following [46], we reformulate the problem of selecting \(\Lambda\) as a (linear programming relaxation of a) _knapsack problem_. For a generic multi-index \(\boldsymbol{\nu}\in\mathcal{F}\), consider a _value_\(v_{\boldsymbol{\nu}}\geq 0\) and _work_\(w_{\boldsymbol{\nu}}>0\) that satisfy \[\|\Delta^{\boldsymbol{\nu}}u\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{ N}})} \lesssim v_{\boldsymbol{\nu}}\qquad\forall\boldsymbol{\nu}\in \mathcal{F},\] \[\#\mathcal{H}_{\Lambda} \lesssim\sum_{\boldsymbol{\nu}\in\Lambda}w_{\boldsymbol{\nu}} \qquad\forall\Lambda\subset\mathcal{F}\text{ downward-closed}.\] The optimal multi-index set selection problem (42) is substituted by the following _knapsack problem_: \[\max\left\{\sum_{\boldsymbol{\nu}\in\Lambda}v_{\boldsymbol{\nu}}\ :\ \Lambda \subset\mathcal{F}\text{ downward-closed and }\sum_{\boldsymbol{\nu}\in\Lambda}w_{ \boldsymbol{\nu}}\leq Q\right\}.\] The solution to (the linear relaxation of) this problem can be found as follows. Define the _profit_ \[\mathcal{P}_{\boldsymbol{\nu}}\coloneqq\frac{v_{\boldsymbol{\nu}}}{w_{ \boldsymbol{\nu}}}\qquad\forall\boldsymbol{\nu}\in\mathcal{F}.\] Induce a partial ordering on \(\mathcal{F}\) as: \[\boldsymbol{\nu}_{1}\preceq\boldsymbol{\nu}_{2}\Leftrightarrow\mathcal{P}_{ \boldsymbol{\nu}_{1}}\leq\mathcal{P}_{\boldsymbol{\nu}_{2}}\qquad\forall \boldsymbol{\nu}_{1},\boldsymbol{\nu}_{2}\in\mathcal{F}\] and sort its elements accordingly as \(\boldsymbol{\nu}_{1},\boldsymbol{\nu}_{2},\dots\) In case \(\mathcal{P}_{\boldsymbol{\nu}_{1}}=\mathcal{P}_{\boldsymbol{\nu}_{2}}\) sort the two multi-indices in lexicographic order. Define for any \(n\in\mathbb{N}\) the _\(n\)-elements quasi-optimal multi-index set_ \[\Lambda_{n}\coloneqq\left\{\boldsymbol{\nu}_{1},\dots,\boldsymbol{\nu}_{n} \right\}\subset\mathcal{F}. \tag{43}\] We assume that value and work are monotone, i.e. \(v_{\boldsymbol{\nu}+\boldsymbol{e}_{n}}\leq v_{\boldsymbol{\nu}}\), \(w_{\boldsymbol{\nu}+\boldsymbol{e}_{n}}\geq w_{\boldsymbol{\nu}}\) for all \(\boldsymbol{\nu}\in\mathcal{F}\), \(n\in\mathbb{N}\). The convergence rate of the corresponding sparse grid interpolation is estimated as follows. **Theorem 21**.: _[_46_, Theorem 1]_ _If there exists \(\tau\in(0,1]\) such that_ \[C_{\tau}\coloneqq\left(\sum_{\boldsymbol{\nu}\in\mathcal{F}}\mathcal{P}_{ \boldsymbol{\nu}}^{\tau}w_{\boldsymbol{\nu}}\right)^{1/\tau}<\infty,\] _then_ \[\|u-\mathcal{I}_{\Lambda_{n}}u\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{N}})}\leq C_{ \tau}\#\mathcal{H}_{\Lambda_{n}}^{1-1/\tau}.\] In the next two sections we discuss the sparse grid methods defined using piecewise polynomial interpolation. 
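Before turning to the concrete interpolation operators, the following sketch (Python, with hypothetical names; a minimal illustration under a finite truncation of the parameter dimension, not the implementation used later) shows one way to build the quasi-optimal set \(\Lambda_{n}\) of (43): under the monotonicity assumption on values and works stated above, repeatedly adding the downward-admissible multi-index of largest profit matches the profit-sorted selection up to tie-breaking.

```python
import heapq

def quasi_optimal_set(profit, dims, n):
    """Greedy construction of the n-element quasi-optimal set Lambda_n of (43).

    `profit(nu)` returns the profit P_nu of a multi-index nu (a tuple of length
    `dims`); `dims` is a finite truncation of the parameter dimension.  The set
    stays downward-closed because an index is only inserted once all its backward
    neighbours are selected; ties are broken lexicographically via the tuple."""
    zero = (0,) * dims
    selected = set()
    heap = [(-profit(zero), zero)]          # max-heap via negated profits
    while heap and len(selected) < n:
        neg_p, nu = heapq.heappop(heap)
        if nu in selected:
            continue
        selected.add(nu)
        for k in range(dims):               # consider the forward neighbours nu + e_k
            mu = tuple(nu[j] + (j == k) for j in range(dims))
            admissible = all(
                tuple(mu[j] - (j == i) for j in range(dims)) in selected
                for i in range(dims) if mu[i] > 0
            )
            if admissible:
                heapq.heappush(heap, (-profit(mu), mu))
    return selected

# Example with a toy profit that decays geometrically in |nu|:
print(sorted(quasi_optimal_set(lambda nu: 2.0 ** (-sum(nu)), dims=3, n=6)))
```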
### 1D piecewise polynomial interpolation on \(\mathbb{R}\)

Let \(\mu(x;\sigma^{2})=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-x^{2}/2\sigma^{2}}\) denote the normal density with mean \(0\) and variance \(\sigma^{2}>0\). Let \(\mu(x)=\mu(x;1)\) and \(\tilde{\mu}(x)=\mu(x;\sigma^{2})\) for some fixed \(\sigma^{2}>1\). Consider the error function \(\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}\mathrm{d}t\). For \(m\in\mathbb{N}\) odd, define \(\mathcal{Y}^{m}\coloneqq\{y_{1},\ldots,y_{m}\}\subset\mathbb{R}\) by
\[y_{i}=\phi\left(-1+\frac{2i}{m+1}\right)\qquad i=1,\ldots,m, \tag{44}\]
where
\[\phi(x)\coloneqq\alpha\operatorname{erf}^{-1}(x)\qquad\forall x\in(-1,1), \tag{45}\]
and
\[\alpha=\alpha(p,\sigma^{2})\coloneqq\sqrt{\frac{4p}{1-\frac{1}{\sigma^{2}}}}. \tag{46}\]
The \(m\) nodes define \(m+1\) symmetric intervals (the first and last of which are unbounded). See Figure 2. We define a 1D piecewise polynomial interpolation operator as follows. When \(m=1\), let for any \(u\in C^{0}(\mathbb{R})\) and any \(p\geq 2\)
\[I_{1}^{p}[u]=I_{1}[u]\equiv u(0),\]
i.e. the constant interpolation. When \(m\geq 3\), \(I_{m}^{p}[u]\) is the piecewise polynomial function of degree \(p-1\) over the intervals defined by \(\mathcal{Y}^{m}\). More precisely, for any \(u\in C^{0}(\mathbb{R})\),
\[I_{m}^{p}[u](y_{i})=u(y_{i})\qquad\forall i=1,\ldots,m,\]
\[I_{m}^{p}[u]|_{[y_{i},y_{i+1}]}\in\mathbb{P}_{p-1}\qquad\forall i=1,\ldots,m-1,\]
\[I_{m}^{p}[u](y)\text{ polynomial extension of }I_{m}^{p}[u]|_{[y_{1},y_{2}]}\text{ if }y\leq y_{1},\]
\[I_{m}^{p}[u](y)\text{ polynomial extension of }I_{m}^{p}[u]|_{[y_{m-1},y_{m}]}\text{ if }y\geq y_{m}.\]
We assume additionally that for each \(i=1,\ldots,m-1\), the interval \((y_{i},y_{i+1})\) contains \(p-2\) additional distinct interpolation nodes so that \(I_{m}^{p}[u]\) is uniquely defined. The function \(\phi\) is such that \(\left(\phi^{\prime}(x)\right)^{2p}\tilde{\mu}^{-1}(\phi(x))\mu(\phi(x))\) is constant and equals
\[C_{\phi}=\sqrt{\sigma^{2}}\left(\frac{\alpha\sqrt{\pi}}{2}\right)^{2p}, \tag{47}\]
where \(\alpha\) was defined in (46). The following result is a standard interpolation error estimate on weighted spaces which, in this precise form, we could not find in the literature.

Figure 2. Examples of nodes (44) for \(p=2\) on \(\mathbb{R}\). It can be seen that the nodes span a wider and wider portion of the real line and, at the same time, become denser. If the number of nodes is suitably increased (for example using (48)), the family of nodes is nested.

**Lemma 22**.: _Consider \(u:\mathbb{R}\to\mathbb{R}\) with \(\partial u\in L^{2}_{\tilde{\mu}}(\mathbb{R})\). Then,_
\[\|u-I_{1}[u]\|_{L^{2}_{\mu}(\mathbb{R})}\leq\tilde{C}_{1}\left\|\partial u\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})},\]
_where \(\tilde{C}_{1}=\sqrt{\int_{\mathbb{R}}\left|y\right|\tilde{\mu}^{-1}(y)\mathrm{d}\mu(y)}\). If additionally \(\partial^{p}u\in L^{2}_{\tilde{\mu}}(\mathbb{R})\) for \(p\geq 2\), then_
\[\|u-I^{p}_{m}[u]\|_{L^{2}_{\mu}(\mathbb{R})}\leq\tilde{C}_{2}(m+1)^{-p}\frac{\left\|\partial^{p}u\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})}}{p!}\qquad\forall m\geq 3\text{ odd},\]
_where \(\tilde{C}_{2}=\sqrt{C_{\phi}\frac{p}{2}\left(m-1+2^{2p+1}\right)}\) and \(C_{\phi}\) was defined in (47)._

Proof.: For the first estimate, apply the fundamental theorem of calculus and Cauchy-Schwarz to estimate
\[u(y)-u(0)=\int_{0}^{y}\partial u\leq\left\|\partial u\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})}\sqrt{\int_{0}^{y}\tilde{\mu}^{-1}}.\]
Substitute this in \(\|u-u(0)\|_{L^{2}_{\mu}(\mathbb{R})}\) to obtain the first estimate.
For the second estimate, let \(i\in\{2,\ldots,m-2\}\). Apply the fundamental theorem of calculus \(p\) times and recall that \(I^{p}_{m}[u]\in\mathbb{P}_{p-1}([y_{i},y_{i+1}])\) to obtain \[\left(u-I^{p}_{m}[u]\right)(y)=\int_{y_{i}}^{y}\int_{\xi_{1}}^{z_{1}}\cdots \int_{\xi_{p-1}}^{z_{p-1}}\partial^{p}u,\] where \(\xi_{i}\in[y_{i},y_{i+1}]\) is such that \(\partial^{i}(u-I^{p}_{m}[u])(\xi_{i})=0\). Cauchy-Schwarz gives \[\int_{\xi_{p-1}}^{z_{p-1}}\partial^{p}u\leq\left\|\partial^{p}u\right\|_{L^{2} _{\mu}([y_{i},y_{i+1}])}\sqrt{\int_{y_{i}}^{y_{i+1}}\tilde{\mu}^{-1}}.\] Simple estimates give \[\left(u-I^{p}_{m}[u]\right)(y) \leq\left\|\partial^{p}u\right\|_{L^{2}_{\mu}([y_{i},y_{i+1}])} \int_{y_{i}}^{y}\int_{y_{i}}^{z_{1}}\cdots\int_{y_{i}}^{z_{p-2}}\sqrt{\int_{y _{i}}^{y_{i+1}}\tilde{\mu}^{-1}}\] \[\leq\left\|\partial^{p}u\right\|_{L^{2}_{\mu}([y_{i},y_{i+1}])} \tilde{\mu}^{-1/2}(y)\int_{y_{i}}^{z_{1}}\cdots\int_{y_{i}}^{z_{p-2}}\sqrt{ \left|z_{p-1}-y_{i}\right|}\] \[\leq\left\|\partial^{p}u\right\|_{L^{2}_{\mu}([y_{i},y_{i+1}])} \tilde{\mu}^{-1/2}(y)\frac{\left|y-y_{i}\right|^{p-1+\frac{1}{2}}}{(p-1)!}.\] Consider now only \(i\in\left\{\frac{m+1}{2},\ldots,m-2\right\}\). \[\int_{y_{i}}^{y_{i+1}}\left|u-I^{p}_{m}[u]\right|^{2}(y)\mathrm{d}\mu(y)\leq \frac{\left\|\partial^{p}u\right\|_{L^{2}_{\mu}([y_{i},y_{i+1}])}^{2}}{(p-1)! }\int_{y_{i}}^{y_{i+1}}\left|y-y_{i}\right|^{2p-1}\tilde{\mu}^{-1}(y)\mathrm{ d}\mu(y)\] In order to estimate the last integral, change variables using \(\phi(x)\) defined in (45). We get \[\int_{y_{i}}^{y_{i+1}}\left|y-y_{i}\right|^{2p-1}\tilde{\mu}^{-1}(y)\mathrm{ d}\mu(y)\leq\int_{x_{i}}^{x_{i+1}}\left|\phi(x)-\phi(x_{i})\right|^{2p-1} \tilde{\mu}^{-1}(\phi(x))\mu(\phi(x))\phi^{\prime}(x)\mathrm{d}x.\] A Taylor expansion together with the fact that \(\phi^{\prime}\) is increasing, gives \(\phi(x)-\phi(x_{i})\leq\phi^{\prime}(x)(x-x_{i})\). So we get \[\int_{y_{i}}^{y_{i+1}}\left|y-y_{i}\right|^{2p-1}\tilde{\mu}^{-1}(y)\mathrm{ d}\mu(y)\leq\int_{x_{i}}^{x_{i+1}}(x-x_{i})^{2p-1}\left(\phi^{\prime}(x) \right)^{2p}\tilde{\mu}^{-1}(\phi(x))\mu(\phi(x))\mathrm{d}x.\] Recall now that \(\left(\phi^{\prime}(x)\right)^{2p}\tilde{\mu}^{-1}(\phi(x))\mu(\phi(x))\equiv C _{\phi}\). Integration yields \[\int_{y_{i}}^{y_{i+1}}\left|y-y_{i}\right|^{2p-1}\tilde{\mu}^{-1}(y)\mathrm{ d}\mu(y)\leq\frac{(m+1)^{-2p}}{2p}C_{\phi}.\] And for the original quantity, we get \[\int_{y_{i}}^{y_{i+1}}\left|u-I_{m}^{p}[u]\right|^{2}(y)\mathrm{d}\mu(y)\leq C_{ \phi}\frac{p^{2}}{2p}(m+1)^{-2p}\left(\frac{\|\partial^{p}u\|_{L^{2}_{\tilde{ \mu}}([y_{i},y_{i+1}])}}{p!}\right)^{2}.\] For \(i=m-1,m\), recall that \(I_{m}^{p}\) is defined in \([y_{m},+\infty)\) as the polynomial extension in the previous interval. Analogous estimates give \[\int_{y_{m-1}}^{+\infty}\left|u-I_{m}^{p}[u]\right|^{2}(y)\mathrm{d}\mu(y)\leq C _{\phi}\frac{p^{2}}{2p}2^{2p}(m+1)^{-2p}\left(\frac{\|\partial^{p}u\|_{L^{2}_{ \tilde{\mu}}([y_{i},y_{i+1}])}}{p!}\right)^{2}.\] Analogous estimates for \(m\leq\frac{m+1}{2}\) give the second estimates. We define the following _level-to-knots function_: \[m(\nu)\coloneqq 2^{\nu+1}-1\qquad\forall\nu\in\mathbb{N}_{0} \tag{48}\] and observe that \(\left(\mathcal{Y}^{m(i)}\right)_{i\in\mathbb{N}_{0}}\) are nested, i.e. \(\mathcal{Y}^{m(i)}\subset\mathcal{Y}^{m(i+1)}\) for all \(i\in\mathbb{N}_{0}\). The level-to-knot function is used to define detail operators \(\Delta_{i}\) and hierarchical surpluses as expained in the beginning of the section. 
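As a quick numerical illustration of the node family and of the level-to-knots function (48), the short script below generates the symmetric nodes \(y_{i}=\phi(-1+\frac{2i}{m+1})\) through the inverse error function and checks their nestedness. It is only a sketch: the helper names are ours and the parameters \(p=2\), \(\sigma^{2}=2\) are arbitrary choices.

```python
import numpy as np
from scipy.special import erfinv

def alpha(p, sigma2):
    # alpha(p, sigma^2) from (46); requires sigma^2 > 1
    return np.sqrt(4.0 * p / (1.0 - 1.0 / sigma2))

def nodes(m, p, sigma2):
    """The m nodes (44): y_i = phi(-1 + 2i/(m+1)) with phi = alpha * erfinv, cf. (45)."""
    i = np.arange(1, m + 1)
    return alpha(p, sigma2) * erfinv(-1.0 + 2.0 * i / (m + 1.0))

def level_to_knots(nu):
    # m(nu) = 2^(nu+1) - 1, the level-to-knots function (48)
    return 2 ** (nu + 1) - 1

p, sigma2 = 2, 2.0
for lvl in range(4):
    coarse = nodes(level_to_knots(lvl), p, sigma2)
    fine = nodes(level_to_knots(lvl + 1), p, sigma2)
    # nestedness: every node of level lvl reappears at level lvl + 1 (up to round-off)
    assert all(np.isclose(fine, y).any() for y in coarse)
    print(lvl, level_to_knots(lvl), np.round(coarse, 3))
```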
We now apply the previous results to estimate 1D detail operators. **Lemma 23**.: _Consider \(u:\mathbb{R}\to\mathbb{R}\), a function with \(\partial u\in L^{2}_{\tilde{\mu}}(\mathbb{R})\). Consider \(p\geq 2\). There holds_ \[\left\|\Delta_{1}[u]\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})}\leq C_{1}\left\| \partial u\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})},\] _where \(C_{1}=\tilde{C}_{1}\sqrt{\int_{0}^{\infty}\sum_{j=1}^{p}\left|l_{j}\right|^{2} \mathrm{d}\tilde{\mu}}\sqrt{\int_{0}^{y_{3}}\tilde{\mu}^{-1}}\). \(\tilde{C}_{1}\) was defined in the previous lemma, \(y_{1},y_{2},y_{3}\) delimit the intervals of definition of the piecewise polynomial \(I_{3}^{p}[u]\) and \((l_{j})_{j=1}^{p}\) denote the degree \(p-1\) Lagrange basis on \([y_{2},y_{3}]\). If additionally \(\partial^{p}u\in L^{2}_{\tilde{\mu}}(\mathbb{R})\), we have_ \[\left\|\Delta_{\nu}[u]\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})}\leq C_{2}2^{- p\nu}\left\|\partial^{p}u\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})}\qquad \forall\nu\geq 1,\] _where \(C_{2}=\tilde{C}_{2}(1+2^{-p})\)._ Proof.: Let us begin with the first estimate. Recall that nodes are nested, so \(I_{1}[u]=I_{1}\left[I_{3}^{p}[u]\right]\). This implies that \[\Delta_{1}[u]=I_{3}^{p}[u]-I_{1}[u]=I_{3}^{p}[u]-I_{1}\left[I_{3}^{p}[u]\right] =(1-I_{1})\left[I_{3}^{p}[u]\right].\] Applying the previous lemma, we get \[\left\|\Delta_{1}[u]\right\|_{L^{2}\mu(\mathbb{R})}\leq\tilde{C}_{1}\left\| \partial I_{3}^{p}[u]\right\|_{L^{2}_{\tilde{\mu}}(\mathbb{R})}.\] To estimate the last integral, consider \(x_{1}=y_{2},x_{2},\ldots,x_{p}=y_{3}\) the interpolation nodes in the interval \([y_{2},y_{3}]\). Observe that \(\partial I^{p}[u]=\partial I^{p}[u-u(0)]\) and estimate \[\int_{0}^{\infty}\left|\partial I^{p}[u]\right|^{2}\mathrm{d} \tilde{\mu} =\int_{0}^{\infty}\left|\partial I^{p}[u-u(0)]\right|^{2}\mathrm{ d}\tilde{\mu}=\int_{0}^{\infty}\left|\sum_{j=1}^{p}(u(x_{j})-u(0))l_{j}^{ \prime}\right|^{2}\mathrm{d}\tilde{\mu}\] \[\leq\max_{j=1,\ldots,n}\left|u(x_{j})-u(0)\right|\int_{0}^{\infty }\sum_{j=1}^{p}\left|l_{j}^{\prime}\right|^{2}\mathrm{d}\tilde{\mu}\] Since the second term is bounded for fixed \(p\), let us focus on the first one. Simple computations give \[\max_{j=1,\ldots,n}\left|u(x_{j})-u(0)\right|\leq\int_{0}^{y_{3}}\left| \partial u\right|\leq\left\|\partial u\right\|_{L^{2}_{\tilde{\mu}}([0,y_{3}] )}\sqrt{\int_{0}^{y_{3}}\tilde{\mu}^{-1}}.\] This, together with analogous computations on \((-\infty,0]\) give the first estimate. To prove the second estimate, Observe that \[\left\|\Delta_{\nu}[u]\right\|_{L^{2}_{\mu}(\mathbb{R})}=\left\|I_{m(\nu)}^{p}[u ]-I_{m(\nu-1)}^{p}[u]\right\|_{L^{2}_{\mu}(\mathbb{R})}\leq\left\|u-I_{m(\nu)}^ {p}[u]\right\|_{L^{2}_{\mu}(\mathbb{R})}+\left\|u-I_{m(\nu-1)}^{p}[u]\right\|_ {L^{2}_{\mu}(\mathbb{R})}\] The previous lemma and simple computations imply the second estimate. We can finally estimate hieararchical surpluses as follows. **Proposition 24**.: _Let \(u:\mathbb{R}^{\mathbb{N}}\to\mathbb{R}\), \(p\geq 2\) and \(\boldsymbol{\nu}\in\mathcal{F}\). 
Then_
\[\left\|\Delta_{\boldsymbol{\nu}}[u]\right\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{N}})}\leq\prod_{i:\nu_{i}=1}C_{1}\prod_{i:\nu_{i}>1}\frac{C_{2}2^{-p\nu_{i}}}{p!}\left\|\partial_{\{i:\nu_{i}=1\}}\partial^{p}_{\{i:\nu_{i}>1\}}u\right\|_{L^{2}_{\hat{\mu}}(\mathbb{R}^{\mathbb{N}})},\]
_where \(u\) is understood to be sufficiently regular for the right-hand side to be well defined and \(C_{1},C_{2}>0\) are the constants defined in the previous lemma._

Proof.: Assume without loss of generality that all components of \(\boldsymbol{\nu}\) are zero except the first \(N\) ones. Then,
\[\left\|\Delta_{\boldsymbol{\nu}}[u]\right\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{N}})}^{2}=\int_{\mathbb{R}^{N}}\left|\Delta_{\boldsymbol{\nu}}[u]\right|^{2}\mathrm{d}\mu=\int_{\mathbb{R}^{N-1}}\int_{\mathbb{R}}\left|\Delta_{1}[y_{1}\mapsto\Delta_{\hat{\boldsymbol{\nu}}_{1}}u]\right|^{2}\mathrm{d}\mu_{1}\mathrm{d}\hat{\mu}_{1},\]
where we denoted \(\hat{\boldsymbol{\nu}}_{1}=(\nu_{2},\ldots,\nu_{N})\) and \(\hat{\mu}_{1}\) the \(N-1\)-dimensional Gaussian measure. We apply the previous estimate (assume that \(\nu_{1}=1\), the other case is analogous)
\[\left\|\Delta_{\boldsymbol{\nu}}[u]\right\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{N}})}^{2}\leq\int_{\mathbb{R}^{N-1}}C_{1}^{2}\int_{\mathbb{R}}\left|\partial_{1}\Delta_{\hat{\boldsymbol{\nu}}_{1}}[u]\right|^{2}\mathrm{d}\hat{\mu}_{1}\mathrm{d}\hat{\mu}_{1}.\]
We now exchange the integrals as well as the operators acting on \(u\) to get
\[\left\|\Delta_{\boldsymbol{\nu}}[u]\right\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{N}})}^{2}\leq C_{1}^{2}\int_{\mathbb{R}}\int_{\mathbb{R}^{N-1}}\left|\Delta_{\hat{\boldsymbol{\nu}}_{1}}[\partial_{1}u]\right|^{2}\mathrm{d}\hat{\mu}_{1}\mathrm{d}\hat{\mu}_{1}.\]
We can iterate this procedure \(N-1\) additional steps to obtain the statement.

### Basic profits and dimension-dependent convergence

In the present section, we discuss the convergence of the sparse grid approximation when the sample paths of the Wiener process and of the magnetization are assumed to be Hölder-continuous. To this end, we apply the regularity result from Section 5. Let us apply Proposition 14 to estimate the derivatives appearing in the estimate from Proposition 24. We find
\[\left\|\Delta_{\boldsymbol{\nu}}[u]\right\|_{L^{2}_{\mu}(\mathbb{R}^{\mathbb{N}})}\leq\prod_{i\in\mathrm{supp}(\boldsymbol{\nu})}\tilde{v}_{\nu_{i}},\]
where
\[\tilde{v}_{\nu_{i}}=\begin{cases}C_{1}\rho_{i}^{-1}&\text{if }\nu_{i}=1\\ C_{2}\left(2^{\nu_{i}}\rho_{i}\right)^{-p}&\text{if }\nu_{i}>1.\end{cases}\]
and
\[\rho_{i}=\epsilon 2^{\frac{(1-\alpha)\lceil\log_{2}(i)\rceil}{2}}.\]
Recall the framework presented at the beginning of Section 7. Given a multi-index \(\boldsymbol{\nu}\in\mathcal{F}\), we define as its value
\[\tilde{v}_{\boldsymbol{\nu}}=\prod_{i\in\mathrm{supp}(\boldsymbol{\nu})}\tilde{v}_{\nu_{i}}, \tag{49}\]
and as work
\[w_{\boldsymbol{\nu}}=\prod_{i\in\mathrm{supp}(\boldsymbol{\nu})}p2^{\nu_{i}}. \tag{50}\]
The definition of the work is justified as follows: from the definition of the 1D nodes (44) and of the level-to-knots function (48), each time a multi-index is added to the multi-index set, the sparse grid gains \((2^{\nu_{i}+1}-2)(p-1)+1\) new nodes. Recall that the profit reads
\[\tilde{\mathcal{P}}_{\boldsymbol{\nu}}=\frac{\tilde{v}_{\boldsymbol{\nu}}}{w_{\boldsymbol{\nu}}}. \tag{51}\]
We apply the convergence Theorem 21. We obtain a convergence rate that depends root-exponentially on the number of approximated parameters.
We skip the computations because they are a simplified version of the ones presented in the next section.

**Theorem 25**.: _Let \(N\in\mathbb{N}\) and denote by \(\mathbf{m}_{N}:\mathbb{R}^{N}\to C^{1+\alpha/2,2+\alpha}(D_{T})\) the parameter-to-solution map of the parametric LLG problem under the \(N\)-dimensional noise assumption, i.e. assuming that \(W(\mathbf{y},t)=\sum_{i=1}^{N}y_{i}\eta_{i}(t)\) for all \(t\in[0,T]\) and all \(\mathbf{y}\in\mathbb{R}^{N}\). Let \(\Lambda_{n}\subset\mathbb{N}^{N}\) denote the multi-index set (43) defined using \(\tilde{\mathcal{P}}_{\boldsymbol{\nu}}\) as in (51). Let \(\mathcal{I}_{\Lambda_{n}}\) denote the corresponding piecewise polynomial sparse grid interpolant of degree \(p-1\) with nodes (44) and denote \(\mathcal{H}_{\Lambda_{n}}\subset\mathbb{R}^{N}\) the corresponding sparse grid. Under the assumptions of Theorem 2, for any \(\frac{2}{(1+\alpha)p}<\tau<1\),_
\[\|\mathbf{m}_{N}-\mathcal{I}_{\Lambda_{n}}\mathbf{m}_{N}\|_{L^{2}_{\mu}(\mathbb{R}^{N},C^{1+\alpha/2,2+\alpha}(D_{T}))}\leq C_{\tau,p}(N)\left(\#\mathcal{H}_{\Lambda_{n}}\right)^{1-1/\tau}, \tag{52}\]
_where \(C_{\tau,p}(N)\) is a function of \(\tau\), \(p\), \(N\). In particular,_
\[C_{\tau,p}(N)=(1+P_{0})^{1/\tau}\exp\frac{1}{\tau}\left(\frac{C_{1}^{\tau}\left(2p\right)^{1-\tau}}{2}\frac{1-N^{(1-(1-\alpha)\tau/2)}}{1-2^{1-(1-\alpha)\tau/2}}+\frac{C_{2}^{\tau}\sigma(p,\tau)}{2}\frac{1}{1-2^{1-(1-\alpha)p\tau/2}}\right),\]
_where_
\[P_{0}=C_{1}^{\tau}(2p)^{1-\tau}+C_{2}^{\tau}p^{1-\tau}\sigma(p,\tau),\]
\[\sigma(p,\tau)=\frac{2^{2(1-\tau(p+1))}}{1-2^{1-\tau(p+1)}}\]
_and \(C_{1},C_{2}\) were defined in Lemma 23. In particular, the bound grows root-exponentially in the number of dimensions._

### Improved profits and dimension-independent convergence

In the previous section, we could prove only a dimension-_dependent_ convergence. This may be attributed to the slow growth of the holomorphy radii \(\rho_{i}\lesssim 2^{\frac{(1-\alpha)\ell(i)}{2}}\). Let us consider the setting from Section 6, in which we assumed small Wiener processes and a coefficient \(\mathbf{g}\) with small gradient. With these assumptions, we proved that the holomorphy radii can be chosen as (39). This will be sufficient to obtain dimension-_independent_ convergence. Again we work within the framework described at the beginning of Section 7. We need to define values that for any \(\boldsymbol{\nu}\in\mathcal{F}\) bound \(\|\Delta_{\boldsymbol{\nu}}\mathbf{m}\|_{L^{2}_{\mu}(\mathcal{S}_{\theta},\mathbb{M})}\) from above. The estimate from Proposition 24, combined with the bound on the derivatives from Proposition 20, motivates the following choice of values:
\[v_{\boldsymbol{\nu}}=\prod_{i\in\mathrm{supp}(\boldsymbol{\nu})}v_{\nu_{i}},\]
where
\[v_{\nu_{i}}=\begin{cases}C_{1}\rho_{i}^{-1}&\text{if }\nu_{i}=1\\ C_{2}\left(2^{\nu_{i}}\rho_{i}\right)^{-p}&\text{if }\nu_{i}>1\end{cases}\]
and
\[\rho_{i}=\rho_{\ell,j}\coloneqq\gamma\begin{cases}1&\text{if }\nu_{\ell,j}=0\\ 2^{\left(\frac{3}{2}-\delta\right)\ell}\frac{1}{r_{\ell}(\boldsymbol{\nu})}&\text{if }\nu_{\ell,j}=1\\ 2^{\left(\frac{1}{2}-\delta\right)\ell}&\text{otherwise},\end{cases}\]
where \(i\) and \((\ell,j)\) are related through the hierarchical indexing (14). Recall that \(r_{\ell}(\boldsymbol{\nu})=\#\left\{j:\nu_{\ell,j}=1\right\}\). With works defined as in (50), the profits now read
\[\mathcal{P}_{\boldsymbol{\nu}}=\frac{v_{\boldsymbol{\nu}}}{w_{\boldsymbol{\nu}}}.
\tag{53}\] Let us determining for which \(\tau\in(0,1)\)\(\sum_{\mathbf{\nu}\in\mathcal{F}}v_{\mathbf{\nu}}^{\tau}w_{\mathbf{\nu}}^{1-\tau}\) is finite. This setting is more complex than the one in the previous section because the factors \(v_{\nu_{i}}\) that define the values \(v_{\mathbf{\nu}}\) depend in general on \(\mathbf{\nu}\) rather than \(\nu_{i}\) alone. Define \[\mathcal{F}^{*}\coloneqq\left\{\boldsymbol{\nu}\in\mathcal{F}:\nu_{i}\neq 1\ \forall i\in\mathbb{N}\right\}\] and for any \(\boldsymbol{\nu}\in\mathcal{F}^{*}\) \[K_{\boldsymbol{\nu}}\coloneqq\left\{\dot{\boldsymbol{\nu}}\in\mathcal{F}\,:\, \hat{\nu}_{i}=\nu_{i}\text{ if }\nu_{i}>1\text{ and }\hat{\nu}_{i}\in\left\{0,1\right\}\text{ if }\nu_{i}=0\right\}.\] The family \(\left\{K_{\boldsymbol{\nu}}\right\}_{\boldsymbol{\nu}\in\mathcal{F}^{*}}\) is a partition of \(\mathcal{F}\). Observe that \[\begin{split}\sum_{\boldsymbol{\nu}\in\mathcal{F}}v_{\boldsymbol {w}}^{\tau}w_{\boldsymbol{\nu}}^{1-\tau}&=\sum_{\boldsymbol{ \nu}\in\mathcal{F}^{*}}\sum_{\dot{\boldsymbol{\nu}}\in K_{\boldsymbol{\nu}}}v _{\boldsymbol{\nu}}^{\tau}w_{\dot{\boldsymbol{\nu}}}^{1-\tau}\\ &=\sum_{\boldsymbol{\nu}\in\mathcal{F}^{*}}\sum_{\dot{\boldsymbol {\nu}}\in K_{\boldsymbol{\nu}}}\sum_{i:\dot{\nu}_{i}\leq 1}\left(v_{\dot{\nu}_{i}}^{ \tau}w_{\dot{\nu}_{i}}^{1-\tau}\right)\prod_{i:\dot{\nu}_{i}>1}\left(v_{\dot{ \nu}_{i}}^{\tau}w_{\dot{\nu}_{i}}^{1-\tau}\right)\\ &=\sum_{\boldsymbol{\nu}\in\mathcal{F}^{*}}\prod_{i:\nu_{i}>1} \left(v_{\nu_{i}}^{\tau}w_{\nu_{i}}^{1-\tau}\right)\sum_{\dot{\boldsymbol{ \nu}}\in K_{\boldsymbol{\nu}}}\prod_{i:\dot{\nu}_{i}\leq 1}\left(v_{\dot{ \nu}_{i}}^{\tau}w_{\dot{\nu}_{i}}^{1-\tau}\right).\end{split} \tag{54}\] Consider the following subset of \(\mathcal{F}\): \[\mathcal{F}\left\{0,1\right\}\coloneqq K_{\boldsymbol{0}}=\left\{\boldsymbol {\nu}\in\mathcal{F}\,:\,\nu_{i}\in\left\{0,1\right\}\forall i\in\mathbb{N} \right\}.\] **Lemma 26**.: _Let \(0<p<1\), \(p<q<\infty\), and the sequence \(\boldsymbol{a}=(a_{j})_{j}\in\ell^{p}(\mathbb{N})\). Then,_ \[(\left|\boldsymbol{\nu}\right|_{1}!\ \boldsymbol{a}^{\boldsymbol{\nu}})_{ \boldsymbol{\nu}}\in\ell^{q}(\mathcal{F}\left\{0,1\right\}).\] Proof.: Choose \(\varepsilon>0\) such that \(1/(1+\varepsilon)\geq p\) and \(q>p(1+\varepsilon)\). We consider \(\alpha>\left|\boldsymbol{a}\right|_{1/(1+\varepsilon)}\) and write \[\sum_{\boldsymbol{\nu}\in\mathcal{F}\left\{0,1\right\}}(\left|\boldsymbol{\nu }\right|_{1}!\ \boldsymbol{a}^{\boldsymbol{\nu}})^{q}=\sum_{\boldsymbol{\nu}\in \mathcal{F}\left\{0,1\right\}}\left(\left|\boldsymbol{\nu}\right|_{1}!\ \alpha^{\left|\boldsymbol{\nu}\right|_{1}}\left(\frac{\boldsymbol{a}}{\alpha} \right)^{\boldsymbol{\nu}}\right)^{q}.\] There exists \(C_{\epsilon}>0\) such that \(\alpha^{\left|\boldsymbol{\nu}\right|_{1}}\leq C_{\varepsilon}\left(\left| \boldsymbol{\nu}\right|_{1}!\right)^{\varepsilon}\) for all \(\boldsymbol{\nu}\in\mathcal{F}\left\{0,1\right\}\). Thus, \[\sum_{\boldsymbol{\nu}\in\mathcal{F}\left\{0,1\right\}}(\left|\boldsymbol{ \nu}\right|_{1}!\ \boldsymbol{a}^{\boldsymbol{\nu}})^{q}\lesssim\sum_{\boldsymbol{\nu}\in \mathcal{F}\left\{0,1\right\}}\left(\left(\left|\boldsymbol{\nu}\right|_{1}! 
\right)^{1+\varepsilon}\left(\frac{\boldsymbol{a}}{\alpha}\right)^{ \boldsymbol{\nu}}\right)^{q}.\] Factorizing out the \(1+\varepsilon\) yields \[\sum_{\boldsymbol{\nu}\in\mathcal{F}\left\{0,1\right\}}(\left|\boldsymbol{\nu }\right|_{1}!\ \boldsymbol{a}^{\boldsymbol{\nu}})^{q}\lesssim\sum_{\boldsymbol{\nu}\in \mathcal{F}\left\{0,1\right\}}\left(\left|\boldsymbol{\nu}\right|_{1}!\left( \frac{\boldsymbol{a}}{\alpha}\right)^{\frac{1}{1+\varepsilon}\boldsymbol{\nu} }\right)^{(1+\varepsilon)q}.\] Since \(\boldsymbol{\nu}!=1\) for all \(\boldsymbol{\nu}\in\mathcal{F}\left\{0,1\right\}\), we can write \[\sum_{\boldsymbol{\nu}\in\mathcal{F}\left\{0,1\right\}}(\left|\boldsymbol{\nu }\right|_{1}!\ \boldsymbol{a}^{\boldsymbol{\nu}})^{q}\lesssim\sum_{\boldsymbol{\nu}\in \mathcal{F}\left\{0,1\right\}}\left(\frac{\left|\boldsymbol{\nu}\right|_{1}! }{\boldsymbol{\nu}!}\left(\frac{\boldsymbol{a}}{\alpha}\right)^{\frac{1}{1+ \varepsilon}\boldsymbol{\nu}}\right)^{(1+\varepsilon)q} \tag{55}\] Observe that \(\sum_{j}(\frac{a_{j}}{\alpha})^{\frac{1}{1+\varepsilon}}<1\) because of the definition of \(\alpha\). Moreover, from the assumption on \(\boldsymbol{a}\) we have \(\left(\frac{\boldsymbol{a}}{\alpha}\right)^{\frac{1}{1+\varepsilon}}\in\ell^{r} (\mathbb{N})\) for any \(r\geq p(1+\varepsilon)\). Then, [21, Theorem 1] implies that the second sum in (55) is finite, thus proving the statement. **Lemma 27**.: _If \(\tau>\frac{1}{\frac{3}{2}-\delta}\), there exists \(C>0\) such that for any \(\boldsymbol{\nu}\in\mathcal{F}^{*}\),_ \[\sum_{\dot{\boldsymbol{\nu}}\in K_{\boldsymbol{\nu}}}\prod_{i:\dot{\nu}_{i}\leq 1 }\left(v_{\dot{\nu}_{i}}^{\tau}w_{\dot{\nu}_{i}}^{1-\tau}\right)\leq C.\] Proof.: For this proof, we denote the level of \(i\) by \(\ell(i)\). First observe that, from the definitions of value and work, we may write \[\prod_{i:\dot{\nu}_{i}\leq 1}\left(v_{\dot{\nu}_{i}}^{\tau}w_{\dot{\nu}_{i}}^{1- \tau}\right)=\prod_{i:\dot{\nu}_{i}=1}\left(C_{1}2^{-\left(\frac{3}{2}-\delta \right)\ell(i)}r_{\ell(i)}(\boldsymbol{\nu})\right)^{\tau}\left(2p\right)^{1- \tau}.\] The factors in the right-hand-side are independent of the components of \(\mathbf{\nu}\) for which \(\nu_{i}\neq 1\). Thus, we define \[D_{\mathbf{\nu}}=\left\{\mathbf{d}\in\mathcal{F}\,:\,\begin{cases}d_{i}=0&\text{if }\nu_{i}>1 \\ d_{i}\in\{0,1\}&\text{otherwise}\end{cases}\right\}\subset\mathcal{F}\left\{0,1\right\}\] and substitute \[\sum_{\hat{\mathbf{\nu}}\in K_{\mathbf{\nu}}}\prod_{i:\hat{\nu}_{i}=1}\left(C_{1}2^{- \left(\frac{3}{2}-\delta\right)\ell(i)}r_{\ell(i)}(\hat{\mathbf{\nu}})\right)^{ \tau}(2p)^{1-\tau}=\sum_{\mathbf{d}\in D_{\mathbf{\nu}}}\prod_{i:d_{i}=1}\left(C_{1}2^{ -\left(\frac{3}{2}-\delta\right)\ell(i)}r_{\ell(i)}(\mathbf{d})\right)^{\tau}(2p)^ {1-\tau}\,.\] From the definition of \(r_{\ell(i)}(\mathbf{d})\), we get \[\prod_{i:d_{i}=1}r_{\ell(i)}(\mathbf{d})\leq\prod_{\ell:\exists j:d_{\ell,j}=1}r_{ \ell}(\mathbf{d})^{r_{\ell}(\mathbf{d})}.\] Stirling formula gives \(r_{\ell}(\mathbf{d})^{r_{\ell}(\mathbf{d})}\leq r_{\ell}(\mathbf{d})!e^{r_{\ell}(\mathbf{d})}\). Observe that \(r_{\ell}(\mathbf{d})\leq\left|\mathbf{d}_{\ell}\right|_{1}\), where \(\mathbf{d}_{\ell}=\left(d_{\ell,1},\ldots,d_{\ell,\lceil 2^{\ell-1}\rceil}\right)\) for any \(\ell\in\mathbb{N}_{0}\). 
Together with a simple property of the factorial, this gives \[\prod_{\ell:\exists j:d_{\ell,j}=1}(r_{\ell}(\mathbf{d}))!\leq\prod_{\ell:\exists j :d_{\ell,j}=1}|\mathbf{d}_{\ell}|_{1}!\leq\left(\sum_{\ell\in\mathbb{N}}|\mathbf{d}_{ \ell}|_{1}\right)!=|\mathbf{d}|_{1}!\,.\] To summarize, we have estimated \[\sum_{\hat{\mathbf{\nu}}\in K_{\mathbf{\nu}}}\prod_{i:\hat{\nu}_{i}\leq 1}\left(v_{ \hat{\nu}_{i}}^{\tau}w_{\hat{\nu}_{i}}^{1-\tau}\right)\leq\sum_{\mathbf{d}\in D_ {\mathbf{\nu}}}(|\mathbf{d}|_{1}!)^{\tau}\prod_{i:d_{i}=1}\left(C_{1}^{\tau}2^{-\left( \frac{3}{2}-\delta\right)\ell(i)\tau}\left(2p\right)^{1-\tau}e^{\tau}\right)\] Define for all \(j\in\mathbb{N}\) to obtain \[\sum_{\hat{\mathbf{\nu}}\in K_{\mathbf{\nu}}}\prod_{i:\hat{\nu}_{i}\leq 1}\left(v_{ \hat{\nu}_{i}}^{\tau}w_{\hat{\nu}_{i}}^{1-\tau}\right)\leq\sum_{\mathbf{d}\in D_ {\mathbf{\nu}}}\Big{(}\,|\mathbf{d}|_{1}!\mathbf{c}^{\mathbf{d}}\Big{)}^{\tau}.\] Simple computations reveal that \(\mathbf{c}=(c_{j})_{j}\in\ell^{\tau}(\mathbb{N})\) for all \(\tau>(\frac{3}{2}-\delta)^{-1}\). We apply the previous lemma and conclude the proof. Going back to (54), we are left with determining for which parameters \(p\geq 3,\tau>\frac{1}{\frac{3}{2}-\delta}\) the series \(\sum_{\mathbf{\nu}\in\mathcal{F}^{*}}\prod_{i:\nu_{i}>1}\left(v_{\nu_{i}}^{\tau}w_ {\nu_{i}}^{1-\tau}\right)\) is summable. By means of the product structure of the summands, we can write \[\sum_{\mathbf{\nu}\in\mathcal{F}^{*}}\prod_{i:\nu_{i}>1}\left(v_{\nu_{i}}^{\tau}w_ {\nu_{i}}^{1-\tau}\right)=\prod_{i\in\mathbb{N}}\sum_{\nu_{i}\in\mathbb{N} \setminus\{1\}}v_{\nu_{i}}^{\tau}w_{\nu_{i}}^{1-\tau}=\prod_{i\in\mathbb{N}} \left(1+\sum_{\nu_{i}\geq 2}\left(C_{2}2^{-p\left(\left(\frac{1}{2}-\delta\right)\ell(i)+ \nu_{i}\right)}\right)^{\tau}(p2^{\nu_{i}})^{1-\tau}\right)\] Observe that the sum is finite if \(\tau\geq\frac{1}{p+1}\) and in this case \[\sum_{\nu_{i}\geq 2}2^{-p\left(\left(\frac{1}{2}-\delta\right)\ell(i)+\nu_{i} \right)\tau}\left(p2^{\nu_{i}}\right)^{1-\tau}=C_{2}^{\tau}2^{-p\left(\frac{1} {2}-\delta\right)\ell(i)\tau}p^{1-\tau}\sigma,\] where \(\sigma=\sigma(p,\tau)=\frac{2^{-2((p+1)\tau+1)}}{1+2^{-(p+1)\tau+1}}\). Then, for \(F_{\ell}\coloneqq C_{2}^{\tau}2^{-p\left(\frac{1}{2}-\delta\right)\ell\tau}p^{1 -\tau}\sigma\), so far we have estimated \[\sum_{\mathbf{\nu}\in\mathcal{F}^{*}}\prod_{i:\nu_{i}>1}\left(v_{\nu_{i}}^{\tau}w_ {\nu_{i}}^{1-\tau}\right)\leq\prod_{i\in\mathbb{N}}\left(1+F_{\ell(i)}\right).\] We can further estimate, recalling the hierarchical indexing (14), \[\prod_{i\in\mathbb{N}}\left(1+F_{\ell(i)}\right)\leq\exp\left(\sum_{i\in \mathbb{N}}\log\left(1+F_{\ell(i)}\right)\right)\leq\exp\left(\sum_{\ell\in \mathbb{N}_{0}}2^{\ell}\log\left(1+F_{\ell}\right)\right)\leq\exp\left(\sum_{ \ell\in\mathbb{N}_{0}}2^{\ell}F_{\ell}\right).\] The sum can be written as \[\sum_{\ell\in\mathbb{N}_{0}}2^{\ell}F_{\ell}=C_{2}^{\tau}p^{1-\tau}\sigma\sum_{ \ell\in\mathbb{N}_{0}}2^{\left(1-\left(\frac{1}{2}-\delta\right)p\tau\right)\ell},\] which is finite for \(\tau>\frac{1}{p\left(\frac{1}{2}-\delta\right)}\) and in this case equals \(C_{2}^{\tau}p^{1-\tau}\sigma\left(1-2^{1-\left(\frac{1}{2}-\delta\right)p\tau} \right)^{-1}\). **Remark 28**.: _When \(p=2\) the condition \(\tau>\frac{1}{p\left(\frac{1}{2}-\delta\right)}\) just above gives \(\tau>1\) for any \(\delta>0\). This means that we are not able to show that piecewise linear sparse grids converges independently of the number of dimensions (although we see it in the numerical experiments below). 
Conversely, if \(p\geq 3\) there exists \(\frac{2}{3}<\tau<1\) that satisfies all the conditions (remember that while \(\delta\) cannot be 0, it can be chosen arbitrarily small)._ Finally Theorem 21 implies the following convergence. **Theorem 29**.: _Let \(\mathbf{m}:\mathcal{S}_{\theta}\to\mathbb{M}\) denote the parameter-to-solution map of the parametric LLG problem under the assumptions of Section 6. Let \(\Lambda_{n}\subset\mathcal{F}\) denote the multi-index set (43) defined using the profits \(\mathcal{P}_{\nu}\) (53). Let \(\mathcal{I}_{\Lambda_{n}}\) denote the corresponding piecewise polynomial sparse grids interpolant of degree \(p-1\) for \(p\geq 3\) with nodes (44). Assume that the corresponding sparse grid satisfies \(\mathcal{H}_{\Lambda_{n}}\subset\mathcal{S}_{\theta}\). Under the assumptions of Theorem 2, for any \(\frac{2}{3}<\tau<1\),_ \[\left\|\mathbf{m}-\mathcal{I}_{\Lambda_{n}}\mathbf{m}\right\|_{L^{2}_{\mu}(\mathcal{S }_{\theta};\mathbb{M})}\leq C_{\tau,p}\left(\#\mathcal{H}_{\Lambda_{n}}\right) ^{1-1/\tau}.\] _where \(C_{\tau,p}\) is a function of \(\tau\), \(p\) but is dimension-independent. In particular,_ \[C_{\tau,p}=C^{\frac{1}{\tau}}\exp\left(\frac{1}{\tau}C_{2}^{\tau}p^{1-\tau} \frac{2^{2(-(p+1)\tau+1)}}{1-2^{-(p+1)\tau+1}}\frac{1}{1-2^{1-\left(\frac{1}{2} -\delta\right)p\tau}}\right),\] _where in turn \(C\) is defined in Lemma 27 and \(C_{2}\) is defined in Lemma 23._ **Remark 30** (On optimality of the convergence rate \(-\frac{1}{2}\)).: _The best convergence rate with respect to the number of collocation nodes predicted by the theorem is \(-\frac{1}{2}\) and is obtained for \(\tau=\frac{2}{3}\). This is the same as the convergence rate of the parametric truncation with respect to the number of parameters: Denoting \(\mathbf{m}(\mathbf{y})\) the parametric solution for \(\mathbf{y}\in\mathbb{R}^{\mathbb{N}}\) and by \(\mathbf{m}_{N}(\mathbf{y})\coloneqq\mathbf{m}((y_{1},\ldots y_{N},0,0,\ldots))\), for any \(N\in\mathbb{N}\), its \(N\)-dimensional truncation, one can show that_ \[\left\|\mathbf{m}-\mathbf{m}_{N}\right\|_{L^{2}_{\mu}(\mathcal{X},L^{2}([0,T],H^{1}(D) ))}\lesssim N^{-1/2}.\] _Since it is not possible to have less that 1 collocation node per dimension, the sparse grid algorithm achieves the optimal approximation rate._ _In particular, piecewise quadratic approximation (\(p=3\)) has optimal convergence rate and using \(p>3\) does not improve the convergence rate (but may improve the constant \(C_{\tau,p}\)). For the same reason, sparse grid interpolation based on other 1D interpolations schemes (e.g. global polynomials) cannot give a better convergence rate (but may improve the constant)._ **Remark 31**.: _Given an approximation \(\mathbf{m}_{\Lambda}(\mathbf{y})=\mathcal{I}_{\Lambda}[\mathbf{m}](\mathbf{y})\) of the solution to the parametric LLG problem (15), it is easy to sample an approximate random solution of the random LLG problem (5) too. One has to sample i.i.d. standard normal random variables \(\mathbf{Y}=(Y_{i})_{i=1}^{N_{\Lambda}}\) and evaluate \(\mathbf{m}_{\Lambda}(Y_{1},\ldots,Y_{N_{\Lambda}})\). Here \(N_{\Lambda}\in\mathbb{N}\) is the support size of the multi-index set \(\Lambda\): \(N_{\Lambda}\coloneqq\min\left\{n\in\mathbb{N}:\forall\mathbf{\nu}\in\Lambda,\ \text{ supp}(\mathbf{\nu})\leq n\right\}\). Equivalently, \(N_{\Lambda}\) is the number of active parameters in the sparse grid interpolant \(\mathcal{I}_{\Lambda}\). 
The root-mean-square error is naturally the same as the one we estimated in the previous theorem_ \[\sqrt{\mathbb{E}_{\mathbf{Y}}\left\|\mathbf{m}(\mathbf{Y})-\mathbf{m}_{\Lambda}(\mathbf{Y}) \right\|_{\mathbb{M}}}=\left\|\mathbf{m}-\mathcal{I}_{\Lambda}\mathbf{m}\right\|_{L^{ 2}_{\mu}(\mathcal{S}_{\theta},\mathbb{M})}.\] _We can also approximately sample from the random solution of the stochastic PDE (2):_ 1. _Sample a Wiener process_ \(W\)__ 2. _Compute the first_ \(N_{\Lambda}\) _coordinates_ \(\mathbf{Y}=(Y_{1},\ldots,Y_{N_{\Lambda}})\) _of its Levy-Ciesielsky expansion_ \(W(t)=\sum_{i=1}^{\infty}Y_{i}\eta_{i}(t)\)__ 3. _Compute_ \(\mathbf{m}_{\Lambda}(\mathbf{Y})\)_, the approximate solution to the random LLG problem (_5_)_ 4. _Finally compute the inverse Doss-Sussmann transform_ \[\mathbf{M}_{\Lambda}(W,t,x)\coloneqq e^{W(t)G}\mathbf{m}_{\Lambda}(\mathbf{Y};t,x).\] _Recall that a convenient expression for \(e^{WG}\) is available in the third line of (3). The approximation error is again comparable to the one found in the previous theorem. Indeed, denoting \(\left\|\cdot\right\|\) the root-mean-square error, the Doss-Sussmann transform implies_ \[\sqrt{\mathbb{E}_{W}\left\|\mathbf{M}-\mathbf{M}_{\Lambda}\right\|_{\mathbb{M}}}= \left\|e^{WG}\left(\mathbf{m}-\mathbf{m}_{\Lambda}\right)\right\|.\] _Recall then the third line of (3) to write_ \[\left\|e^{WG}\left(\mathbf{m}-\mathbf{m}_{\Lambda}\right)\right\|=\left\|\mathbf{m}-\mathbf{m}_{ \Lambda}+\sin(W)G\left(\mathbf{m}-\mathbf{m}_{\Lambda}\right)-\cos(W)G^{2}\left(\mathbf{m}- \mathbf{m}_{\Lambda}\right)\right\|.\] _Finally, the triangle inequality and the fact that \(\mathbf{g}\in L^{\infty}(D)\) yield_ \[\left\|\mathbf{M}-\mathbf{M}_{\Lambda}\right\|\leq\left(1+\left\|\mathbf{g}\right\|_{L^{ \infty}(D)}+\left\|\mathbf{g}\right\|_{L^{\infty}(D)}^{2}\right)\left\|\mathbf{m}-\mathbf{ m}_{\Lambda}\right\|.\] ### Numerical tests We test numerically the convergence of the sparse grid method defined above. Since no exact sample path of the solution is available, we approximate them with the space and time approximation from [1]. This method is based on the _tangent plane scheme_ and has the advantage of solving one _linear_ elliptic problem per time step with finite elements. The time-stepping is based on a BDF formula. The method is high-order for both the finite elements and BDF discretization. We consider the problem on the 2D domain \(D=[0,1]^{2}\) with \(z=0\). The final time is \(T=1\). The noise coefficient is defined as \[\mathbf{g}(\mathbf{x})=\left(-\frac{1}{2}\cos(\pi x_{1}),-\frac{1}{2}\cos(\pi x_{2}), \sqrt{1-\left(\frac{1}{2}\cos(\pi x_{1})\right)^{2}-\left(\frac{1}{2}\cos(\pi x _{2})\right)^{2}}\right). \tag{56}\] Observe that \(\partial_{n}\mathbf{g}=0\) on \(\partial D\) and that \(|\mathbf{g}|=1\) on \(D\). The initial condition is \(\mathbf{m}_{0}=(0,0,1)\). The space discretization is order \(1\) on a structured triangular mesh with \(2048\) elements and mesh-size \(h>0\). The time discretization is order \(1\) on \(256\) equispaced timesteps of size \(\tau\). We use piecewise affine sparse grid, corresponding to \(p=2\). As for the multi-index selection, we compare two strategies: * The basic profit from Section 7.2. We recall that \[\tilde{\mathcal{P}}_{\mathbf{\nu}}=\prod_{i:\nu_{i}=1}2^{-\frac{1}{2}\ell(i)}\prod _{i:\nu_{i}>1}\left(2^{\nu_{i}+\frac{1}{2}\ell(i)}\right)^{-p}\left(\prod_{i} p2^{\nu_{i}}\right)^{-1}\qquad\forall\mathbf{\nu}\in\mathcal{F},\] where \(\ell(i)=\lceil\log_{2}(i)\rceil\). 
Compared to (51), here we have set \(C_{1}=C_{2}=\epsilon=1\) and \(\alpha=0\) for simplicity. * A modified version of the improved profit from Section 7.3, namely (57) \[\mathcal{P}_{\mathbf{\nu}}=\prod_{i:\nu_{i}=1}2^{-\frac{3}{2}\ell(i)}\prod_{i:\nu _{i}>1}\left(2^{\nu_{i}+\frac{1}{2}\ell(i)}\right)^{-p}\left(\prod_{i}p2^{\nu _{i}}\right)^{-1}\qquad\forall\mathbf{\nu}\in\mathcal{F},\] where again \(\ell(i)=\lceil\log_{2}(i)\rceil\). Compared to Section 7.3, we have set \(C_{1}=C_{2}=\gamma=1\) and neglected the factor \(r_{\ell}(\mathbf{\nu})\). We estimate the approximation error of the sparse grid approximations with \[\frac{1}{N}\sum_{i=1}^{N}\left\|\mathbf{m}_{\tau h}(\mathbf{y}_{i})-\mathcal{I}_{ \Lambda}[\mathbf{m}_{\tau h}](\mathbf{y}_{i})\right\|_{L^{2}(0,T,H^{1}(D))},\] where \(N=1024\), \(\left(\mathbf{y}_{i}\right)_{i=1}^{N}\) are i.i.d. standard normal samples of dimension \(2^{10}\) each. \(\mathbf{m}_{hk}(\mathbf{y}_{i})\) denotes the corresponding space and time approximations of the sample paths. Observe that if the time-step size is \(\tau=2^{-n}\), then the parameter-to-finite element solution map depends only on the first \(n+1\) levels of the LC. In our case, \(n=8\), so the maximum relevant level is \(L=9\), i.e. \(512\) dimensions. In the following numerical examples we always approximate fewer dimensions, which means that the time-discretization error is negligible compared to the parametric approximation error. Results are displayed in Figure 3. In the left plot, we observe that using basic profits leads to a sub-algebraic convergence rate which decreases as the number of approximated dimensions increases. Conversely, improved profits leads to a robust algebraic convergence of order about \(\frac{1}{2}\). Piecewise _quadratic_ interpolation is optimal as predicted in Section 7.3 and it delivers the same convergence rate as piecewise linear interpolation. Hence, the restriction in Theorem 29 is possibly an artifact of the proof. In view of Remark 30, it seems unnecessary to test higher polynomial degrees. In the right plot, we observe that the number of active dimensions (i.e., those dimension which are seen by the sparse-grid algorithm) grows similarly for all methods, with the basic profit having a slightly higher value. Figure 3. Approximation of \(\boldsymbol{y}\mapsto\boldsymbol{m}(\boldsymbol{y})\). Top: Error vs. number of collocation nodes; Bottom left: Number of effective dimensions vs. number of collocation nodes. Bottom right: Comparison of convergence of the sparse grid approximation (\(p=3\), i.e. piecewise quadratic) for different space and time discretization parameters. In both case time-step and mesh size are related by \(h=8\tau\). Let us conclude the section with a study of the dependence on space and time approximation parameters. We verify numerically that the approximation power of the method does not degrade when space and time approximations are refined, see the bottom right plot in Figure 3. ## 8. Multi-level sparse grid collocation In the present section, we show how the sparse grid scheme defined and studied in this work can be combined with a method for space and time approximation to define a fully discrete approximation scheme. Here we employ again the linearly implicit BDF-finite element scheme from [1]. Given \(\tau>0\), consider \(N_{\tau}\) equispaced timesteps on \([0,T]\). Given \(h>0\), define a quasi-uniform triangulation \(\mathcal{T}_{h}\) of the domain \(D\) with mesh-spacing \(h\). 
Denote, for any \(\mathbf{y}\in\mathcal{S}_{\theta}\), \(\mathbf{m}_{\tau h}(\mathbf{y})\) the corresponding space and time approximation of \(\mathbf{m}(\mathbf{y})\). Assume that there exists a constant \(C_{\mathrm{FE}}>0\) independent of \(h\) or \(\tau\) such that
\[\left\|\mathbf{m}-\mathbf{m}_{\tau,h}\right\|_{L^{2}_{\mu}(\mathcal{S}_{\theta};\mathbb{M})}\leq C_{\mathrm{FE}}(\tau+h).\]
Moreover, we assume that the computational cost (number of floating-point operations) of computing a single \(\mathbf{m}_{\tau h}(\mathbf{y})\) is proportional to
\[C_{\mathrm{sample}}(\tau,h)=\tau^{-1}h^{-d}.\]
Indeed, the numerical scheme requires, at each timestep, solving a linear system of size proportional to the number of elements of \(\mathcal{T}_{h}\), which in turn is proportional to \(h^{-d}\). The latter operation can be executed with empirical linear complexity using GMRES with multigrid preconditioning. See also [38] for mathematically rigorous preconditioning strategies for LLG. Theorem 29 shows that there exist \(C_{\mathrm{SG}}>0\) and \(0<r<\frac{1}{2}\) such that, denoting \(\mathcal{I}_{\Lambda}\) the sparse grid interpolant and \(\mathcal{H}_{\Lambda}\) the corresponding sparse grid,
\[\left\|\mathbf{m}-\mathcal{I}_{\Lambda}[\mathbf{m}]\right\|_{L^{2}_{\mu}(\mathcal{S}_{\theta};\mathbb{M})}\leq C_{\mathrm{SG}}\left(\#\mathcal{H}_{\Lambda}\right)^{-r}.\]
A _Single Level_ approximation of \(\mathbf{m}\) can be defined as
\[\mathbf{m}_{\Lambda,\tau,h}^{\mathrm{SL}}\coloneqq\mathcal{I}_{\Lambda}\left[\mathbf{m}_{\tau h}\right].\]
The cost of computing the single level approximation is
\[C_{\Lambda,\tau,h}^{\mathrm{SL}}\coloneqq\#\mathcal{H}_{\Lambda}\,C_{\mathrm{sample}}(\tau,h).\]
The approximation accuracy can be estimated as
\[\left\|\mathbf{m}-\mathbf{m}_{\Lambda,\tau,h}^{\mathrm{SL}}\right\|_{L^{2}_{\mu}(\mathcal{S}_{\theta};\mathbb{M})}\leq\left\|\mathbf{m}-\mathcal{I}_{\Lambda}[\mathbf{m}]\right\|_{L^{2}_{\mu}(\mathcal{S}_{\theta};\mathbb{M})}+\left\|\mathcal{I}_{\Lambda}[\mathbf{m}-\mathbf{m}_{\tau,h}]\right\|_{L^{2}_{\mu}(\mathcal{S}_{\theta};\mathbb{M})}\leq C_{\mathrm{SG}}\left(\#\mathcal{H}_{\Lambda}\right)^{-r}+C_{\mathrm{stab}}C_{\mathrm{FE}}(h+\tau),\]
where \(C_{\mathrm{stab}}>0\) is the stability constant of the sparse grid interpolation operator. A quasi-optimal single level approximation requires balancing the three approximation parameters \(\Lambda\), \(\tau\) and \(h\) so that the summands in the previous estimate have similar values. This choice leads to the following error estimate with respect to the cost \(C_{\Lambda,\tau,h}^{\mathrm{SL}}\):
\[\left\|\mathbf{m}-\mathbf{m}_{\Lambda,\tau,h}^{\mathrm{SL}}\right\|_{L^{2}_{\mu}(\mathcal{S}_{\theta};\mathbb{M})}\lesssim\left(C_{\Lambda,\tau,h}^{\mathrm{SL}}\right)^{-\frac{1}{\frac{1}{r}+d+1}}. \tag{58}\]
A _Multilevel_ approximation of \(\mathbf{m}\) can be defined following [55]. Let \(K\geq 0\) and consider a sequence of approximation parameters \(\left(\Lambda_{k}\right)_{k=0}^{K}\), \(\left(\tau_{k}\right)_{k=0}^{K}\) and \(\left(h_{k}\right)_{k=0}^{K}\). Denote \(\mathbf{m}_{k}=\mathbf{m}_{\tau_{k},h_{k}}\) for \(0\leq k\leq K\) and \(\mathbf{m}_{-1}\equiv 0\).
Define the multilevel approximation as \[\mathbf{m}_{K}^{\mathrm{ML}}\coloneqq\sum_{k=0}^{K}\mathcal{I}_{\Lambda_{k}}\left[ \mathbf{m}_{K-k}-\mathbf{m}_{K-k-1}\right].\] The computational cost is proportional to \[C_{K}^{\mathrm{ML}}=\sum_{k=0}^{K}\#\mathcal{H}_{\Lambda_{k}}C_{\mathrm{ sample}}(\tau_{K-k},h_{K-k}).\] To guarantee approximation, we require the following assumption on the sparse grid approximation of differences: For any \(0\leq k\leq K\), \[\|\mathbf{m}_{k}-\mathbf{m}_{k-1}-\mathcal{I}_{\Lambda}[\mathbf{m}_{k}-\mathbf{m}_{k-1}]\|_{L^{ 2}_{\mu}(\mathcal{S}_{\theta};\mathbb{M})}\leq C_{\mathrm{SG}}\left(\#\mathcal{ H}_{\Lambda}\right)^{-r}(h_{k}+\tau_{k}).\] For the multilevel-approximation to be quasi-optimal, all terms in the multilevel expansion shall have similar magnitude to the \(K\)-th (finest) time and space approximation. To this end, we choose the multi-index sets \(\Lambda_{k}\) so that \[\left(\#\mathcal{H}_{\Lambda_{K-k}}\right)^{-r}\leq C_{\mathrm{FE}}\left(C_{ \mathrm{SG}}(K+1)\right)^{-1}\frac{\tau_{K}+h_{K}}{\tau_{k}+h_{k}}. \tag{59}\] As a consequence, the multilevel error can be estimated as \[\left\|\mathbf{m}-\mathbf{m}_{K}^{\mathrm{ML}}\right\|_{L^{2}_{\mu}(\mathcal{S}_{ \theta};\mathbb{M})}\leq 2C_{\mathrm{FE}}(\tau_{K}+h_{K}).\] The error can be related to the computational cost as done in [55]. We obtain the improved error-to-cost relation \[\left\|\mathbf{m}-\mathbf{m}_{K}^{\mathrm{ML}}\right\|_{L^{2}_{\mu}(\mathcal{S}_{ \theta};\mathbb{M})}\lesssim\left(C_{K}^{\mathrm{ML}}\right)^{-\frac{1}{d+1}}. \tag{60}\] We compare numerically single- and multilevel schemes on the following example of relaxation dynamics with thermal noise. The domain is \(D=[0,1]^{2}\) with \(z=0\). The final time is \(T=1\). The noise coefficient \(\mathbf{g}\) is set to one fifth of the coefficient defined in (56). The initial condition \(\mathbf{m}^{0}\) coincides with (56). The time and space approximations are both of order \(1\). The sparse grid scheme is piecewise linear and the multi-index sets are built using the improved profit (57) from the previous numerical experiments. Observe that, in the following convergence tests refinement leads automatically to an increase of the number of approximated parameters and a reduction of the parametric _truncation_ error. We consider \(K=5\) and define \(\tau_{k}=2^{-k-2}\), \(h_{k}=2^{-k}\), and \(\Lambda_{k}\) using the same profit-maximization as in the previous section. For the single level approximation, we choose \(\Lambda_{k}\) minimal such that \(\#\mathcal{H}_{\Lambda_{k}}>2^{2k}\). The last choice corresponds to assuming that the sparse grid approximation converges with order \(r=\frac{1}{2}\) with respect to the number of collocation nodes. We compute a sequence of single level approximations \(\mathbf{m}_{\Lambda_{k},\tau_{k},h_{k}}^{\mathrm{SL}^{2}}\) for \(k=0,\ldots K\) and report the results in Figure 4. For the multilevel approximation, we follow formula (59). The constants \(C_{\mathrm{FE}}\approx 0.7510\), \(C_{\mathrm{SG}}\approx 0.1721\) and \(r\approx 0.4703\) are determined with short sparse grid and finite element convergence tests. 
We obtain the following multi-index set sizes:

| \(K\) | \(\#\mathcal{H}_{\Lambda_{0}}\) | \(\#\mathcal{H}_{\Lambda_{1}}\) | \(\#\mathcal{H}_{\Lambda_{2}}\) | \(\#\mathcal{H}_{\Lambda_{3}}\) | \(\#\mathcal{H}_{\Lambda_{4}}\) | \(\#\mathcal{H}_{\Lambda_{5}}\) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | | | | | |
| 1 | 1 | 3 | | | | |
| 2 | 1 | 3 | 10 | | | |
| 3 | 1 | 4 | 18 | 82 | | |
| 4 | 2 | 7 | 27 | 131 | 602 | |
| 5 | 2 | 10 | 42 | 193 | 887 | 1500 |

The last value, 1500, is chosen smaller than the 4082 that formula (59) would require, in order to keep computational times reasonable. Again, results are reported in Figure 4. Since the solution is not available in closed form, we approximate it with a reference solution. We consider 128 Monte Carlo samples of \(W\) and approximate the corresponding sample paths in space and time with timestep \(\tau_{\mathrm{ref}}=2^{-9}\) and mesh size \(h_{\mathrm{ref}}=2^{-7}\). Computing the error for the single- and multilevel approximations requires first sampling the interpolants at the Monte Carlo parameter samples and then interpolating in the reference space. The convergence test confirms that the multilevel method is superior to the single level method.
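For readers who want to reproduce the level sizes above, the following sketch evaluates the selection rule (59) with the constants reported in the text. The printed values are only lower bounds on \(\#\mathcal{H}_{\Lambda_{K-k}}\); the sets actually used are the smallest profit-based sparse grids exceeding these bounds, with the finest one capped at 1500 as explained above.

```python
import math

# constants estimated in the text (C_FE, C_SG, r) and the discretization parameters
C_FE, C_SG, r = 0.7510, 0.1721, 0.4703
K = 5
tau = lambda k: 2.0 ** (-k - 2)
h = lambda k: 2.0 ** (-k)

for k in range(K + 1):
    # (59): (#H_{Lambda_{K-k}})^{-r} <= C_FE / (C_SG*(K+1)) * (tau_K + h_K) / (tau_k + h_k)
    rhs = C_FE / (C_SG * (K + 1)) * (tau(K) + h(K)) / (tau(k) + h(k))
    print(f"#H_Lambda_{K - k} >= {math.ceil(rhs ** (-1.0 / r))}")
```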
2302.06548
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning
Tomorrow's robots will need to distinguish useful information from noise when performing different tasks. A household robot for instance may continuously receive a plethora of information about the home, but needs to focus on just a small subset to successfully execute its current chore. Filtering distracting inputs that contain irrelevant data has received little attention in the reinforcement learning literature. To start resolving this, we formulate a problem setting in reinforcement learning called the $\textit{extremely noisy environment}$ (ENE), where up to $99\%$ of the input features are pure noise. Agents need to detect which features provide task-relevant information about the state of the environment. Consequently, we propose a new method termed $\textit{Automatic Noise Filtering}$ (ANF), which uses the principles of dynamic sparse training in synergy with various deep reinforcement learning algorithms. The sparse input layer learns to focus its connectivity on task-relevant features, such that ANF-SAC and ANF-TD3 outperform standard SAC and TD3 by a large margin, while using up to $95\%$ fewer weights. Furthermore, we devise a transfer learning setting for ENEs, by permuting all features of the environment after 1M timesteps to simulate the fact that other information sources can become relevant as the world evolves. Again, ANF surpasses the baselines in final performance and sample complexity. Our code is available at https://github.com/bramgrooten/automatic-noise-filtering
Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew E. Taylor, Mykola Pechenizkiy, Decebal Constantin Mocanu
2023-02-13T17:45:03Z
http://arxiv.org/abs/2302.06548v1
# Automatic Noise Filtering with Dynamic Sparse Training

###### Abstract.

Tomorrow's robots will need to distinguish useful information from noise when performing different tasks. A household robot for instance may continuously receive a plethora of information about the home, but needs to focus on just a small subset to successfully execute its current chore. Filtering distracting inputs that contain irrelevant data has received little attention in the reinforcement learning literature. To start resolving this, we formulate a problem setting in reinforcement learning called the _extremely noisy environment_ (ENE), where up to 99% of the input features are pure noise. Agents need to detect which features provide task-relevant information about the state of the environment. Consequently, we propose a new method termed _Automatic Noise Filtering_ (ANF), which uses the principles of dynamic sparse training in synergy with various deep reinforcement learning algorithms. The sparse input layer learns to focus its connectivity on task-relevant features, such that ANF-SAC and ANF-TD3 outperform standard SAC and TD3 by a large margin, while using up to 95% fewer weights. Furthermore, we devise a transfer learning setting for ENEs, by permuting all features of the environment after 1M timesteps to simulate the fact that other information sources can become relevant as the world evolves. Again, ANF surpasses the baselines in final performance and sample complexity. Our code is available online.1

Footnote 1: See [https://github.com/bramgrooten/automatic-noise-filtering](https://github.com/bramgrooten/automatic-noise-filtering)

deep reinforcement learning; noise filtering; sparse training

## 1. Introduction

Future robots will likely perceive a plethora of information about the state of the world, but only parts of it are going to be relevant to their current task. For instance, a household robot may receive abundant information about all objects and processes in the house.2 For its current task, e.g. making pancakes, only a small subset of these information sources, or _features_, are relevant. Agents should automatically detect which features are task-relevant, without humans having to predefine this. Other examples may be: a hearing aid distinguishing between voices and auditory noise, a surgical robot receiving all possible information about the patient, or a self-driving car that needs to ignore distracting billboards.

Footnote 2: For example: cleanliness of floors, furniture, cupboards, kitchen utensils; CO and CO2 levels, and temperature in each room; up-to-date stock of all food and non-food items in the fridge and/or basement; mood, nourishment, and health of all inhabitants; etc.

To illustrate the current situation: Soft Actor-Critic (SAC) (Sarjani et al., 2017) fails to learn a decent policy on an environment with 90% added noise features, see Figure 1. We simulate the noisy real-world environment by adding synthetic noise features to an existing state space. This allows us to study the problem in a controlled environment to understand where we stand and what can be done.

Figure 1. Performance of SAC on Humanoid-v3 environments expanded with a different number of pure noise features. Once the environment contains too much noise, SAC struggles to learn a decent policy. Standard dense networks cannot filter through the noise well enough on this problem.
We need to invent methods that can effectively filter through the noise while learning to perform the environment's task. Our **research question** becomes: _How can we design RL agents to learn and perform well in an extremely noisy environment?_ Dynamic Sparse Training (DST), a class of methods stemming from the Sparse Evolutionary Training (SET) algorithm (Steintein and Hinton, 1992), is promising in this regard. By starting from a randomly sparsified network and subsequently pruning and growing connections (weights) during training, DST searches for the optimal network topology. DST is able to perform efficient feature selection for unsupervised learning, as shown by (Stein and Hinton, 1992; Hinton, 1992). Further, (Stein and Hinton, 1992) discovered that sparse networks can find minimal task representations in deep RL by pruning redundant input dimensions. Not long after, (Stein and Hinton, 1992) successfully applied DST in deep RL, reducing the number of parameters without compromising performance. This leads us to a plausible approach to our research question. We think that the adaptability of DST can improve an agent's sparse network structure such that task-relevant features are emphasized by receiving more connections than noise features. The combination of sparsity and adaptability enables the agent to filter through the noise more effectively, outperforming dense network approaches. The underlying hypothesis follows: **The Adaptability Hypothesis**: _A sparse neural network layer can adapt the location of its connections (weights) to gain a better performance faster than a dense layer can adapt the weight values to achieve the same gain._ Note that newly grown connections still need to adjust their weight values through gradient descent, but we hypothesize that this generally happens quicker than a dense network modifies all of its weights. Relocated weights may receive a more informative gradient when connected to task-relevant features. Briefly, the hypothesis states: dropping and growing connections is easier than adjusting the weights. This is inspired by our own brain's plasticity, which also dynamically drops and grows synapses (Stein and Hinton, 1992; Hinton, 1992; Hinton, 1992). To verify our hypothesis, we propose a new algorithm called Automatic Noise Filtering (ANF), which can easily be combined with deep RL methods. It has a sparse input layer with adapting connectivity through dynamic sparse training. We compare ANF to two strong baseline deep RL algorithms: SAC (Stein and Hinton, 1992) and TD3 (Stein and Hinton, 1992), which have fully dense layers throughout their networks. We devise the _extremely noisy environment_ (ENE), further defined in Section 2, which expands the state space of an existing RL environment with a large number of noise features. We apply this approach to four continuous control tasks from MuJoCo Gym (Gym, 2009; Sohn et al., 2010). Contributions. * We formulate a problem setting termed the _extremely noisy environment_ (ENE), where up to 99% of the input features consist of pure noise. Agents need to detect the task-relevant features autonomously. * We propose Automatic Noise Filtering (ANF), a dynamic sparse training method that outperforms baseline deep RL algorithms by a large margin, especially on environments with high noise levels. * We devise a transfer learning setting of extremely noisy environments and show that ANF has better performance and forward transfer than the baselines SAC and TD3. 
* We show that highly sparse ANF agents with up to 95% fewer parameters can still surpass their dense baselines on the extremely noisy environments.
* We extend the ENE by adjusting the noise distribution in two ways, increasing the difficulty. ANF maintains its advantage on these challenging extensions.3

Footnote 3: See an illustrative video here: [https://youtu.be/v547UnsTQk8](https://youtu.be/v547UnsTQk8)

_Outline._ In Section 2 we formulate the problem setting. Section 3 gives an overview of the background and related work. Our method is introduced in Section 4, along with the first experiments. In Section 5 we explore the transfer learning setting. Sections 6 through 9 provide further analysis, where we perform an ablation study and discover how far we can extend our problem and algorithm. Finally, Section 10 concludes the paper. Additional results, details, and discussion are in the Appendix.

## 2. Problem Formulation

We introduce a problem setting where agents have to act in environments that contain a lot of noise. As the noise features generally greatly outnumber the task-relevant features in this setting, we simply call it the extremely noisy environment (ENE).

_Extremely noisy environment._ To create an ENE, we take any reinforcement learning environment that generates feature vectors as states. The ENE expands this feature vector by concatenating many additional features consisting only of pure noise, sampled from any given distribution. An agent is not told which features are useful (task-relevant) and which are useless (noise), so it has to learn to ignore the distracting noise features by itself, see Figure 2. In our main experiments, the noise features produce pure Gaussian noise, sampled i.i.d. from \(\mathcal{N}(0,1)\). The fraction of noise features in an ENE is denoted by \(n_{f}\in[0,1)\). For example, for \(n_{f}=0.5\) we enlarge the original state space of a MuJoCo Gym environment by lengthening the state feature vector by a factor of 2. In general, the dimensionality of the new state space is
\[d_{ene}=\left\lceil\frac{d_{og}}{1-n_{f}}\right\rceil\]
where \(d_{og}\) is the number of dimensions in the original state space. As \(n_{f}\) increases to 1, the dimensionality of the ENE expands.

Figure 2. Extremely noisy environments (ENEs) contain many noise features and some relevant features. We use ENEs where up to 99% of the features are noise. Our method Automatic Noise Filtering (ANF) learns to predominantly connect with the input neurons that provide useful information and outperforms dense baselines by a large margin, especially in the noisiest environments.

_Transfer learning setting._ Next to the ENE, we introduce an even more challenging problem setting where, after every \(T_{p}\) timesteps, all input features are permuted at random. This permutation simulates the fact that other features can become relevant over time. Previously irrelevant features might suddenly become relevant, for example, when a household robot gets a new task.4 In our case, the change in environment is _not_ announced to the agent. Agents need to detect the change and transfer their representations quickly to adapt to the new instantiation of the _permutated extremely noisy environment_ (PENE), see Figure 3. A previously task-relevant feature may or may not still be relevant after the permutation, inducing the need to rediscover the distribution of the features and filter through the noise.

Footnote 4: Features could also gradually become more relevant, as the world evolves (i.e.
concept drift). This is outside the scope of our research, we focus on the sudden change. ## 3. Background and Related Work Our proposed ANF algorithm is based on dynamic sparse training (DST). In this section, we briefly overview the related work of DST in reinforcement learning (RL) and existing noise filtering methods. _Sparse training._ Dynamic sparse training is a subfield of the sparse training regime (Rasmari et al., 2017), where weights deemed superfluous are pruned away to increase the efficiency of a neural network. In dense-to-sparse training, dense networks are gradually pruned to higher sparsity levels throughout training (Gross et al., 2016; Gross et al., 2016; Gross et al., 2016). In sparse-to-sparse training, where DST belongs, a network begins with a high sparsity level from scratch (Dong et al., 2017; Wang et al., 2018). The existing connections can either stay fixed (_static_ sparse training) or be pruned and regrown during training (_dynamic_ sparse training). In supervised learning, especially computer vision, many promising results have been achieved with sparsity over the last few years (Gross et al., 2016; Gross et al., 2016; Gross et al., 2016). These algorithms benefit from potential performance boosts, decreased computational costs, and better generalization (Gross et al., 2016; Gross et al., 2016). Furthermore, DST has been used successfully for an efficient feature selection algorithm (Dong et al., 2017), which inspired our project. _DST in RL._ Applying sparse training in reinforcement learning is useful, as real-world applications often deal with latency constraints (Gross et al., 2016), which limits the number of parameters. Unfortunately, in the area of RL it seems that applying sparse training is more challenging than in supervised learning, as the achievable sparsity levels without loss in performance are generally lower (Gross et al., 2016). Only a few papers have applied sparse training to deep RL so far. In the _offline_ RL setting, (Beng et al., 2016) have reached 95% sparsity with almost no performance degradation. While this is impressive, we believe that offline RL is more similar to supervised learning than online RL. Moreover, it does not support learning in changing environments (Sokar et al., 2017). Therefore, we focus on the _online_ RL setting throughout the paper, and even go into the transfer learning setting (Wang et al., 2018). To the best of our knowledge, the first work applying DST to online RL is from (Zhou et al., 2018). They outperform dense networks with the algorithms DS-TD3 and DS-SAC, which combine sparse evolutionary training (SET) (Wang et al., 2018) with TD3 (Gross et al., 2016) and SAC (Gross et al., 2016). The methods of (Zhou et al., 2018) form the foundation of our ANF algorithm. Sokar et al. (2018) reached a global sparsity level of 50%, which was later improved upon by (Gross et al., 2016; Gross et al., 2016), who experimented with sparsity levels up to 99%. They showed that the sparsity level reachable without loss of performance largely depends on the environment. Graesser et al. (Graesser et al., 2016) compared DST methods such as SET (Wang et al., 2018) and RigL (Gross et al., 2016) in many deep RL environments. Their performance proved to be quite similar, so we choose to use only SET. _Noise in RL._ There exist different types of noise that an agent may encounter. 
Let us characterize the two main categories:

* Type 1: uncertainty in perception, for example when an automated vehicle cannot clearly see a traffic sign since the sun is right next to it.
* Type 2: distracting, task-irrelevant percepts, for example the bright colors of a billboard when crossing Times Square in New York City.

Type 1 noise, i.e. measurement errors, is often researched by adding noise _on top of_ existing features to produce more robust agents (Dong et al., 2017; Wang et al., 2018; Wang et al., 2018). This type of noise is outside the scope of this work. Instead, we focus on type 2 noise and investigate it by adding synthetic features _alongside_ the existing features, creating a state space of higher dimensionality. The goal is to discover algorithms that can perform tasks well while having access to all available features, without having to pre-select the task-relevant ones. Feature selection should be carried out automatically by the RL agents.

To the best of our knowledge, the first work to make an existing RL environment noisier by adding extra features was FS-NEAT (Sokar et al., 2018). It introduced an evolutionary algorithm to select relevant features. Most of the follow-up work takes this evolutionary approach (Beng et al., 2016; Gross et al., 2016; Gross et al., 2016), while we use the efficiency of deep learning, stochastic gradient descent, and dynamic sparse training. Our work extends environments that provide the current state as a feature vector. However, it is worth mentioning that environments with visual (pixel) inputs have likewise been augmented to include a noisy challenge, such as distracting backgrounds [(40)]. Other methods that have some similarities to our approach include recognizing distractor signals [(36)], reducing state dimensionality [(8; 14)], and identifying fake features in federated learning [(27)].

Figure 3. In _permutated_ extremely noisy environments (PENE) the order of the input features gets shuffled after a certain number of timesteps. Our ANF agents automatically adjust their network structure to this new environment. They show superior forward transfer compared to fully dense methods, even though ANF agents might need to prune and regrow many connections in the sparse input layer. We hypothesize that adapting the location of sparse connections is easier than adjusting the weights of all connections in a dense network.

## 4. Automatic Noise Filtering

In this section, we explain how our ANF algorithm works, after which we show and interpret the results of our main experiments. ANF is a simple method that can be applied to any MLP-based deep RL algorithm. It is built upon the DS-TD3 and DS-SAC algorithms of [(39)], which use sparse evolutionary training (SET) from [(33)] as the underlying dynamic sparse training method. In both the actor and critic networks, ANF begins by randomly pruning the input layer to the desired sparsity level \(s_{i}\). During training, we drop weak connections of the input layer (weights with the smallest magnitude) after every topology-change period \(\Delta T\). After dropping a certain fraction \(d_{f}\) of the existing weights, ANF randomly grows the same number of connections to maintain the sparsity level \(s_{i}\). By giving new connections enough time to increase their weights, ANF detects task-relevant features without explicit supervision. We provide pseudocode for ANF-SAC in Appendix A.
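To make the drop-and-grow step concrete, the snippet below is a minimal PyTorch-style sketch of one ANF topology update on the sparse input layer. It is our own illustration rather than the pseudocode from Appendix A: the function name `anf_topology_update`, the binary-mask representation of sparsity, and the zero re-initialization of regrown weights are assumptions on our part, chosen to match the description above (magnitude-based dropping, random regrowth, fixed sparsity level).

```python
import torch

def anf_topology_update(weight: torch.Tensor, mask: torch.Tensor, drop_fraction: float = 0.05) -> torch.Tensor:
    """One drop-and-grow step on the input layer (illustrative sketch, not the official code).

    `weight`: dense parameter tensor of the input layer, shape (out_features, in_features).
    `mask`:   binary float tensor of the same shape; 1 marks an existing connection.
    Drops the `drop_fraction` weakest existing connections (by magnitude) and regrows the
    same number at random positions that were inactive, so the sparsity level stays fixed.
    """
    with torch.no_grad():
        flat_w, flat_m = weight.view(-1), mask.view(-1)
        active = flat_m > 0
        n_drop = int(drop_fraction * int(active.sum()))
        if n_drop == 0:
            return mask

        # Candidate positions for growth: connections that were inactive before this step.
        inactive_idx = (~active).nonzero(as_tuple=False).squeeze(1)

        # Drop: among active connections, remove those with the smallest magnitude.
        magnitudes = torch.where(active, flat_w.abs(), torch.full_like(flat_w, float("inf")))
        drop_idx = torch.topk(magnitudes, n_drop, largest=False).indices
        flat_m[drop_idx] = 0.0
        flat_w[drop_idx] = 0.0

        # Grow: activate an equal number of previously inactive positions, chosen uniformly at random.
        grow_idx = inactive_idx[torch.randperm(inactive_idx.numel())[:n_drop]]
        flat_m[grow_idx] = 1.0
        flat_w[grow_idx] = 0.0  # new connections must earn their weights through gradient descent

        weight.mul_(mask)  # keep pruned entries exactly zero
    return mask
```

In a full agent, the mask would be applied to the layer at every forward pass (and, as discussed below, to the optimizer's moment estimates), and an update of this kind would be triggered once per topology-change period \(\Delta T\).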
One aspect that sets ANF apart from the previous works on non-noisy settings [(20; 39)] is that we only sparsify the input layer. This helps us to pinpoint the support of DST on our Adaptability Hypothesis. Furthermore, in extremely noisy environments it is essential to filter through the large fraction of noise. Dynamic sparse training can perform this filtering elegantly. It works well to focus the DST principle on the first layer only, as this is where the distinction between relevant and noise features is made. In Section 9 we investigate models that also have sparse hidden layers. Another difference between ANF and DS-TD3/SAC [(39)] is that we mask the running averages of first and second raw moments of the gradient within the Adam optimizer [(25)] for non-existing connections. When connections are dropped and later regrown, they do not have access to previous information if implemented in a truly sparse manner. This aspect has been overlooked in the implementation of some sparsity research papers that apply Adam and only simulate true sparsity with binary masks on top of the weight matrices. Our research also utilizes such binary masks while keeping the truly sparse implementation in mind. See Appendix C for further discussion. _Experimental setup._ We integrate our ANF method in two popular deep RL algorithms: SAC and TD3. This means we compare the algorithms ANF-SAC and ANF-TD3 with their fully-dense counterparts as baselines. Furthermore, we compare to the closely related DS-SAC and DS-TD3, which both use their default global sparsity level of 50%. All neural networks have two hidden layers of 256 neurons with the ReLU activation function. After a hyperparameter search for ANF, we set the input layer sparsity \(s_{i}\) to 80%, the topology-change period \(\Delta T=1000\) timesteps, and the drop fraction \(d_{f}=0.05\). Further hyperparameter settings replicate prior work [(19; 21; 39)]. See Appendix B for additional details. Our experiments are carried out in four continuous control environments from the MuJoCo Gym suite: Humanoid-v3, HalfCheetah-v3, Walker2d-v3, and Hopper-v3. We first run an experiment without any added noise features as a baseline and then start increasing the noise level. The fraction of noise features, \(n_{f}\), ranges over the set \(\{0.8,0.9,0.95,0.98,0.99\}\). Note that the state spaces of these settings increase by \(5\times,10\times,20\times,50\times,\) and \(100\times,\) respectively. We train our agents for 1 million timesteps and evaluate them by running 10 test episodes after every 5000 timesteps. We measure the average return over these last 10% of training, as done in [(20)], for overview graphs such as Figure 4. Throughout the paper, we run 5 random seeds for every setting. In the graphs, we show the average curve as well as a 95% confidence interval. \begin{table} \begin{tabular}{l l l l l} \hline \hline Environment & State dim. & Action dim. & State dim. & State dim. \\ & _Original_ & _Original_ & _ENE (\(n_{f}\) = 8)_ & _ENE (\(n_{f}\) = 99)_ \\ \hline Humanoid-v3 & 376 & 17 & 1880 & 37600 \\ HalfCheetah-v3 & 17 & 6 & 85 & 1700 \\ Walker2d-v3 & 17 & 6 & 85 & 1700 \\ Hopper-v3 & 11 & 3 & 55 & 1100 \\ \hline \hline \end{tabular} \end{table} Table 1. State and action space dimensions. Figure 4. Performance of _Automatic Noise Filtering_ (ANF) compared to its baselines for different noise fractions \(n_{f}\). The curves show average return in the last 10% of training over 5 seeds, with shaded regions representing 95% confidence intervals. 
When the number of noise features in an environment increases, the performance of the standard fully-dense networks of SAC deteriorates much faster than that of ANF-SAC. Similar graphs for ANF-TD3 are shown in Figure 15 of Appendix D.1.

_Results._ First of all, the horizontal lines in Figure 4 show that even in environments without noise, ANF-SAC is able to reach similar or better performance than SAC and DS-SAC for Humanoid-v3 and HalfCheetah-v3. By adjusting the connectivity of the input layer, ANF is able to select the set of most important features, the so-called _minimal task representation_ (Kumar et al., 2017). Furthermore, when the noise level increases, our ANF method outperforms the dense baseline by a significant margin on all environments. Especially in the noisiest environments, when \(n_{f}\geq 0.95\), a large gap is visible between ANF-SAC and SAC for HalfCheetah, Walker2d, and Hopper. The Humanoid environment is an exception, as ANF outperforms its baseline much earlier here but then struggles with the high noise levels as well. Table 1 shows that Humanoid-v3 differs noticeably from the other three environments by the size of its state space. The learning curves in Figure 6 indicate that SAC and TD3 are unable to learn a decent policy within 1M timesteps in this challenging extremely noisy environment. ANF learns to ignore the distracting noise and reaches a performance level similar to that of SAC and TD3 in the environment without noise.5 In the environments HalfCheetah, Walker2d, and Hopper, we continue to observe this behavior up to a noise fraction of 98%, as shown in Figure 5. ANF outperforms its baselines by a large margin in each environment. Footnote 5: Which is a return of \(\sim\)4500, see SAC's dashed line in Figure 4, Humanoid-v3.

_Topology shift._ To analyze what is actually happening, we visualize the connectivity of ANF. In Figure 7, we present a graph that shows the development of the network's topology over time. The graph clearly demonstrates a topology shift in the input layer: on the one hand, the average number of connections to task-relevant features rises, while on the other hand, noise features receive fewer weights. Together with the increased performance shown in Figure 4, this fully supports our Adaptability Hypothesis.

## 5. Transfer Learning

During a robot's lifetime, it may happen that other information sources become relevant to its task. Moreover, the agent may receive an entirely new task, which can require it to focus its attention on totally different state features. We simulate this change in a _permutated extremely noisy environment_ (PENE), as described in Section 2. The PENE rearranges all input features with a fixed permutation after every \(T_{p}\) timesteps. For our experiments, this means that the relevant and noise features are now mixed instead of concatenated. The agents will have to rediscover which input neurons are receiving task-relevant signals. Note that the PENE setting does _not_ announce the change in environment to the agent.

Figure 5. Learning curves on 3 environments with 98% noise features. This level of noise is too high for standard SAC and TD3 to learn a decent policy. ANF can still filter through the distraction and find a well-performing policy.

Figure 6. Learning curves for Humanoid-v3 with 90% noise features. While standard SAC and TD3 are too distracted by the noise to learn the task, ANF finds the task-relevant features and is able to improve.

Figure 7. Average number of connections in the input layer of one of ANF-TD3's critic networks, on HalfCheetah-v3 with 90% noise features. At the start of training every input neuron has around \(256\cdot 0.2\approx 51\) connections, because the input layer sparsity is 80% and connections are allocated uniformly at random. During training, ANF gradually prunes connections from the noise features and grows connections to the relevant features. A similar graph for ANF-SAC is shown in Figure 16 of Appendix D.1.

_Experimental setup._ We set \(T_{p}\) to 1M timesteps, such that agents have enough time to learn. We run on the same four environments with a noise fraction of \(n_{f}=0.95\). In these experiments, we now train for 4 million timesteps, meaning that agents encounter four different instances (sub-environments) of feature permutations. Similar to the experiments of Section 4, we compare ANF-SAC and ANF-TD3 with their fully dense baselines and DS-SAC/TD3. We show 95% confidence intervals over 5 seeds.

_Results._ Figure 8 shows the results for ANF-TD3 on Humanoid and HalfCheetah. See Appendix D.2 for the graphs of the remaining algorithms and environments. It is evident that the performance drops considerably after each permutation of features. However, ANF is able to recover faster than the dense baselines in all environments. The method does not need to be adjusted for the challenging PENE setting; ANF keeps adapting the sparse input layer as before. For Humanoid, some beneficial internal representations may be transferred forward, as the performance increases much earlier in the third sub-environment (between 2M and 3M timesteps) than when it is trained from scratch (between 0 and 1M timesteps). However, on the fourth sub-environment some ANF agents struggled a bit: each random seed determines not only the initialization of the agent, but also the random permutations of the environment. Thus, some sub-environments can be more challenging than others.

_Maintaining plasticity._ Agents that have to learn continually must be able to maintain plasticity. Standard methods are unable to do so, as shown by (Krishnan et al., 2018). Since the connections of the input layer can drop and grow dynamically, ANF ensures that the agent has sufficient adaptability to adjust to a new environment. We analyze this plasticity by looking into the connectivity of the input layer once more, as done earlier in Figure 7 for the ENE experiments. Now, in Figure 9, we see that the average number of connections to task-relevant features quickly recovers after an environment change in the PENE. At every 1M steps, the PENE shuffles the features, which makes the average number of connections to relevant features drop considerably, close to the initial value. This is because many task-relevant signals are now coming in at input neurons that were previously receiving noise. The fact that ANF has not pruned all connections to the irrelevant noise features after training on the first sub-environment is actually an advantage in this PENE setting. It means that ANF may be able to reach a high number of connections to new task-relevant features faster, as the agents already have some 'spare' connections waiting.

## 6. Louder Noise

All of our experiments so far have been executed with noise features sampled from the standard Gaussian distribution \(\mathcal{N}(0,1)\). But what would happen if we increase the standard deviation, i.e. the noise amplitude?
We expect the louder noise to be more distracting, increasing the difficulty of the ENE. We hope to discover whether ANF can cope with this additional challenge. Figure 8. Performance of ANF-TD3 and its baselines on permuted extremely noisy environments (PENE) with 95% noise features. After every 1M timesteps, the environment’s features are shuffled with a random permutation. ANF is able to cope with this challenge, while the fully dense networks of TD3 are struggling. DS-TD3 performs decently, but ANF has an advantage by focusing its sparsity on the input layer. Similar graphs for ANF-SAC and other environments are shown in Appendix D.2. **Experimental setup.** We run ANF and its dense baselines on the ENE of HalfCheetah-v3, but the noise is now sampled from \(\mathcal{N}(0,\sigma^{2})\). We let the standard deviation \(\sigma\) increase exponentially, ranging over the set \(\{1,2,4,8,16\}\). The ENE contains 90% of these louder noise features (\(n_{f}=0.9\)). _Results._ From the experimental results, we can conclude that louder noise does make the ENE more challenging. In Figure 10, it is clearly visible that as the noise amplitude increases, the performance decreases. Fortunately for ANF, this decrease is much less pronounced compared to its dense baseline. In fact, the final return of ANF-SAC on an ENE with noise amplitude \(\sigma=16\) is almost the same as SAC's performance on the standard \(\sigma=1\) environment. ANF can cope well with even the loudest noise.6 Footnote 6: See [https://youtu.be/v547Um=TQ&s](https://youtu.be/v547Um=TQ&s) for a video comparing the actual motion of HalfCheetah when controlled by ANF-SAC vs. SAC with different noise amplitudes. ## 7. Imitating Real Features Upon closer inspection of the data distribution of the original state features, we discovered that these are far from a standard Gaussian distribution. In Appendix E, we present visualizations of the distributions of task-relevant features before and after training.7 To get closer to real-world noise, we want the noise features of our ENEs to imitate the original features. We expect that this increases the difficulty of our extremely noisy environments, as the noise is now much more similar to the task-relevant features. Footnote 7: The challenging distribution shift of RL is clearly visible! _Experimental setup._ For each of the original features, we make a histogram of its final distribution (after training an agent in the standard environment), as shown in Appendix E. In the experiments of this section, the ENE samples from these histograms8 to generate noise features for the next state. We repeatedly sample from the distribution of each original feature until we have enough noise features. We run this experiment on HalfCheetah-v3, with 90% of these noise features that imitate the task-relevant features. Footnote 8: By first sampling a bin according to the histogram’s probability mass function, and then sampling a value uniformly at random within the chosen bin. _Results._ In Figure 11, we see that the imitated noise indeed raises the difficulty of the ENE. The performance of both ANF-TD3 and TD3 decreases considerably compared to the standard \(\mathcal{N}(0,1)\) noise. However, ANF is still able to outperform its dense baseline by a large margin, even in this challenging ENE. ## 8. Does Anf Need to be Dynamic? In this section, we perform an ablation study to show that the dynamic network topology updates of ANF are a necessary component of the algorithm. 
Removing these dynamic updates during training would give a static sparse training algorithm. This algorithm starts with a randomly sparsified input layer just like ANF, but it does not drop or regrow any connections during training.

_Experimental setup._ We compare ANF to its fixed-connectivity counterpart, which we call _Static-ANF_. In addition, we compare with the standard dense algorithms SAC and TD3. We run on Humanoid-v3 with 90% noise features.

Figure 11. Learning curves for ANF-TD3 and its dense baseline on the challenging ENE, where the noise features imitate the task-relevant features. This increases the difficulty, but ANF still achieves the highest return. The ENE has 90% of these realistic noise features in this experiment. See Figure 24 in Appendix D.4 for the graph of ANF-SAC.

Figure 10. ANF-SAC and its dense baseline on ENEs with louder noise. The noise features are sampled from \(\mathcal{N}(0,\sigma^{2})\). Noise amplitude \(\sigma\) is increased exponentially, notice the log-scale on the horizontal axis. ANF tolerates louder noise much better than standard SAC, maintaining a high performance up to \(\sigma=16\). In this experiment \(n_{f}=0.9\). The graph for ANF-TD3 is given in Figure 23 of Appendix D.3.

Figure 9. Average number of connections in the input layer of an ANF-SAC critic network, on HalfCheetah-v3 with \(n_{f}=0.95\). Every 1M timesteps, the PENE permutes the order of the features. The ANF agent adjusts its network structure quickly, growing connections to the task-relevant features, which are now sent to different input neurons. A similar graph for ANF-TD3 is shown in Figure 21 of Appendix D.2.

_Results._ The graphs in Figure 12 show that ANF indeed needs to be dynamic, as it significantly outperforms its static version. Intuitively, this is consistent with the concept shown in Figure 7, where ANF changes its connectivity to emphasize its focus on the task-relevant features. This emphasis is lost if one removes ANF's ability to dynamically adjust the network structure. Nevertheless, it is remarkable to see that Static-ANF is able to surpass standard dense TD3 in this setting. It seems that just having fewer connections to the 90% noisy input features already helps to lower the overall distraction.

## 9. How Sparse Can We Go?

We already showed that sparsifying the input layer can significantly improve performance in extremely noisy environments. In this section, we investigate whether we can further sparsify the agent's networks by also pruning connections in other layers. This would reduce the network size (total number of parameters) even further, while hopefully maintaining performance.

_Experimental setup._ Instead of only having an 80% sparse _input_ layer, the networks now also have a sparse _hidden_ layer for which the connectivity is frequently adjusted with DST. We keep the output layer dense, just as in (Nagy et al., 2018). The sparsity distribution is uniform, meaning that both the input layer and the hidden layer have the same sparsity level. We compute the required layer sparsity levels such that the global sparsity level (over the full network) is at \(s\), where \(s\) ranges over {80%, 90%, 95%, 98%}. We compare with our standard ANF algorithm, which has a global sparsity of 74.6% for \(n_{f}=0.9\) (actor network on Humanoid). We run on Humanoid-v3 and HalfCheetah-v3, as the achievable sparsity level before performance degradation differs significantly between these two environments.
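As a rough illustration of how such uniform layer sparsities can be derived, the sketch below solves for the shared sparsity of the prunable layers given a target global sparsity and a dense output layer. It is our own helper, not code from the paper; the function name and the (fan_in, fan_out) layer-shape convention are assumptions, and bias terms are ignored for simplicity.

```python
def uniform_layer_sparsity(layer_shapes, global_sparsity):
    """Shared sparsity for all prunable layers so the full network reaches `global_sparsity`.

    `layer_shapes`: list of (fan_in, fan_out) tuples; the last entry is the output layer,
    which is kept dense. Bias terms are ignored in this rough sketch.
    """
    prunable = sum(i * o for i, o in layer_shapes[:-1])      # input + hidden layer weights
    dense_out = layer_shapes[-1][0] * layer_shapes[-1][1]    # output layer stays dense
    total = prunable + dense_out

    keep_total = (1.0 - global_sparsity) * total             # weights we may keep in the whole network
    keep_prunable = keep_total - dense_out                   # budget left for the prunable layers
    if keep_prunable <= 0:
        raise ValueError("global sparsity too high: the dense output layer alone exceeds the budget")
    return 1.0 - keep_prunable / prunable

# Example: HalfCheetah actor in the ENE with 90% noise (170 inputs, two hidden layers of 256, 6 actions).
shapes = [(170, 256), (256, 256), (256, 6)]
print(uniform_layer_sparsity(shapes, 0.80))   # ~0.81 per prunable layer
```

Under this accounting, keeping 20% of the 110,592 actor weights leaves roughly 22,118 parameters, which is consistent with the Sparser(80%)-ANF-TD3 actor size reported in Table 2.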
_Results._ We see in Figure 13 that on extremely noisy environments, nearly the same performance can be reached with further sparsified networks, meaning that a large proportion of the parameters can be pruned. In Figure 13 (left), we see that ANF-TD3 with a global sparsity of 80% still surpasses standard dense TD3 in final return on HalfCheetah-v3. As the global sparsity increases, the performance gradually decreases. This is quite different for Humanoid-v3, in Figure 13 (right). Notice that ANF-TD3 can go up to a global sparsity level of 95%, with barely any performance degradation. This means it can use 20% fewer parameters than standard TD3, reducing the network size considerably, as shown in Table 2. ## 10. Conclusion & Limitations In this work, we formulated the problem setting of extremely noisy environments and showed that our Automatic Noise Filtering algorithm succeeds at this challenge where standard deep RL methods struggle. By using dynamic sparse training, the ANF algorithm adjusts its network topology to focus on task-relevant features. Our experiments provide an initial empirical verification of our Adaptability Hypothesis, which roughly states that dropping and growing sparse connections is easier than adjusting dense weights. Further research is necessary to grant more conclusive evidence, as our work is limited to continuous control tasks that have feature vectors as states. We exclusively studied SAC and TD3; integrating ANF in other deep RL methods is open for future work. The input layer sparsity is an important hyperparameter for ANF. Making this an adaptive parameter could be beneficial. Further, we believe expanding ANF's compatibility towards other neural network types is a promising future research direction. Combining ANF with convolutional NNs or transformers would open the possibility of operating on noisy image or video data. \begin{table} \begin{tabular}{l c c c} \hline \hline Algorithm & Environment & Return (\(\uparrow\)) & \# Params. (\(\downarrow\)) \\ \hline ANF-TD3 & Humanoid & **4968.3** & 262,400 \\ TD3 & Humanoid & 817.3 & 1,032,448 \\ Sparser(80\%)-ANF-TD3 & Humanoid & 4963.1 & 206,489 \\ Sparser(95\%)-ANF-TD3 & Humanoid & 4806.4 & **51,622** \\ ANF-TD3 & HalfCheetah & **11086.4** & 75,776 \\ TD3 & HalfCheetah & 9452.6 & 110,592 \\ Sparser(80\%)-ANF-TD3 & HalfCheetah & 10640.3 & 22,118 \\ Sparser(95\%)-ANF-TD3 & HalfCheetah & 8357.9 & **5,529** \\ \hline \hline \end{tabular} \end{table} Table 2. Comparison of different algorithms, along with their parameter counts for the actor network. The critic networks have comparable numbers of parameters. Figure 12. Comparison of ANF to its static sparse counterpart and standard (fully dense) TD3. ANF-TD3 dynamically updates the sparse network topology, while the topology of the other two methods remains static. The learning curves show that the dynamic updates are an essential part of ANF. This experiment is run with 90% noise features. A similar graph for SAC is shown in Figure 25 of Appendix D.5. Figure 13. Performance of ANF-TD3 and its sparser versions, on 90% noise features. Sparser-ANF also prunes weights in the hidden layer, instead of only the input layer. Further sparsifying the network does not improve performance, but can drastically reduce the size of the network. The graphs for ANF-SAC are shown in Figure 26 of Appendix D.6. 
## Acknowledgments This publication is part of the project AMADeuS (with project number 18489) of the Open Technology Programme, which is partly financed by the Dutch Research Council (NWO). This research used the Dutch national e-infrastructure with the support of the SURF Cooperative, using grant no. EINF-3098. Part of this work has taken place in the Intelligent Robot Learning (IRL) Lab at the University of Alberta, which is supported in part by research grants from the Alberta Machine Intelligence Institute (Amii); a Canada CIFAR AI Chair, Amii; Compute Canada; Huawei; Mitacs; and NSERC. We thank Joan Falco Roget, Mickey Beurskens, Anne van den Berg, and Rik Groet for the fruitful discussions. Finally, we thank the anonymous reviewers and Antonie Bodley for their thorough proofreading and useful comments.
2308.13678
Textureless Deformable Surface Reconstruction with Invisible Markers
Reconstructing and tracking deformable surface with little or no texture has posed long-standing challenges. Fundamentally, the challenges stem from textureless surfaces lacking features for establishing cross-image correspondences. In this work, we present a novel type of markers to proactively enrich the object's surface features, and thereby ease the 3D surface reconstruction and correspondence tracking. Our markers are made of fluorescent dyes, visible only under the ultraviolet (UV) light and invisible under regular lighting condition. Leveraging the markers, we design a multi-camera system that captures surface deformation under the UV light and the visible light in a time multiplexing fashion. Under the UV light, markers on the object emerge to enrich its surface texture, allowing high-quality 3D shape reconstruction and tracking. Under the visible light, markers become invisible, allowing us to capture the object's original untouched appearance. We perform experiments on various challenging scenes, including hand gestures, facial expressions, waving cloth, and hand-object interaction. In all these cases, we demonstrate that our system is able to produce robust, high-quality 3D reconstruction and tracking.
Xinyuan Li, Yu Guo, Yubei Tu, Yu Ji, Yanchen Liu, Jinwei Ye, Changxi Zheng
2023-08-25T21:35:14Z
http://arxiv.org/abs/2308.13678v2
# Textureless Deformable Surface Reconstruction with Invisible Markers ###### Abstract Reconstructing and tracking deformable surface with little or no texture has posed long-standing challenges. Fundamentally, the challenges stem from textureless surfaces lacking features for establishing cross-image correspondences. In this work, we present a novel type of markers to proactively enrich the object's surface features, and thereby ease the 3D surface reconstruction and correspondence tracking. Our markers are made of fluorescent dyes, visible only under the ultraviolet (UV) light and invisible under regular lighting condition. Leveraging the markers, we design a multi-camera system that captures surface deformation under the UV light and the visible light in a time multiplexing fashion. Under the UV light, markers on the object emerge to enrich its surface texture, allowing high-quality 3D shape reconstruction and tracking. Under the visible light, markers become invisible, allowing us to capture the object's original untouched appearance. We perform experiments on various challenging scenes, including hand gestures, facial expressions, waving cloth, and hand-object interaction. In all these cases, we demonstrate that our system is able to produce robust, high-quality 3D reconstruction and tracking. ## 1 Introduction In numerous applications, from animation synthesis and motion analysis to video recognition and robotic manipulation, a fundamental task is to reconstruct a 3D deformable object and track its surface correspondences over time. Typical use cases include 3D capture of hand gestures [58, 70], facial expressions [13, 21], and cloth deformation [63, 26]. To establish surface correspondences, a crucial clue is the deformable object's surface texture. A well textured surface provides rich features describing the surface's local appearance, allowing many existing methods to find surface point correspondences across multiple views of images at a time instant as well as across a sequence of images over time [4, 5, 6, 17, 22, 40, 41, 42, 61]. Moreover, recent advances on deep neural networks also inspire the use of neural networks to 3D reconstruct deformable objects that are well textured [23, 43]. However, when an object has little texture or is _textureless_--such as human skin or a piece of clean paper--these existing methods all fall short. Several recent works aim to 3D reconstruct textureless deformable objects by leveraging neural networks trained on specific datasets (e.g., papers and cloths) [7, 18, 56]. Yet, they focus on surface reconstruction, unable to predict surface correspondences for tracking. They are trained on specific datasets and cross-category generalization remains challenging. Lastly, they all face a chicken-and-egg problem: to prepare a dataset to train these networks in the first place, one needs to 3D reconstruct and track deformable objects that are textureless. Perhaps the most straightforward solution to work with textureless surfaces is by attaching optical makers to the surface, thereby explicitly introducing rich textures for 3D reconstruction and tracking. But this approach is intrusive to the object's surface, not able to preserve the object's authentic surface appearance, and thus has limited applications. In this work, we overcome these limitations by introducing a novel type of optical markers--one that can be put on the object surface while fully preserving its appearance in normal lighting condition. 
Our markers are made of _fluorescent dyes_, unseen under the visible light but visible under the ultraviolet (UV) light. To make use of these fluorescent markers, we propose a multi-camera imaging system that captures the deformable object under regular lights and UV lights in an interleaving fashion. Under UV lights, markers on the object surface are captured; they provide rich features that enable high-quality 3D reconstruction and tracking of the object's geometry. Under regular lights, markers disappear, and we are able to capture and reconstruct the object's original surface appearance. Our imaging system can be used in a studio setting for volumetric acquisition and key point tracking, thus accel erating the process of digital asset production. It can also produce training data for neural networks that aim to infer from a conventional video the 3D shape and motion of a deformable object. One example is the neural network that tracks 3D hand gestures in a video [58]. Its training dataset must include sequences of 3D hand models and video images that capture the hand's untouched appearance. Yet, conventional optical markers, which would change the surface appearance, can not fulfill this requirement. As a consequence, while various such networks have been proposed, high-quality datasets for training them remain scarce. We conduct experiments on three types of deformable objects, including hand, face, and cloth/paper. Testing on challenging scenes, such as hand-object interaction and gestures with crossed figures, we demonstrate that our system produces high-quality 3D reconstruction and feature tracking even in the presence of heavy occlusions. The dataset that we have captured in this work will be released to the public for research use. In summary, we highlight the following contributions: * We propose to apply fluorescent markers on textureless surfaces, and analyze the reflectance response of the marker on different types of surfaces. * We develop a multi-camera imaging system with UV lights to capture the textureless surface simultaneously with and without the marker. * We develop a template-based algorithm for 3D reconstruction and tracking correspondences of the textureless deformable surface. * We validate our system on a variety of challenging scenes, and we will release a dataset for deformable object reconstruction and tracking. ## 2 Related Work **Invisible light imaging.** In computational photography, the idea of invisible light imaging has been explored: use specialized light sources and sensors for emitting and capturing light outside of the visible spectrum (i.e., in infrared \(>750nm\) or ultraviolet \(<380nm\) region). This setup complements visible light imaging (between \(380nm\) to \(750nm\)) in order to enhance imaging capacity. Some techniques combine color images with near-infrared (NIR) images for denoising [34, 59, 66], deblurring [52, 67], super-resolution [28, 54] and geometry estimation [15, 65, 16]. Notably, Wang [60] use infrared illumination to relight faces, in order to reduce the effect of uneven face color in a video conference setting. Later, Krishnan and Fergus [34] propose the "dark flash" that uses near-infrared (NIR) and near-ultraviolet (NUV) flash light to replace the dazzling conventional flash. Their goal is to obtain high-quality images under weak ambient illumination without disturbing the human target. 
Inspired by their work, many methods have then been proposed to use NIR images (captured with NIR illumination) to denoise and deblur degraded color images captured under low-light conditions [59, 66, 29, 67, 52]. Other than that line of works, Blasinski and Farrell [9] propose using narrow-band multi-spectral flash for color balancing, and Choe [15] derive an NIR reflectance model and use NIR images to recover fine-scale surface geometry. Unlike the widely studied NIR imaging techniques, ultraviolet (UV) imaging has received much less attention. The dark flash techniques [34, 59] analyze the object's spectral response under the UV light for denoising. In our work, we leverage fluorescent ink that is only visible under UV light to enrich features on object surface, without changing the object's appearance under visible light. **UV fluorescence imaging.** UV light has been used to reveal substances or structures that are unseen under the visible light illumination. This is because some substances exhibit fluorescent reflectance, one that absorbs short wavelength light (, UV light) and emit light at a longer wavelength (, visible light) [35, 47]. This phenomenon lays the foundation in a wide range of imaging applications, including forensics [62, 44], biomedical imaging [33, 24], and material analysis [68, 57]. In computer vision, the fluorescent reflection has been exploited for shape reconstruction [55], immersive range scanning [30], inter-reflection removal [19], multi-spectral reflectance estimation [10], and material classification [3], to name a few. Motivated by graphics rendering tasks, Wilkie [64] derive a reflectance model for diffuse fluorescent surfaces. Inspired by this model are latter works on fluorescence-based reflectance analysis [31] and shape reconstruction [47]. There are also works on analyzing the spectral response of fluorescent reflection, leading to techniques for appearance separation [2, 69], fluorescent relighting [20, 36], and camera spectral sensitivity estimation [25]. We also leverage fluorescence. But in contrast to those existing work, we proactively introduce fluorescent markers to the object surface to enrich its surface texture. We also design an imaging system that simultaneously captures images under the UV light (to trigger the marker) and the visible light (to cancel the maker) via time multiplexing. **Deformable surface reconstruction & tracking.** Our overarching goal is to recover the 3D shape and track correspondences on a dynamic deformable surface. In this direction, various sensor configurations have been explored, including the use of single camera [17, 37, 41], multiple cameras [8, 12, 48], and color cameras in tandem with depth cameras [11, 71] and with event cameras [39]. By far the most successful are methods that rely on local appearance to find feature point correspondences and then match 2D image features to a 3D shape template for surface reconstruction and tracking [4, 5, 14, 40]. However, these methods are hampered when the surface is sparsely or repetitively textured, or has no texture at all, as in those cases, the number of feature point correspondences are insufficient and the matching quality is not reliable. When the deformable surface are weakly textured, dense pixel-level template matching may be helpful [37, 38, 41, 46]. Nevertheless, none of these methods can handle surfaces with no texture (, textureless surfaces)--for example, a piece of white paper. 
In contrast, we are able to reconstruct high-fidelity 3D surface and track its deformation even for textureless surfaces, thanks to the markers that introduce local features while preserving the surface's original appearance. ## 3 Method We now introduce the physical principle behind our fluorescent markers, their spectral response, and our imaging system that leverages fluorescent markers for robust feature tracking and surface reconstruction. ### Invisible Marker **Fluorescent dyes.** Fluorescent material exhibits the Stokes shift [51]--an optical phenomenon wherein the material absorbs short wavelength light, but re-emit light at a longer wavelength. This phenomenon is caused by the material molecules' quantum behavior: when electrons of a fluorescent material are irradiated by short wavelength light (, UV light), they enter into an excited state after absorbing the light energy, and then immediately de-excite and emit outgoing light at a longer wavelength (usually in the visible-light spectrum). This process is illustrated in Fig. 1-a. Our invisible markers are made of fluorescent dyes, which are soluble in organic solvent (e.g., alcohol or acetone). The solution is transparent; and using it as the ink in a fountain pen, we draw dot- or line-shaped markers on the object surface. The resulting markers, due to their transparency, are invisible under the visible light, thereby preserving the object's authentic appearance. The markers emit visible light (and thus become visible) only under the UV light. Moreover, the emitted light fade immediately--often within \(10^{-8}sec\)--after the UV light is off. Therefore, by using the UV light to trigger the markers' emission and synchronizing the trigger with the camera shutters, we can capture images with and without the markers visible in a time multiplexing manner (see details in Section 3.2). We look for fluorescent dyes that satisfy two more criteria: 1) the fluorescent emission under UV light has high contrast and strong visibility; and 2) the dyes are biologically safe and non-toxic to human skin. In this work, we find that the MaxMax UV dyes1 meet our needs. In particular, we use two types of fluorescent dyes emitting blue and red lights, respectively. We refer to them as _UV-blue_ and _UV-red_ dyes. To prepare the fluorescent ink to draw markers, we dissolve the UV-blue dye in 70% alcohol and UV-red in acetone. Both types of markers can be triggered for emission under \(365nm\) UV light (see Fig. 1-b). We choose this UV light, because it is in the range of UVA, the safest UV spectrum to human skin, abundant in natural sunlight. By using two types of dyes, we are able to use their emission colors to differentiate and segment multiple objects (see Fig. 2). This is useful in challenging scenes such as hand-object interactions. Footnote 1: [https://maximax.com/phosphorsdyesandinks](https://maximax.com/phosphorsdyesandinks) Fluorescent emission has narrow-banded wavelengths (Fig. 1-b). Thus, the hue values it introduces on captured fluorescent images stay in a small range. Exploiting this fact, we can easily detect markers on the fluorescent images: we convert the images into the HSV color space, and label pixels whose hue values fall into the fluorescent material's emission range. This marker detection in HSV color space is robust even when the surface itself is fluorescent. 
This is because the narrow-banded fluorescent emission peaks at a certain spectral location, and it is very unlikely that the surface's emission, if any, peaks at the same location as our two types of markers, which by themselves have different emission peaks. Figure 1: **(a)** The physical principle of fluorescence. **(b)** The absorption and emission spectra of the fluorescent dyes used in our invisible markers. We use two types of dyes with the peak emission in blue (UV-blue) and red (UV-red) colors, respectively. The purple curve indicates the spectral profile of our UV LED light peaked at \(365nm\). Figure 2: Objects covered with our fluorescent ink (UV-blue and UV-red). **(left)** Image under the visible light. **(right)** Image under the UV light, wherein the blue and red emission becomes apparent. **Spectral response.** Our fluorescent ink is made by dissolving fluorescent dyes in the solvent (i.e., alcohol or acetone). Caution is needed when choosing the solution's concentration level. The higher the concentration is, the brighter the fluorescent emission under the UV light becomes. On the one hand, if the fluorescent emission is too bright, the markers will saturate image pixels, rendering the marker detection based on hue values much harder. On the other hand, if the concentration is too low, the markers appear too dim, and the detection also becomes less robust. We choose the concentration level through systematic measurements. We test both UV-blue and UV-red dyes. For each type of dyes, we prepare the fluorescent solution of different concentrations spaced by \(1/5\) dilution ratio (i.e., \(1/5\), \(1/25\), \(1/125\),...), covering concentration levels from \(1/5\) to \(1/15625\). We then measure the spectral responses of each sample using a modular multimode microplate reader (BioTek Synergy H1). Under \(365nm\) UV light, the measurement records the response of fluorescent emission in the range of \(400nm\) to \(700nm\) (visible light spectrum) with a \(2nm\) step. A subset of the measured response curves are shown in Fig. 3. Through the measurements, we find that for both types of dyes, the desirable concentration level is \(1/625\), and we thereafter use this ratio throughout our experiments. We also measure the spectral response of our markers painted on different types of surfaces, including the skin of a mouse (after hair removal), a piece of paper, and a cotton cloth. We include a mouse skin (kindly provided by a bio-medical lab) because its composition is very close to the human skin. We use a hyperspectral camera (Pika XC2) to measure the spectral response of fluorescent emission under \(365nm\) UV light (see Fig. 4). The spectral response curves are shown in Fig. 5. The results indicate that each type of markers has similar spectral response on various types of surfaces, suggesting that our hue-based marker detection algorithm is applicable to a wide range of surfaces. **Biological safety.** Since we will use our system to capture human hands and faces, we address the safety concerns carefully. First, the fluorescent dyes we use are non-toxic and stable under non-direct sunlight. When applying markers on human skin, we only use the UV-blue dye dissolved in 70% alcohol, because the alcohol solvent, commonly used for medical sterilization, is safe for skin contact. The UV-red dye is dissolved in acetone, which is commonly used as nail polish remover. But the acetone may irritate human skin. 
Therefore, we do not apply it on human skin and only use it on inanimate objects (such as paper and cloth). UV light is present in natural sunlight--it constitutes about 10% of the total electromagnetic radiation from the sun, and about 95% of UV light is in the UVA spectrum (_i.e._, from \(315nm\) to \(400nm\)) [32]. But long-term exposure or short-term over-exposure to UV light can cause potential harm to human skin and eyes. To prevent such harm, in our experiments, we strictly follow the Threshold Limit Values and Biological Exposure Indices (TLV/BEI) guidelines [1] to limit the UV illumination. When we use our imaging system to acquire human hand gestures or faces, a typical acquisition session lasts no more than 10 minutes, during which the UV lights are turned on for only 36 seconds (see details on our illumination triggering in Section 3.2). Using a spectrometer, we measure the UV irradiance when our imaging system is in use; the average UV irradiance is \(24\mu W/cm^{2}\). As a comparison, the UV irradiance on a bright sunny day under direct sunlight is about \(250\mu W/cm^{2}\) [32], that is, around ten times our UV light irradiance. Therefore, we believe our system will not cause any UV over-exposure risk. To be extra cautious, we always apply sunscreen cream on human hands and faces before painting the markers and exposing them to our UV lights. Also, our system operator always wears UV protective glasses with Optical Density (OD) \(>6\) during the acquisition process. In sum, our markers and UV lights cause no significant safety hazard, and we have taken additional safety measures for protection.

Figure 3: The spectral response of fluorescent solutions with different concentration levels of the UV-red (left) and UV-blue (right) dyes.

### 3.2 Acquisition System

**System setup.** We build a multi-camera system that captures surface deformation for 3D reconstruction and surface tracking. Our imaging system consists of 42 global-shutter color cameras, each with a \(16mm\) lens, and 60 UV LED light units. Cameras and lights are uniformly mounted on a rhombicuboctahedral rig frame, facing inward to the frame center (see Fig. 6). Thanks to the rhombicuboctahedral frame being a discrete approximation of a sphere, their distances to the frame center are almost the same (\(\approx 75cm\)), thus avoiding uneven lighting attenuation. All cameras capture videos with a resolution of \(2448\times 2048\) at \(60fps\). They have a 30-degree field of view and an aperture of \(f/5.6\) in order to capture all-focus images within the range of the rig (\(\approx 40cm\)).
To this end, we group the cameras into two sets: the first set (referred to as _UV cameras_) is triggered when the UV lights are turned on, and the second set (referred to as _reference cameras_) is triggered when the UV lights are off. In this way, the UV cameras capture the object with markers, while the reference cameras capture its original untouched appearance. In practice, we use 33 UV cameras and 9 reference cameras in order to 3D reconstruct and track the markers more accurately. Since the cameras capture at \(60fps\), the time gap between two consecutive frames is \(16ms\). If this time gap is much larger than the camera's exposure time, then we can shift the triggering time of the two sets of cameras within the \(16ms\) time window so that their exposure time periods do not overlap with each other (see Fig. 6-a). In practice, we use an exposure time of \(2ms\), although an exposure time up to \(8ms\) works as well. As a result, the time difference between the frames captured by the two sets of cameras in the same time window is \(2ms\). Although the time difference is very short, we still use interpolation to reduce the amount of possible mis-alignment (see more details about the interpolation in Section 3.3). We custom-build an FPGA board to precisely trigger the on and off of the UV lights and synchronize the triggering events with the two sets of cameras. **Calibration & data management.** All cameras are geometrically calibrated using ChArUco patterns attached to each face of a rhombicuboctahedron. We also estimate the intrinsic and extrinsic parameters of all cameras, from which we compute the geometric transformation between each pair of cameras. This allows us to re-project images captured by one camera to another camera (with the re-projection error less than 0.6 pixels). Moreover, we chromatically calibrate all cameras through adjusting their white balance. With 42 cameras capturing 2K images at \(60fps\), we need an infrastructure to support high-speed data transfer and storage. To this end, we set up three high-performance servers, each with 16TB SSD hard drive and PCI-E to USB capture cards. On each server, we build an M2 SSD RAID system with read and sequential write speeds up to 28Gbps, and connect it to 14 cameras. In this way, we can directly save uncompressed raw images on the servers. The RAID systems can host up to 1 hour of acquired images. The data are later transferred to a NAS system for further processing. ### Feature Tracking and Reconstruction **2D marker detection & 3D reconstruction.** To detect markers on captured images, we convert the images into the HSV color space. The pixels corresponding to fluorescent markers typically exhibit high Saturation (S) and Value (V) values, and their Hue (H) values fall into a small range related to the dye's emission profile (i.e., \(H\in[0,15]\) for UV-red and \(H\in[110,125]\) for UV-blue). We label all the pixels that satisfy these criteria as marker pixels and detect the center position of each marker. The marker positions will be used in the subsequent step for template fitting. Thanks to our markers, the deformable object captured by the multi-view fluorescent images has rich textures, even Figure 4: We measure the fluorescent spectral response on different types of surfaces. **(a)** Mouse skin sample. **(b)** Various paper and cloth samples. **(c)** Equipment for measuring the spectral response. 
Figure 5: Spectral responses of two types of markers on sketch paper (left), mouse skin (middle) and cardboard (right). when the object itself has no texture at all. The rich textures allow us to follow the standard 3D photogrammetric reconstruction algorithm (_e.g._, COLMAP [49, 50]) to obtain a 3D point cloud for each frame in time. In practice, we compute dense disparity maps through semi-global matching [27], and use the commercial software Agisoft Metashape2 for point cloud reconstruction. Footnote 2: [https://www.agisoft.com/](https://www.agisoft.com/) **Template fitting.** Next, we fit each frame of 3D point cloud to an object-dependent predefined template. Depending on specific tasks (e.g., to capture hand gestures or facial expressions), different templates will be used, and their details will be provided in Section 4. Here we present our fitting algorithm that can be generally applied to different tasks. Consider a 3D point cloud \(\mathcal{S}\) consisting of \(M\) points \(\textbf{p}=\{\mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{M}\}\) and a 3D shape template described by \(N\) vertices \(\textbf{v}=\{\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{N}\}\) and \(K\) faces. Our goal is to deform the shape template so that it aligns closely to the 3D point cloud. We adopt the embedded deformation graph [53] to deform the template: for every vertex \(i\) on the template shape, its deformed position is described by \(\mathbf{v}_{i}+\mathbf{t}_{i}\). To ensure deformation smoothness, every vertex \(i\) also has a local region of influence. Its influence is described by a rotational matrix \(\mathbf{R}_{i}\in SO(3)\), which maps any point \(\mathbf{p}\) in its local region to the position \(\mathbf{p}^{\prime}\) according to \[\mathbf{p}^{\prime}=\mathbf{R}_{i}(\mathbf{p}-\mathbf{v}_{i})+\mathbf{v}_{i}+\mathbf{t}_{i}. \tag{1}\] We determine \(\mathbf{t}_{i}\) and \(\mathbf{R}_{i}\) (for \(i=1..N\)) by solving the following optimization problem: \[E_{\text{total}}=E_{\text{fit}}+\lambda_{\text{m}}E_{\text{marker}}+\lambda_{ \text{s}}E_{\text{smooth}}. \tag{2}\] The first term \(E_{\text{fit}}\) measures the \(\ell_{2}\) distance between the 3D point cloud and the deformed template mesh in two ways: for each point \(j\) in the point cloud \(\mathcal{S}\), its distance to the closest vertex and the closest face on the template. \(E_{\text{fit}}\) is thus a summation of two terms, namely, \[\begin{split} E_{\text{fit}}=&\sum_{j=1}^{M}\underbrace {\|\mathbf{v}_{c(i)}+\textbf{t}_{c(j)}-\mathbf{p}_{j}\|_{2}^{2}}_{\text{point to vertex distance}}+\\ &\beta\sum_{j=1}^{M}\underbrace{\|\mathbf{n}_{c(i)}^{\top}(\mathbf{v}_{ c(j)}+\textbf{t}_{c(i)}-\mathbf{p}_{j})\|_{2}^{2}}_{\text{point to face distance}},\end{split} \tag{3}\] where \(c(j)\) indicates the index of the deformed template vertex closet to the point \(\mathbf{p}_{j}\), \(\mathbf{n}_{c(j)}\) denotes the vertex normal, and \(\beta\) is a weight for balancing the two terms. The second term \(E_{\text{marker}}\) measures the distance between the detected 2D markers and the projected marker positions from the template mesh. Consider \(N_{f}\) markers. Their 3D positions on the undeformed template mesh, denoted by \(\mathbf{x}_{j}\) for \(j=1..N_{f}\), are initialized at the beginning of the capture session (see Section 4 for object-specific details of this initialization). 
\(\mathbf{x}_{j}\) can be expressed using the barycentric coordinates \(\mathbf{\alpha}_{j}=[\alpha_{j,1}\ \alpha_{j,2}\ \alpha_{j,3}]\) of the template triangle on which it is located, that is, \(\mathbf{x}_{j}=\sum_{k=1}^{3}\alpha_{j,k}\mathbf{v}_{j,k}\), where \(\mathbf{v}_{j,k}\) (\(k=1,2,3\)) are the vertex positions of that template triangle. With these notations, \(E_{\text{marker}}\) is defined as

\[E_{\text{marker}}=\sum_{i=1}^{N_{v}}\sum_{j=1}^{N_{f}}w_{ij}\|\mathbf{p}_{ij}-\mathbf{\pi}_{i}\mathbf{\tilde{x}}_{j}\|_{2}^{2}, \tag{4}\]

where \(\mathbf{\tilde{x}}_{j}\) is the \(j\)-th marker's 3D position on the deformed template mesh (i.e., \(\mathbf{\tilde{x}}_{j}=\sum_{k=1}^{3}\alpha_{j,k}(\mathbf{v}_{j,k}+\mathbf{t}_{j,k})\)); \(N_{v}\) is the number of UV camera views; and \(w_{ij}\) is the confidence weight for the marker \(j\) being viewed from the \(i\)-th camera. If the marker \(j\) is occluded from the \(i\)-th camera, \(w_{ij}\) vanishes. Moreover, \(\mathbf{p}_{ij}\) is the 2D position of marker \(j\) (if not occluded) in the \(i\)-th camera view, and \(\mathbf{\pi}_{i}\) is the projection matrix of the \(i\)-th camera.

Figure 6: **(a)** Conceptual illustration of our system with the trigger scheme (green color refers to reference cameras, and purple color refers to UV cameras). **(b)** The real physical setup of our system with a zoom-in view of the UV LED unit. **(c)** Sample images captured by our cameras (images with the green borders are captured by the reference cameras).

The last term \(E_{\text{smooth}}\) regulates the smoothness of the template mesh's deformation. This is where the local influence of each vertex \(i\) (and hence \(\mathbf{R}_{i}\)) is involved. Following a similar term defined in [53] (called \(E_{\text{reg}}\) therein), \(E_{\text{smooth}}\) encourages the mesh deformation to be locally rigid, defined as

\[E_{\text{smooth}}=\sum_{j=1}^{N}\sum_{k\in\mathcal{N}(j)}\gamma_{jk}\|\mathbf{R}_{j}\mathbf{v}_{kj}-\mathbf{v}_{kj}+\mathbf{t}_{j}-\mathbf{t}_{k}\|_{2}^{2}, \tag{5}\]

where \(\mathbf{v}_{kj}\) is a shorthand for \(\mathbf{v}_{kj}=\mathbf{v}_{k}-\mathbf{v}_{j}\); \(\mathcal{N}(j)\) is the set of neighboring vertices of vertex \(j\) (here defined as the 10-nearest neighbors of \(j\)); and \(\gamma_{jk}\) is a weight parameter determined by the distance between \(\mathbf{v}_{j}\) and \(\mathbf{v}_{k}\) (see more details in [53]). When solving the optimization problem (2), we express each \(\mathbf{R}_{i}\) in the Lie algebra \(so(3)\) and use the Levenberg-Marquardt optimizer. The optimization starts with large \(\lambda_{\text{s}}\) and \(\lambda_{\text{m}}\) values, and gradually reduces them over the iterations until the optimization converges.

**Feature warping.** Apart from the 3D reconstruction of the deformed shape using fluorescent images, we also need to align the 3D shape and features with the reference-camera videos, wherein markers are invisible and the object appearance remains untouched. For example, the reference video together with the precisely aligned 3D shape deformation populates a training dataset useful for many deep neural networks. But the videos captured by the UV cameras and reference cameras are not precisely synchronized (recall Section 3.2); there is a \(2ms\) time difference. In many cases, this time difference is sufficiently small that we can safely ignore it. However, when the object moves too quickly, this time difference causes a slight misalignment.
We alleviate this misalignment by linearly interpolating the 3D shapes and features of two consecutive frames to the time instant at which the reference frame is captured. This strategy, albeit simple, is effective at reducing the misalignment (see the experiment in Fig. 7).

## 4 Experiments

We perform experiments on various scenes to analyze the performance of our proposed imaging system and reconstruction algorithms.

**Influence of delayed trigger.** We first analyze how much mis-alignment could be caused by the trigger delay between the reference and UV cameras, and how effective our interpolation algorithm is at mitigating this artifact. This analysis is very important as our goal is to leverage the "invisible" markers in the UV view for reconstruction and tracking under the reference view. We perform experiments using a paper board with a printed grid. The grid is visible in both the reference and UV views. We can therefore use the grid corners as features for measuring the pixel shift caused by the trigger delay. Sample images of the target are shown in Fig. 7. We illustrate feature points from corresponding reference and UV views (marked in orange and green, respectively), as well as those from the consecutive UV view (marked in red). We can see that features from the two consecutive UV images (whose time interval is \(16ms\)) show an apparent shift. The feature mis-alignment between the corresponding UV and reference views is slight in this case, but it varies with the motion speed (_i.e._, the faster the speed, the larger the shift).

Figure 7: **(top)** Sample images of our grid target under UV and reference views. **(bottom)** Point-to-ray distance curves w.r.t. various \(\sigma\) values for \(1ms\) and \(4ms\) trigger delays.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline Delay / \(\sigma\) & 0 & 1 & 2 & 3 & 4 & 5 & 15 \\ \hline \hline \(1ms\) & 0.49 & 0.20 & 0.56 & 1.06 & 1.56 & 2.06 & 8.12 \\ \(2ms\) & 1.02 & 0.53 & 0.26 & 0.62 & 1.12 & 1.63 & 8.21 \\ \(4ms\) & 2.23 & 1.69 & 1.16 & 0.64 & 0.28 & 0.56 & 7.85 \\ \hline \end{tabular} \end{table} Table 1: Average point-to-ray distances (in \(mm\)) w.r.t. \(\sigma\) and trigger delays.

We then quantitatively measure the amount of misalignment for different trigger delays, and evaluate the effectiveness of our linear interpolation algorithm described in Section 3.3. We first compute the 3D coordinates of the feature points via ray triangulation from all UV views. Assume the 3D points computed by the UV views at time \(t\) and time \(t+1\) are \(\mathbf{v}^{t}\) and \(\mathbf{v}^{t+1}\), respectively. As the reference views are too sparse for accurate triangulation, we only trace out rays from features on the reference view. We use \(\mathbf{r}^{t}\) to denote the rays traced out from the reference view at time \(t\). We then linearly interpolate \(\mathbf{v}^{t}\) and \(\mathbf{v}^{t+1}\) to obtain an intermediate point \(\bar{\mathbf{v}}^{\sigma}\). We calculate the point-to-ray distance from \(\bar{\mathbf{v}}^{\sigma}\) to \(\mathbf{r}^{t}\) and use it to measure the accuracy of feature alignment in 3D. We test three different trigger delays: \(1ms\), \(2ms\), and \(4ms\). We compute the point-to-ray distance with respect to the interpolation parameter \(\sigma\), varied from \(0\) to \(16\). When \(\sigma=0\), it is equivalent to directly using \(\mathbf{v}^{t}\) (_i.e._, no interpolation). We test motions of different types (_e.g._, linear and circular), as well as different speeds.
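A minimal sketch of this interpolation and of the point-to-ray metric is given below. It assumes \(\sigma\) is expressed in milliseconds and normalized by the \(16ms\) frame interval, and that each reference-view ray is represented by a camera center and a direction vector; these conventions are ours, not prescribed by the system.

```python
import numpy as np

def interpolate_points(v_t, v_t1, sigma, frame_gap_ms=16.0):
    """Linearly interpolate triangulated 3D feature points v^t, v^{t+1}
    to an intermediate time sigma (in ms) inside the 16 ms frame window."""
    a = sigma / frame_gap_ms
    return (1.0 - a) * v_t + a * v_t1            # bar{v}^sigma, shape (n_points, 3)

def point_to_ray_distances(points, origin, directions):
    """Distance from each interpolated point to the ray traced out from the
    reference camera (ray: origin + s * direction)."""
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    rel = points - origin                        # vectors from camera center to points
    along = np.sum(rel * d, axis=1, keepdims=True) * d   # component along the ray
    return np.linalg.norm(rel - along, axis=1)   # perpendicular residual, in scene units
```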
The average point-to-ray distances of the various configurations are reported in Table 1. We also plot the curves of point-to-ray distance with respect to different \(\sigma\) for \(1ms\) and \(4ms\) delays (see Fig. 7). The distance variation at each \(\sigma\) value is caused by the different motions. From these measurements, we can see that when \(\sigma\) equals the trigger delay, we have the smallest amount of mis-alignment (the point-to-ray distances are smaller than \(0.3mm\) in all cases). Our interpolation is particularly useful when the trigger delay is large. For example, in the case of a \(4ms\) delay, the mis-alignment is \(2.23mm\) without interpolation (_i.e._, \(\sigma=0\)). Our interpolation significantly brings down the average error.

**3D reconstruction and tracking results.** Here we show the 3D reconstruction and correspondence tracking results on three types of scenes: facial expressions, hand interactions, and deformed paper/cloth. Video results are available in the supplementary material.

Figure 8 shows our results on capturing various facial expressions. We use the UV-blue ink to draw dot patterns on the tester's face. We apply our template-fitting algorithm to recover the 3D face mesh. Here we use the 3D face template provided by R3DS Wrap4D. Other popular face templates, such as 3DMM [72], can also be used. In the first few frames, we use the rest pose (_i.e._, the neutral expression) for initializing the marker positions on the template. As the rest pose can be easily registered to the template (even without using the marker guidance), we can project the fluorescent images onto the registered template to initialize the marker locations on the template. In the rest of the video, we use these points to compute the marker term (Eqn. 4) for template fitting. By using the fluorescent markers, our 3D reconstruction results are more accurate. The tracked correspondences are more reliable and robust.

Figure 8: **Facial expressions.** From top to bottom, we show the marker-free images, UV images with markers, tracked feature points, and recovered 3D face meshes warped on the marker-free views.

Figure 9 shows results of various hand scenes, including gestures and hand-object interactions. In these scenes, we use UV-blue ink to draw dot patterns on the hand and UV-red ink to paint on the object. The hue difference between the two types of ink helps us separate the hand and the object, which can significantly improve the template fitting. Fig. 10 shows a comparison of hand template fitting results with vs. without separating the object in hand. We can see that the direct fitting result (_i.e._, without separating the object) is highly inaccurate, since the object points are also being considered in the fitting. For hand scenes, we use the MANO [45] model as the 3D template. Similar to capturing the face, we also use the rest pose in the first few frames for registering marker points on the template. As we have multiple viewpoints and we use marker guidance, our approach can handle occlusions very well (_e.g._, the crossed-hand scene). Again, our 3D reconstruction and correspondence tracking are highly accurate and robust (see video results). By using the interpolation scheme, our recovered 3D hand model and tracked features align with the marker-free view very well.

Figure 11 shows results of a deformed paper. For paper and cloth, we assume they are rectangular shaped and thus use a 3D finite plane as the template. Either UV-blue or UV-red ink can be used to draw on these targets. In the experiment
shown in Fig. 11, we draw a \(13\times 14\) grid and use its corners as features. By imposing a grid of the same size on the plane template, the feature points are easily initialized on the template mesh. Unlike the face and hand scenes, which require a rest pose for initializing marker features on the pre-defined template, here the features are directly imposed on the paper template, due to the simplicity of the template and our knowledge of the pattern. Our results demonstrate that our tracking is robust even with strong distortions. To validate the accuracy of our 3D reconstruction and view alignment, we show rendering results with novel textures on the marker-free images. We first apply the novel texture to our recovered surface mesh. We then project the re-textured surface back to the reference view to generate images with the novel texture. We can see that the distortions of the novel texture are highly consistent with the surface distortions, which indicates that both our 3D reconstruction and view alignment are reliable and accurate.

**Tracking results with vs. without markers.** To show the effectiveness of our fluorescent markers on textureless surfaces, we compare the correspondence tracking results obtained using our markers versus without the markers (_i.e._, directly applying the tracking algorithm to the marker-free reference view). Fig. 12 and Fig. 13 show the comparison results on the paper and face scenes (video comparison results are available in the supplementary material). We use the tracking algorithm provided in the commercial software R3DS Wrap4D. We can see that our tracked correspondences are more reliable and robust, especially on the paper scene, which lacks texture over large areas. In the paper scene, the features are initialized as a grid pattern. Under the marker-free view, most of the features lose track within 20 frames (the entire video has \(\sim\)300 frames). In the face scene, as human skin has a good amount of local features that can be tracked, the tracking results under the marker-free view are reliable for many feature points. However, feature mismatches or lost tracks happen at times (see the highlighted features in the zoom-in view in Fig. 13). With our fluorescent markers, the tracking result is more robust, with the mismatch rate largely reduced.

Figure 10: Comparison of hand template fitting results with vs. without separating the object point cloud.

Figure 9: **Hand interactions.** From top to bottom, we show marker-free images, UV images with markers, recovered 3D hand and object point clouds, and recovered 3D hand meshes warped on the marker-free views.

## 5 Conclusions

In summary, we have presented a method that leverages fluorescent markers, visible only under UV light, for 3D reconstruction and tracking of deformable surfaces. In contrast to existing methods, our technique works with textureless surfaces. We developed a practical imaging system for simultaneously capturing videos with and without markers via time multiplexing. We developed a template-based algorithm for 3D reconstruction and tracking, and an interpolation scheme to align the recovered 3D model and tracked correspondences back to the markerless views. We have performed experiments on challenging deformable scenes, including facial expressions, gestures, hand-object interaction, and deformed paper/cloth. The results demonstrate that by using the fluorescent markers, we can obtain high-quality 3D reconstruction and robust feature tracking.
Our method can be used in a studio setting for high-quality digital media production, as well as to provide datasets with ground truth for training deformable-object reconstruction and tracking algorithms.
2302.01227
The All-Pairs Vitality-Maximization (VIMAX) Problem
Traditional network interdiction problems focus on removing vertices or edges from a network so as to disconnect or lengthen paths in the network; network diversion problems seek to remove vertices or edges to reroute flow through a designated critical vertex or edge. We introduce the all-pairs vitality maximization problem (VIMAX), in which vertex deletion attempts to maximize the amount of flow passing through a critical vertex, measured as the all-pairs vitality of the vertex. The assumption in this problem is that in a network for which the structure is known but the physical locations of vertices may not be known (e.g. a social network), locating a person or asset of interest might require the ability to detect a sufficient amount of flow (e.g., communications or financial transactions) passing through the corresponding vertex in the network. We formulate VIMAX as a mixed integer program, and show that it is NP-Hard. We compare the performance of the MIP and a simulated annealing heuristic on both real and simulated data sets and highlight the potential increase in vitality of key vertices that can be attained by subset removal. We also present graph theoretic results that can be used to narrow the set of vertices to consider for removal.
Alice Paul, Susan Martonosi
2023-02-02T17:07:07Z
http://arxiv.org/abs/2302.01227v1
# The All-Pairs Vitality-Maximization (VIMAX) Problem ###### Abstract Traditional network interdiction problems focus on removing vertices or edges from a network so as to disconnect or lengthen paths in the network; network diversion problems seek to remove vertices or edges to reroute flow through a designated critical vertex or edge. We introduce the _all-pairs vitality maximization problem (VIMAX)_, in which vertex deletion attempts to maximize the amount of flow passing through a critical vertex, measured as the all-pairs vitality of the vertex. The assumption in this problem is that in a network for which the structure is known but the physical locations of vertices may not be known (e.g. a social network), locating a person or asset of interest might require the ability to detect a sufficient amount of flow (e.g., communications or financial transactions) passing through the corresponding vertex in the network. We formulate VIMAX as a mixed integer program, and show that it is NP-Hard. We compare the performance of the MIP and a simulated annealing heuristic on both real and simulated data sets and highlight the potential increase in vitality of key vertices that can be attained by subset removal. We also present graph theoretic results that can be used to narrow the set of vertices to consider for removal. ## 1 Introduction Network disruption has important applications to infrastructure design [9, 48, 13], energy transmission [10, 37], robust network design [18, 22, 24], biological systems [59], illicit trade networks [4], and counterterrorism [7, 62]. Much of this work focuses on three primary problem types: 1) network flow interdiction, in which an attacker is trying to decrease the flow capacity of the network by interdicting vertices or edges such that the maximum flow between a source and sink is minimized (e.g., [3, 6, 8, 44, 45, 61, 73, 23]); 2) shortest path interdiction, in which an attacker interdicts vertices or edges such that the shortest path between a source and sink is maximized (e.g., [39, 56, 75]); and 3) network diversion, in which a minimum cost, minimal cutset of edges is identified such that when removed, any source-sink path in the network is forced to travel through a particular set of critical edges (e.g., [14, 19, 20]). Of interest in this paper is the concept of vertex (equivalently, edge) _vitality_, which measures the reduction in the maximum flow between the source and sink when that vertex (or edge) is removed from the graph [5, 42]. A vertex having high vitality is needed to achieve a high volume of flow from source to sink, and as such, this vertex will have a high volume of flow passing through it when the maximum flow is achieved. We define the _all-pairs vitality_ of a vertex \(v\) to be the summed reduction in the maximum flow between all pairs of nodes (themselves excluding vertex \(v\)), when vertex \(v\) is removed from the graph. We present the following combinatorial optimization problem, the _all-pairs vitality maximization problem (VIMAX)_: Given a connected, directed, general capacity graph \(G=(V,E)\) with vertex set \(V\), edge set \(E\), and a key vertex of interest, \(k\), identify a subset of vertices \(S\), whose removal from the graph \(G\) maximizes the all-pairs vitality of \(k\). 
This problem was first introduced in the second author's unpublished manuscript for the specific context of undirected, unit-capacity graphs, for which the maximum flow between a pair of vertices represents the number of edge-disjoint paths between that pair [47]. VIMAX can be considered a network disruption problem that is distinct from the three forms outlined above. Covert organizations, such as terrorist groups or drug cartels, tend to communicate along longer paths that are difficult to trace, suggesting a trade-off between efficiency and secrecy that could render path-length-based attacks ineffective [4, 26, 50]. Moreover, we leverage the possibility that critical vertices in certain types of networks can become vulnerable if they are forced to become more active. (As an example, Osama bin Laden and, subsequently, Ayman Al-Zawahiri were known to be leaders of the al-Qaeda terrorist network, yet they remained in hiding for many years before U.S. intelligence could pinpoint their geographic locations.) If we assume the volume of communication, money, or illicit substances passing through a vertex is a proxy for that corresponding member's visibility to intelligence officers, and communication between pairs of members in the organization is proportional to path capacity, then VIMAX can identify members of the organization whose removal will maximize communication through an important but clandestine leader. Unlike in network diversion problems, we do not require all flow in the remaining graph to be routed through this vertex (indeed in a network diversion problem, the volume of flow passing through the critical vertex might be quite small after vertex or edge removal); instead we seek to _maximize_ the total flow routed through this vertex. In this paper, we examine VIMAX from both computational and theoretical perspectives. In Section 2, we frame this work in the context of the existing literature. In Section 3, we define VIMAX, present it as a mixed integer linear program, and demonstrate that it is NP-Hard. Section 4 presents a simulated annealing heuristic for solving VIMAX. The computational performance of these two methods is compared in Section 5. Section 6 presents mathematical properties of VIMAX that can be leveraged to streamline computations. Section 7 provides future extensions of this work and concludes. ## 2 Literature review We first contrast the network interdiction and diversion problems commonly seen in the literature with the VIMAX problem we will present in this paper. We then discuss the relationship between vitality and other graph centrality metrics. Finally, we present research on optimization approaches that could be useful to the problem of vitality maximization. ### Network Interdiction and Diversion Network interdiction models address the logistical problem of removing edges or vertices from a graph to inhibit the flow of resources through a network. This has applications to military operations and combating drug or human trafficking [41, 67, 75]. Analysis of complex network interdiction typically focuses on disconnecting the network, increasing the lengths of shortest paths, cutting overall flow capacity, or reducing the desirability of paths in the network [1, 25, 12, 27, 28, 29, 30, 33, 36, 38, 49, 55, 56, 66, 67, 74, 75]. The most well-known model involves maximum flow network interdiction and its variants [3, 8, 16, 44, 48, 57, 60, 61, 73]. 
Of note, [73] introduces the "dualize-and-combine" method that is commonly used in the network interdiction literature, as well as in this paper. Smith and Song thoroughly survey the network interdiction literature, and demonstrate that the assumptions widely held across the papers they survey make interdiction problems a special case of Stackelberg games [64]. A problem related to network interdiction is the _network diversion problem_, in which an attacker seeks to interdict, at minimum cost, a set of edges (equivalently, vertices) such that all source-sink flow must be routed through at least one member of a pre-specified set of "diversion" edges or vertices. This problem was first posed by [20]. Applications include military operations, in which it might be beneficial to force a foe to divert its resources through a target edge that is heavily armed; and information networks, in which communications are routed through a single edge that can more easily be monitored [43]. Cullenbine _et al._ also study the network diversion problem [19]. They present an NP-completeness proof for directed graphs, a polynomial-time solution algorithm for \(s-t\) planar graphs, a mixed integer linear programming formulation that improves upon that given in [20], and valid inequalities to strengthen the formulation. Lee _et al._ examine an extension of the network diversion problem known as the _multiple flows network diversion problem_, in which there are many source-sink pairs being considered simultaneously [43]. They define a set \(S\) of possible source nodes and \(T\) of possible sink nodes. They interdict a _minimum cost_ set of edges such that all remaining flow in the network passes through the diversion edge. They formulate the problem as a mixed integer linear program, and compare its performance to standard combinatorial Benders decomposition and a branch-and-cut combinatorial Benders decomposition. Without loss of generality, vertex interdiction can be formulated as arc interdiction, in which each vertex \(v\) in the original graph is represented by two vertices \(v_{i}\) and \(v_{o}\) in a modified graph having a single arc between them, \((v_{i},v_{o})\). Each arc \((u,v)\) in the original graph is then transformed to a corresponding arc \((u_{o},v_{i})\) in the modified graph. Interdicting this arc in the modified graph is equivalent to interdicting the vertex in the original graph. For undirected graphs, the graph is first transformed into a directed one before applying this transformation. There are several aspects of [43] worth noting as they connect to our work. First, after the interdiction set is removed from the graph, there is no guarantee that the total flow passing through the diversion edge is particularly large. In the vitality maximization problem that we present here, we identify an interdiction set of vertices such that the flow through the target vertex is maximized, thus ensuring that the target being surveilled has ample flow. Second, we adopt their testing scheme of examining the performance of the algorithms we develop on grid networks (planar), as well as random \(G_{n,m}\) graphs [40], and a drug trafficking network [52]. A question conversely related to network interdiction and diversion is that of network resilience and detection of attacks.
Sharkey _et al._ survey the literature on four types of resilience: robustness, rebound, extensibility, and adaptability, with a primary focus on research addressing network robustness and the ability of a network to rebound following an attack [63]. Dahan _et al._ study how to strategically locate sensors on a network to detect network attacks [21].

### Vitality and Other Graph Centrality Measures

Vitality is one of several types of graph centrality metrics. Centrality metrics quantify the importance of a given vertex in a network. The book of Wasserman and Faust provides a detailed examination of social network analysis stemming from the field of sociology and includes discussion of many commonly known centrality metrics, including degree, betweenness, and closeness [70]. The survey of Rasti and Vogiatzis presents centrality metrics commonly used in computational biology [58]. The degree of a vertex is the number of neighbors it has. The betweenness of a vertex is the number of shortest paths between all pairs of vertices on which the vertex lies. Closeness measures the average shortest path length between the vertex and all other vertices in the graph. Vogiatzis _et al._ present mixed integer programming formulations for identifying groups of vertices having the largest degree, betweenness, or closeness centrality in a graph [69]. Stephenson and Zelen first proposed _information centrality_ and applied it to a network of men infected with AIDS in the 1980s [65]. They are among the first to develop a centrality metric that does not require an assumption that information must flow along shortest paths. They use the theory of statistical estimation to define the information of a signal along a path to be the reciprocal of the variance in the signal. Assuming the noise induced along successive edges of a path is independent, the variance along each path is additive, and the total variance in the signal grows with the path length. They then use this assumption to evaluate the total information sent between any pair of vertices \((s,t)\). From here, they define the centrality of a vertex \(i\) to be the harmonic average of the sum of the inverses of the information sent from vertex \(i\) to every other vertex. They point out that "information... may be intentionally channeled through many intermediaries in order to 'hide' or 'shield' information in a way not captured by geodesic paths." This appears to be the case in terrorist and other covert networks as well [11]. Centrality metrics can be used to guide network disruption approaches. Cavallaro _et al._ show that targeting high-betweenness vertices efficiently reduces the size of the largest connected component in a graph based on a Sicilian mafia network [12]. Grassi _et al._ find that betweenness and its variants can be used to identify leaders in criminal networks [32]. There also exist centrality measures related to network flows, as surveyed in [42]. In particular, for any real-valued function on a graph, Koschutzki _et al._ define the _vitality_ of a vertex (or edge) to be the difference in that function with or without the vertex (or edge). When the function represents the maximum flow between a pair of vertices, the _vitality_ of a vertex \(k\) in a graph (equivalently, an edge \(u\)) with respect to an \(s-t\) pair of vertices is defined to be the reduction in the maximum flow between \(s\) and \(t\) when vertex \(k\) (equivalently, edge \(u\)) is removed from the graph.
Moreover, when one examines the same reduction in maximum flow in the network over all possible \(s-t\) pairs with respect to a given vertex, we have what Freeman _et al._ define as _network flow centrality_ [26], or what we refer to as _all-pairs vitality_ in this paper. The _most-vital edge_ or component is the one whose removal decreases the maximum flow through the network by the greatest amount. Identifying the most-vital edge in a network is a long-studied problem dating back to the work of [15], [72], and [60]. More recent examination includes the work of [2], who formulate a mathematical program to maximize resilience, using a defender-attacker-defender model. They additionally cite several applications for the most-vital edge problem, including electric power systems, supply chain networks, telecommunication systems, and transportation. Ausiello _et al._ provide a method for calculating the vitality of all edges (with respect to a given \(s\) and \(t\)) with only \(2(n-1)\) maximum flow computations, rather than the \(m\) computations expected if one were to calculate the vitality of each edge individually [5]. None of the literature on vitality that we found addresses the problem presented here: that of identifying a set of removal vertices to maximize the vitality of a key vertex (VIMAX).

## 3 Optimization framework

We will show that VIMAX can be formulated as an integer linear program. We start by presenting terminology that will be used in the paper.

### Definitions

We consider a connected, directed graph \(G=(V,E)\) with vertex set \(V\), edge set \(E\), and a key vertex of interest, \(k\). Each edge \((i,j)\) has a capacity \(u_{ij}\) reflecting the maximum amount of flow that can be pushed along that edge. The graph's _key vertex_, \(k\), could represent, for example, an important but elusive participant in an organization. The _vitality maximization problem (VIMAX)_ seeks to identify a subset of vertices whose removal from the graph \(G\) maximizes the all-pairs vitality of \(k\). Thus, the objective is to identify a set of vertices to remove from the graph to make the key vertex \(k\) as "active" as possible by forcing flow to pass through that vertex. For any source-sink \(s\)-\(t\) pair, let \(z_{st}(G)\) be the value of the maximum \(s\)-\(t\) flow in graph \(G\). We call \(Z_{k}(G)\) the _flow capacity of graph \(G\) with respect to vertex \(k\)_, which is the all-pairs maximum flow in \(G\) that does not originate or end at \(k\). Thus,

\[Z_{k}(G)=\sum_{\begin{subarray}{c}s,t\in V\setminus\{k\}\\ s\neq t\end{subarray}}z_{st}(G). \tag{1}\]

The _all-pairs vitality of \(k\)_, \(\mathcal{L}_{k}(G)\), equals the flow capacity of the graph with respect to \(k\) _minus_ the flow capacity with respect to \(k\) of the subgraph \(G\setminus\{k\}\) obtained when vertex \(k\) is deleted:

\[\mathcal{L}_{k}(G)=Z_{k}(G)-Z_{k}(G\setminus\{k\}). \tag{2}\]

To measure how the removal of a subset of vertices impacts the vitality of the key vertex, we define the _vitality effect_ of subset \(S\) on key vertex \(k\) to be the change in the key vertex \(k\)'s vitality caused by removing subset \(S\): \(\mathcal{L}_{k}(G\setminus S)-\mathcal{L}_{k}(G)\). If the vitality effect of \(S\) on \(k\) is positive, then removing subset \(S\) from the graph has diverted more flow through \(k\), a desired effect.
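As an illustrative reference implementation of these definitions (not the implementation used in our experiments, which relies on Gomory-Hu trees as described in Section 4), the flow capacity, vitality, and vitality effect can be computed by brute force with networkx; edge capacities are assumed to be stored in a `capacity` attribute.

```python
import itertools
import networkx as nx

def flow_capacity(G, k):
    """Z_k(G): summed max flow over all ordered pairs s,t in V minus {k}, s != t (Eq. 1)."""
    nodes = [v for v in G.nodes if v != k]
    return sum(nx.maximum_flow_value(G, s, t, capacity="capacity")
               for s, t in itertools.permutations(nodes, 2))

def vitality(G, k):
    """L_k(G) = Z_k(G) - Z_k(G with k deleted) (Eq. 2)."""
    G_minus_k = G.subgraph([v for v in G.nodes if v != k]).copy()
    return flow_capacity(G, k) - flow_capacity(G_minus_k, k)

def vitality_effect(G, k, S):
    """Change in k's vitality when the subset S is removed from G."""
    H = G.subgraph([v for v in G.nodes if v not in set(S)]).copy()
    return vitality(H, k) - vitality(G, k)
```

This brute-force version performs on the order of \(|V|^{2}\) maximum-flow computations per call and is only practical for small graphs; it is intended solely to make Equations (1) and (2) and the vitality effect concrete.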
The goal of this research is to identify the subset of vertices \(S\) that maximizes the vitality effect, which is equivalent to maximizing the value of \(\mathcal{L}_{k}(G\setminus S)\). We formally define the _all-pairs vitality maximization problem_ (VIMAX) as \[max_{S\subseteq V}\ \mathcal{L}_{k}(G\setminus S). \tag{3}\] From expressions (1) and (2), we see that there is no guarantee that the vitality effect on \(k\) of removing any subset \(S\) need ever be positive. When subset \(S\) is removed from the graph, the overall flow capacity \(Z_{k}(G\setminus S)\) generally decreases, and never increases, because \(S\)'s contribution to the flow is removed. In order for subset \(S\)'s removal to have a positive vitality effect on key vertex \(k\), the remaining flow must be rerouted through \(k\) in sufficiently large quantities to overcome the overall decrease in flow through the network. However, as we will show in Section 5.3, identification of an optimal or near-optimal removal subset often dramatically increases the vitality of the key vertex. ### Mixed Integer Linear Programming Formulation To formulate VIMAX as an optimization problem, we first formulate a linear program to solve for the vitality of \(k\) in any graph \(G\). Then we expand that formulation into a mixed integer programming formulation that seeks the optimal subset \(S\) of vertices to remove from the graph to maximize the vitality of \(k\) in the resulting graph. #### 3.2.1 Vitality Max-Flow Subproblems. Following the approach of [39], we take the dual of problem \(Z_{k}(G\setminus\{k\})\) to convert it into a minimum cut problem having the same optimal objective function value, and embed it in the formulation of \(\mathcal{L}_{k}(G)\). Since the dual problem is a minimization problem, the objective function will correctly correspond to the vitality. Letting \(V^{\prime}=V\setminus\{k\}\), and letting \(E^{\prime}\) be the set of edges that remain after removing vertex \(k\) and its incident edges, we obtain the following linear program for finding \(\mathcal{L}_{k}(G)\): Maximize \[\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}v_{s,t}-\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E^{\prime}}u_{i,j}y_{i,j,s,t}\] subject to \[\sum_{j:(i,j)\in E}x_{i,j,s,t}-\sum_{j^{\prime}:(j^{\prime},i)\in E}x_{j^{ \prime},i,s,t}=\begin{cases}v_{s,t}&\text{if }i=s\\ -v_{s,t}&\text{if }i=t\\ 0&\text{otherwise}\end{cases}\] (4) \[x_{i,j,s,t}\leq u_{i,j},\forall(i,j)\in E,\forall s,t\in V^{\prime}\] \[y_{i,s,t}-y_{j,s,t}+y_{i,j,s,t}\geq 0,\forall(i,j)\in E^{\prime}, \forall s,t\in V^{\prime}\] \[-y_{s,s,t}+y_{t,s,t}\geq 1,\forall s,t\in V^{\prime}\] \[v_{s,t}\geq 0,\forall s,t\in V^{\prime}\] \[x_{i,j,s,t}\geq 0,\forall(i,j)\in E,\forall s,t\in V^{\prime}\] \[y_{i,j,s,t}\geq 0,\forall(i,j)\in E^{\prime},\forall s,t\in V^{\prime}\] \[y_{i,s,t}\text{ unrestricted},\forall i,s,t\in V^{\prime}.\] Variables \(x_{i,j,s,t}\) and \(v_{s,t}\) are the primal variables from the maximum flow formulation of problem \(Z_{k}(G)\). \(x_{i,j,s,t}\) represent the optimal \(s-t\) flow pushed along edge \((i,j)\), and \(v_{s,t}\) represent the optimal \(s-t\) flow values. Variables \(y_{i,s,t}\) and \(y_{i,j,s,t}\) are the dual variables from the minimum cut formulation of problem \(Z_{k}(G\setminus\{k\})\). 
We can interpret \(y_{i,s,t}\) as vertex potentials: For every edge \((i,j)\), if \(y_{i,s,t}<y_{j,s,t}\), meaning vertex \(i\) has lower potential than vertex \(j\) when computing the minimum \(s-t\) cut, then edge \((i,j)\) must cross the cut. In such a case, dual variable \(y_{i,j,s,t}=1\), and edge capacity \(u_{i,j}\) is counted in the objective function. #### 3.2.2 VIMAX: Choosing an Optimal Removal Subset. Now that we have expressed the vitality of \(k\) in \(G\) as a linear program, we can return to VIMAX, which finds a subset \(S\) of vertices whose removal maximizes the vitality of \(k\). Given a set \(S\), the linear program in Equation 4 applied to graph \(G\setminus S\) solves for \(\mathcal{L}_{k}(G\setminus S)\). We must modify the LP above to choose a subset \(S\) that maximizes the objective function \(\mathcal{L}_{k}(G\setminus S)\). We can formalize this by creating binary variables \(z_{i}\) for each vertex \(i\) such that \(z_{i}=1\) if vertex \(i\) remains in the graph, and \(z_{i}=0\) if vertex \(i\) is removed from the graph (that is, \(i\) is included in subset \(S\)). We also define variables \(w_{i,j}\) for each edge that indicate whether or not edge \((i,j)\) remains in the graph following the removal of \(S\) and/or \(k\). We define linking constraints so that whenever both vertices \(i\) and \(j\) remain in the graph (that is, \(z_{i}=z_{j}=1\)), then \(w_{i,j}\) must equal 1, and whenever either vertex \(i\) or \(j\) is selected for deletion (that is, \(z_{i}=0\) or \(z_{j}=0\) or both) then \(w_{i,j}\) must equal 0. (Due to this relationship between \(w_{i,j}\) and the binary \(z_{i}\), the \(w_{i,j}\) are effectively constrained to be binary variables without explicitly declaring them as such.) To Equation 4, we make the following adjustments to the original primal and dual constraints. We constrain the primal flow variables \(x_{i,j,s,t}\leq u_{i,j}w_{i,j}\), reflecting whether or not edge \((i,j)\) remains in the graph. We also modify the dual potential constraints so that \(y_{i,j,s,t}=0\) whenever vertices \(i\) and \(j\) are at the same potential (as before) or edge \((i,j)\) no longer exists in the graph. Introducing the variables \(z_{i}\) and \(w_{i,j}\) and the modifications on our vitality constraints, we can now write the full mixed-integer linear program. Given a graph \(G=(V,E)\), a key vertex \(k\), and a maximum size, \(m\), of the removal set, the following mixed-integer linear program solves VIMAX. 
Maximize

\[\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}v_{s,t}-\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E^{\prime}}u_{i,j}y_{i,j,s,t}\]

subject to

\[\sum_{i\in V}(1-z_{i})\leq m\]
\[z_{k}=1\]
\[w_{i,j}\leq z_{i},\forall(i,j)\in E\]
\[w_{i,j}\leq z_{j},\forall(i,j)\in E\]
\[w_{i,j}\geq z_{i}+z_{j}-1,\forall(i,j)\in E\]
\[\sum_{j:(i,j)\in E}x_{i,j,s,t}-\sum_{j^{\prime}:(j^{\prime},i)\in E}x_{j^{\prime},i,s,t}=\begin{cases}v_{s,t}&\text{if }i=s\\ -v_{s,t}&\text{if }i=t\\ 0&\text{otherwise}\end{cases} \tag{5}\]
\[x_{i,j,s,t}\leq u_{i,j}w_{i,j},\forall(i,j)\in E,\forall s,t\in V^{\prime}\]
\[y_{i,s,t}-y_{j,s,t}+y_{i,j,s,t}\geq-(1-w_{i,j}),\forall(i,j)\in E^{\prime},\forall s,t\in V^{\prime}\]
\[-y_{s,s,t}+y_{t,s,t}\geq 1,\forall s,t\in V^{\prime}\]
\[z_{i}\text{ binary, }\forall i\in V\]
\[w_{i,j}\geq 0,\forall(i,j)\in E\]
\[v_{s,t}\geq 0,\forall s,t\in V^{\prime}\]
\[x_{i,j,s,t}\geq 0,\forall(i,j)\in E,\forall s,t\in V^{\prime}\]
\[y_{i,j,s,t}\geq 0,\forall(i,j)\in E^{\prime},\forall s,t\in V^{\prime}\]
\[y_{i,s,t}\text{ unrestricted, }\forall i,s,t\in V^{\prime}\]

Here the first constraint limits the number of removed vertices (those with \(z_{i}=0\)) to at most \(m\), and the second keeps the key vertex \(k\) in the graph. Extending the approach of [53] to general capacity, directed graphs, we can show that VIMAX is NP-Hard. In the case that \(m=1\), so that we can remove at most one vertex, we can solve the problem by brute force, solving the above MIP with \(z_{i}=0\) and all other \(z_{j}=1\) for each candidate \(i\in V^{\prime}\).

**Theorem 1**.: _The all-pairs vitality maximization problem is NP-Hard._

Proof.: The proof of this can be found in Appendix A.

## 4 Simulated Annealing Heuristic

As an alternative to solving VIMAX exactly with a MIP, we develop a simulated annealing heuristic. Each iteration of simulated annealing begins with a candidate removal subset. In the first iteration, this is the empty set, and in subsequent iterations the initial solution is the best solution found at the conclusion of the previous iteration. The objective function value of each solution is computed as the vitality of the key vertex when this subset is removed from the graph. Each call to the algorithm consists of an _annealing phase_ and a _local search phase_. During the annealing phase, neighboring solutions of the current solution are obtained by toggling a single vertex's, or a pair of vertices', inclusion or exclusion from the candidate removal subset, subject to the constraint that \(|S|\leq m\). If the neighboring solution improves the objective function value, it is automatically accepted for consideration. If the neighboring solution has a worse objective function value, it will be accepted to replace the current solution with an _acceptance probability_ governed by a temperature function, \(T\). When the temperature is high (in early iterations), there is a high probability of accepting a neighboring solution even if its objective function value is worse than that of the incumbent solution. This permits wide _exploration_ of the solution space. In later iterations, the temperature function cools, reducing the likelihood that lower objective function value solutions will be considered. This permits _exploitation_ of promising regions of the solution space. Given temperature \(T\), the probability of accepting a solution having objective function value \(e_{0}\) when the best objective function value found so far is \(e_{max}>e_{0}\) is given by \(P=e^{-(e_{max}-e_{0})/T}\).
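A minimal sketch of this acceptance rule and of a single-vertex neighborhood move is given below; the function and variable names are illustrative, and the temperature is cooled by the multiplicative factor described next.

```python
import math
import random

def accept(e_candidate, e_best, T):
    """Accept an improving neighbor outright; otherwise accept a worse neighbor
    with probability P = exp(-(e_best - e_candidate) / T)."""
    if e_candidate >= e_best:
        return True
    return random.random() < math.exp(-(e_best - e_candidate) / T)

def random_neighbor(subset, candidates, m):
    """Toggle one randomly chosen vertex in or out of the removal subset, keeping |S| <= m."""
    S = set(subset)
    v = random.choice(candidates)
    if v in S:
        S.remove(v)
    elif len(S) < m:
        S.add(v)
    return S

# Between annealing iterations the temperature is cooled multiplicatively, T <- 0.95 * T (see below).
```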
The initial temperature, \(T\), is chosen so that the acceptance probability of a solution having at least \(90\%\) of the initial objective function value is at least \(95\%\). In subsequent iterations, \(T\) is cooled by a multiplicative factor of \(0.95\). After a set number of annealing iterations, a single iteration of local search is conducted on the best solution found so far by toggling each vertex sequentially to determine if its inclusion or exclusion improves the objective function value. The best solution found is returned. We use a Gomory-Hu tree implementation of the all-pairs maximum flow problem to rapidly calculate the vitality of the key vertex on each modified graph encountered by the heuristic [31, 34]. For mathematical reasons that are discussed in Section 6, we can exclude leaves from consideration in any removal subset. These two enhancements permit the simulated annealing heuristic to run very fast on even large instances, as we discuss in Section 5.3. ## 5 Computational Analysis We now present performance comparisons on a variety of datasets of the MIP formulation and the simulated annealing heuristic. Following the approach of [43], we generate grid networks, which are planar. We also test the performance of the methods on random networks [40] and on a real drug trafficking network [52]. We first describe these data sets and the computational platform used, and then we present the results. Code and data files are available at our Github repository: [https://github.com/alicepaul/network_interdiction](https://github.com/alicepaul/network_interdiction). ### Data #### 5.1.1 Grid Networks. We generate grid networks in a similar fashion as [43]. We generate square \(M\times M\) grids with \(M\) varying from five to eight. Such graphs have an edge density of \(\frac{4}{M(M+1)}\), which ranges from \(13.3\%\) for \(M=5\) to \(5.6\%\) for \(M=8\). On each grid, we generate edge capacities independently and uniformly at random from the integers from \(1\) to \(M\). For each case, we likewise consider two scenarios, testing a maximum removal subset size of \(m=1\) or \(m=M\) (that is, \(\sqrt{|V|}\)). For each grid size and removal subset size combination, we generate three trial graphs. For each trial, a key vertex is selected uniformly at random over the vertices. #### 5.1.2 Random Networks. Random \(G_{n,m}\) graphs are parametrized by a number of vertices, \(n=|V|\), and a number of edges, \(m=|E|\)[40]. Each graph is sampled by finding a random graph from the set of all connected graphs with \(n\) nodes and \(m\) edges. We test our methods on graphs having the same number of vertices and same number of edges as the grid networks above: \(|V|=\{25,36,49,64\}\) vertices, with \(|E|=\{40,60,84,112\}\), respectively. On each graph, we generate edge capacities independently and uniformly at random from the integers from 1 to \(\sqrt{|V|}\). For each case, we likewise consider two scenarios, testing a maximum removal subset size of \(m=1\) or \(m=\sqrt{|V|}\). For each graph size and removal subset size combination, we generate three trial graphs. For each trial, the key vertex is selected to have the highest betweenness centrality. #### 5.1.3 Drug Trafficking Network. Lastly, we test our models on a real-world covert cocaine trafficking group, prosecuted in New York City in 1996 [52]. This network consists of 28 people between whom 151 phone conversations were intercepted over wiretap over a period of two months. 
An edge exists between persons \(i\) and \(j\) if at least one conversation between them appears in the data set. There are 40 edges in this graph, corresponding to an edge density of 10.6%. We can consider a unit capacity version of the network, as well as a general capacity version in which the capacity on edge \(i-j\) is equal to the number of conversations between them appearing in the data. The weighted network is shown in Figure 1, where line width is proportional to the number of wiretapped calls occurring between two operatives. According to Natarajan _et al._, some individuals in the network are known to have the roles described in Table 1.

\begin{table} \begin{tabular}{|c|c|} \hline Vertex & Role \\ \hline 1, 2, 3 & Colombian Bosses \\ 5 & Courier \\ 6 & Managing Operations \\ 7 & Task Distribution \\ 14 & Technical Operations \\ 16 & Security \\ \hline \end{tabular} \end{table} Table 1: Roles of notable vertices in the cocaine trafficking network of Natarajan _et al._ [52].

We test a maximum removal subset size of \(m=1\) or \(m=5\approx\sqrt{|V|}\). Because the Colombian bosses (vertices 1, 2, and 3) are high-level leaders important to the functioning of the organization, we treat these vertices as the key vertices on which we attempt to maximize vitality.

### Computational Framework

The performance of the MIP and the simulated annealing heuristic was tested on a computer with a 3 GHz 6-Core Intel Core i5 processor and 16 GB of memory. The Single-VIMAX and VIMAX MIP instances were run in python 3.9.6 calling the CPLEX solver through the CPLEX python API, and were each limited to two hours of computation time. The simulated annealing heuristic was also coded in python and limited to 10,000 iterations on each trial instance. Initial results were collected using the Extreme Science and Engineering Discovery Environment (XSEDE) supercomputers [68] and up to five hours of computation time but did not show significantly different results. In addition to the general VIMAX MIP, a single vertex removal MIP (Single VIMAX) was also tested. Single vertex removal simulated annealing results are not reported, as they are effectively equivalent to brute force search.

### Results

Table 2 presents the results of all completed trials. The first five columns give the graph type, the number of vertices (\(|V|\)), the number of edges (\(|E|\)), and, for the general VIMAX problem allowing multiple removals, the maximum allowed size, \(m\), of the removal subset. Column six gives the initial vitality of the key vertex in the original graph with no vertices removed. Columns seven through ten provide results on the performance of the single vertex removal MIP (Single VIMAX); columns eleven through fifteen provide results from the multi-removal MIP (VIMAX); and columns sixteen through nineteen provide results from the multi-removal simulated annealing heuristic. (There is no need to use simulated annealing for Single VIMAX because it can be solved by sequentially testing the removal of each vertex.) For the three methods, the best vitality found within the time or iteration limit, the MIP gap if available, the percentage increase of the best vitality found by the method over the original vitality of the key vertex in the full graph, and the running time in seconds are given. For the multi-removal methods, the size of the best found removal subset (\(|S|\)) is also given. MIP instances that terminated due to the time limit have Time reported as '\(-\)'.
Figure 1: Cocaine trafficking network of Natarajan _et al._ Line width is proportional to the number of wiretapped calls made between pairs of operatives [52].

First we note that Table 2 provides a proof-of-concept demonstrating that it is possible to increase (sometimes dramatically) the vitality of the key vertex through subset removal. Removing a single vertex increased the vitality by 42%-200% in all grid network instances for which the MIP solved to optimality within the time limit, and by up to 82% in the random graph instances; single vertex removal was not able to increase the vitality of the key vertex in the drug network. When allowing multiple removals, simulated annealing was able to identify removal subsets that increased the vitality of the key vertex by as much as 1,373%.

Unsurprisingly, the full VIMAX MIP allowing multiple removals is substantially harder to solve than the single removal MIP. On grid and random networks, the MIP failed to terminate within the two-hour time limit on all instances with at least \(n=36\) nodes. On the \(n=36\) random and the \(7\times 7\) and \(8\times 8\) grid network instances, the single removal MIP also did not terminate within the time limit, but an improving solution was returned in more cases. The large MIP gaps on the MIP allowing multiple removals indicate a failure to find improving integer solutions.

For multiple vertex removal, the simulated annealing heuristic yielded excellent solutions in a fraction of the time required by even the single removal MIP. On the large instances for which the multiple removal MIP reached the time limit, the simulated annealing heuristic found substantially better solutions than the MIP incumbents. For those instances in which the multiple removal MIP solved to optimality, the solutions found by simulated annealing are often optimal and always near-optimal.

The effectiveness of vertex removal to maximize vitality appears to depend on the network structure and the choice of key vertices. While the drug network has approximately the same number of vertices and edges as the 25-node instances of the random and grid networks, the key vertices (corresponding to vertices Boss 1, Boss 2, and Boss 3 in Figure 1) chosen in these trials are less amenable to vitality maximization. The drug network has a large number of leaves, whereas the grid networks do not. As we will see in Section 6, vertices, such as leaves, that do not have at least two vertex-disjoint paths to the key vertex will never appear in an optimal removal subset.

Lastly, in these trials, we chose to restrict the removal subset size to at most \(m\) vertices. The reason to restrict the removal subset size is to reduce the solution space, and thus the complexity, of the problem. This decision is justifiable because we know removing too many vertices will cause overall flow in the network to drop such that the vitality of the key vertex cannot increase. Thus, an important question is: what is an appropriate value of \(m\) that effectively reduces the solution space without compromising the quality of the solutions found? We do not have a definitive answer to this question. However, we see that in many of the trials, the best removal subset identified by any method has a size strictly less than \(m\approx\sqrt{|V|}\), suggesting that this choice of \(m\) is reasonable for the sizes and types of graphs considered here.
## 6 Leveraging Structural Properties of Vitality Thus far, we have established that subset removal can dramatically increase the vitality of a key vertex. However, solving this problem exactly as a MIP is computationally intractable for even modestly sized graphs. Fortunately, simulated annealing is an appealing alternative that yields very good solutions in dramatically less time than the MIP. In this section, we explore mathematical properties that characterize vertices that can be ignored by subset removal optimization approaches. We demonstrate how these properties can be leveraged to simplify the graph on which VIMAX is run. ### Identifying Vitality-_Reducing_ Vertices To reduce the complexity of the optimization formulation, we turn to identifying conditions that cause a vertex to have a vitality-_reducing_ effect on the key vertex. This allows us to ignore such vertices in any candidate removal subset and reduce the solution space of the VIMAX problem. Our first observation is that the presence of a cycle is necessary for the removal of a vertex to increase the vitality of a key vertex. The vitality of a leaf is always equal to \(0\), so the removal of any subset that results in \(k\) becoming a leaf also cannot increase the vitality of \(k\). As a corollary, if \(k\) has neighbor set \(N(k)\) and more than \(|N(k)|-2\) of \(k\)'s neighbors are removed, the vitality effect on \(k\) will be nonpositive. We can generalize this further. When there are not at least two vertex-disjoint paths from \(i\) to \(k\), any removal subset including \(i\) will have a vitality effect on \(k\) no greater than the same subset excluding \(i\), as stated by the following theorem1: Footnote 1: A more general cut theorem holds for the specific case of an undirected graph in which all edges in the graph have unit capacity [47]. In such a graph, the value of the maximum \(s-t\) flow equals the number of edge disjoint paths between \(s\) and \(t\) in the graph. In this case, the relationship between the size of the cut between the key vertex \(k\) and a candidate for removal, \(i\), and the connectivity between vertices along the boundaries of that cut conveys information about the vitality effect on \(k\) of removing \(i\). The reader is also referred to [54] for an overview of how this theorem might be implemented in practice for unit capacity, undirected graphs. **Theorem 2**.: _Let \(G\) be a graph with key vertex \(k\), and let \(i\) be a vertex such that there do not exist at least two vertex-disjoint paths starting at \(i\) and ending at \(k\). Let \(S\) be any vertex subset containing \(i\), and let \(T=S\setminus\{i\}\). Then, \(\mathcal{L}_{k}(G\setminus S)\leq\mathcal{L}_{k}(G\setminus T)\). Therefore, \(T\) will have at least as large a vitality effect on \(k\) as \(S\)._ Proof.: The proof of this can be found in Appendix B. Put simply, the existence of only one vertex-disjoint path between \(i\) and \(k\) means that \(i\) and \(k\) do not lie on a cycle. Therefore when \(i\) is removed, any \(s-t\) paths that previously passed through \(i\) cannot be rerouted through any alternate path passing through \(k\). Note that identifying vertices that do not have at least two vertex-disjoint paths to \(k\) is computationally straightforward. We can solve an all \(u-k\) pairs maximum flow problem on a related graph \(\hat{G}\) in which every vertex \(u\) is replaced with a pair of vertices connected by a unit capacity edge: \((u,u^{\prime})\). 
For every directed edge \(i-j\) in the original graph, we include the directed edge \((i^{\prime},j)\) in the modified graph. Through the use of a Gomory-Hu tree, we can solve this in \(O(|V|^{3}\sqrt{|E|})\) time [31, 34]. Any vertex \(u\) corresponding to vertex \(u^{\prime}\) in \(\hat{G}\) that has a maximum \(u^{\prime}-k\) flow of one in \(\hat{G}\) does not have at least two vertex-disjoint paths to \(k\) in the original graph and can be ignored by any removal subset. We call the set of such vertices \(Q\). Every vertex in \(Q\) should be maintained in the graph and not be considered for removal. These properties show that when seeking a vitality-maximizing subset for removal, we can ignore all subsets that include:

* vertices in \(Q\) (i.e., they do not share a cycle with \(k\));
* more than \(|N(k)|-2\) of \(k\)'s neighbors.

After performing preprocessing on the graph to identify \(N(k)\) and \(Q\), we can add the following constraints to the MIP formulation:

\[\begin{split}& z_{i}=1,\forall i\in Q\\ &\sum_{i\in N(k)}z_{i}\geq 2\end{split} \tag{6}\]

Although the above constraints provide a tighter formulation for VIMAX, the anticipated benefits of these constraints are likely to be modest. Table 3 shows \(|Q|\) (the number of vertices that do not have at least two vertex-disjoint paths to \(k\)) for each graph used for testing in Section 5. Unsurprisingly given their structure, all the vertices in the grid networks have at least two vertex-disjoint paths to \(k\); thus none of these vertices can be eliminated from consideration, and the grid networks are omitted from Table 3. By contrast, in the sparse drug trafficking network nearly half of the vertices do not have at least two vertex-disjoint paths to the key vertex; this is a significant reduction in the number of candidate vertices for removal, but VIMAX was readily tractable on this already-small network. Thus, this criterion alone is unlikely to render previously intractable MIP instances tractable.

\begin{table} \begin{tabular}{|c|c|c|c|c||c|c|c|c|} \hline & & & & & & & \% Decr & \% Inc \\ Graph Type & & \(|V|\) & \(|E|\) & \(|Q|\) & \(|\hat{V}|\) & \(|\hat{E}|\) & Time & Obj \\ \hline \hline Drug Network & unit cap. & 28 & 40 & 14 & 18 & 30 & 93.19 & 0.00 \\ & & 28 & 40 & 14 & 18 & 30 & 94.68 & 0.00 \\ & & 28 & 40 & 13 & 18 & 30 & 91.83 & 0.00 \\ & & 28 & 40 & 14 & 20 & 32 & 89.40 & 0.00 \\ & & 28 & 40 & 14 & 20 & 32 & 92.91 & 0.00 \\ & & 28 & 40 & 13 & 20 & 32 & 90.98 & 0.00 \\ \hline Random & \(n=25\) & 25 & 40 & 23 & 12 & 11 & 99.98 & 0.00 \\ & & 25 & 40 & 1 & 25 & 40 & 62.92 & 0.00 \\ & & 25 & 40 & 6 & 24 & 38 & 23.98 & 0.00 \\ & & 36 & 60 & 33 & 5 & 4 & 99.99 & 0.00 \\ & & 36 & 60 & 4 & 35 & 59 & - & 13.56 \\ & & 36 & 60 & 8 & 35 & 59 & - & 0.00 \\ & & 49 & 84 & 6 & 48 & 83 & - & 0.00 \\ & & 49 & 84 & 7 & 47 & 82 & - & 0.00 \\ & & 49 & 84 & 7 & 48 & 83 & - & 0.00 \\ & & 64 & 112 & 8 & 64 & 112 & - & 0.00 \\ & & 64 & 112 & 10 & 63 & 111 & - & 0.00 \\ & & 64 & 112 & 10 & 63 & 111 & - & 0.00 \\ \hline \end{tabular} \end{table} Table 3: Improvement in key VIMAX instance size parameters by identifying vitality-reducing vertices and using graph simplification. \(|Q|\) is the number of vertices that do not have at least two vertex-disjoint paths to \(k\); vertices in \(Q\) can be ignored by VIMAX (see Section 6.1). \(|\hat{V}|\) and \(|\hat{E}|\) are the numbers of vertices and edges, respectively, in the reduced graph after applying the graph simplification method of Section 6.2. The last two columns report the percentage decrease in time and percentage increase in best objective function value of the graph simplification method compared to the Multi-Removal MIP results reported in Table 2. Entries denoted by '-' indicate instances in which the MIP did not terminate within two hours.
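The vertex-splitting check described above is straightforward to prototype. The sketch below uses one maximum-flow computation per vertex with networkx rather than a Gomory-Hu tree, and gives every arc unit capacity since only path disjointness matters for this test; the function name is illustrative.

```python
import networkx as nx

def ignorable_vertices(G, k):
    """Return Q: vertices of the directed graph G lacking two vertex-disjoint paths to k.

    Each vertex u is split into (u, "in") -> (u, "out") with unit capacity, and each
    original arc (i, j) becomes ((i, "out"), (j, "in")).  Vertex u has at least two
    vertex-disjoint paths to k exactly when the (u,"out") -> (k,"in") max flow is >= 2."""
    H = nx.DiGraph()
    for u in G.nodes:
        H.add_edge((u, "in"), (u, "out"), capacity=1)
    for i, j in G.edges:
        H.add_edge((i, "out"), (j, "in"), capacity=1)
    Q = []
    for u in G.nodes:
        if u != k and nx.maximum_flow_value(H, (u, "out"), (k, "in")) < 2:
            Q.append(u)
    return Q
```

Every vertex returned by this check can be fixed to \(z_{i}=1\) in constraints (6) and skipped by the simulated annealing neighborhood.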
\(|\hat{V}|\) and \(|\hat{E}|\) are the numbers of vertices and edges, respectively, in the reduced graph after applying the graph simplification method of Section 6.2. The last two columns report the percentage decrease in time and percentage increase in best objective function value of the graph simplification method compared to the Multi-Removal MIP results reported in Table 2. Entries denoted by ’-’ indicate instances in which the MIP did not terminate within two hours. ### Simplifying the Graph Because VIMAX grows rapidly in the number of vertices, we can improve the computational tractability of VIMAX by simplifying our original graph into a vitality-preserving graph having fewer vertices. We rely heavily on Theorem 2 to do this. Suppose that a vertex \(v\) disconnects the graph into two components \(T_{1}\) and \(T_{2}\) such that \(k\in T_{1}\). Then, by Theorem 2, an optimal solution will not contain any vertex in \(T_{2}\). Further, the maximum flows between pairs of vertices within \(T_{2}\) do not contribute to the vitality effect on \(k\). Therefore, all that is needed to preserve the vitality effect on \(k\) in the simplified graph is to preserve information about the maximum flow between all pairs of vertices \(s,t\) such that \(s\in T_{1}\) and \(t\in T_{2}\). For all vertices \(t\in T_{2}\) we create a single edge between \(t\) and \(v\) with capacity equal to the maximum flow between \(t\) and \(v\). This replaces all previous edges between vertices in \(T_{2}\). This affects the value of the all-pairs maximum flow problem but does not affect the vitality effect on \(k\) for any subset \(S\subset T_{1}\). Further, if any subset of vertices \(T^{\prime}\subseteq T_{2}\) all have the same new capacity value, we combine \(T^{\prime}\) into a single vertex with weight \(|T^{\prime}|\). When calculating the maximum flow between any pair of vertices \(s\) and \(t\) in the graph, we multiply the flow by the product of the weights of the vertices to account for this simplification. Using the process described in the previous section, we can identify the subset of vertices \(Q\subseteq V\setminus\{k\}\) that do not have at least two vertex-disjoint paths to \(k\). Given a vertex \(i\in Q\), we find a path from \(i\) to \(k\) and find the first vertex \(v\) along \(i\)'s path to \(k\) such that \(v\) has at least two vertex-disjoint paths to \(k\). Removing the vertex \(v\) disconnects the graph. Therefore, we follow the simplification process above and mark all vertices in the corresponding \(T_{2}\), including \(i\), as processed. We then repeatedly identify any unprocessed vertex in \(Q\) to further simplify the graph. After all vertices in \(Q\) have been processed, all these vertices will be weighted leaves in the new simplified graph where the weight depends on how many vertices have been combined. All other vertices will retain a weight of one. Figure 2 shows an example of this simplification process in which there are two components that have been simplified. Note that vertices 4, 6, and 7 have been combined together into a vertex with weight three. Further, vertices 5 and 8 have been combined together into a vertex with weight two. As argued above, the maximum flow between all pairs of vertices that were in the same simplified component never contribute to the vitality effect on \(k\). Therefore, we ignore these pairs in the optimization problem by removing the appropriate variables and constraints. 
We therefore just need to check that we have preserved the maximum flow between all pairs of vertices that were not in the same component. This is guaranteed by the vertex weights, by which the corresponding flows are multiplied. For example, in Figure 2, we multiply the maximum flow between vertex 4 and vertex 1 by the weight of the combined vertex, accounting for all the paths between vertices 4, 6, and 7 and vertex 1. Thus, our optimization problem still finds an optimal subset to remove on the simplified graph that is optimal in the original graph. The number of pairs of vertices decreases from 45 to 19 since the number of vertices excluding \(k\) decreases from 10 to 7 and we can ignore the flow between vertices 9 and 10 and between vertices 4 and 8 in the simplified graph. Figure 2: An example of a graph (left) and its simplified version (right) with vertex weights. Vertices 4, 6, and 7 have been combined together into a vertex with weight three. Further, vertices 5 and 8 have been combined together into a vertex with weight two. Table 3 shows the number of vertices (\(|\hat{V}|\)) and edges (\(|\hat{E}|\)) in each test graph after applying the graph simplification algorithm. The only graph types experiencing an appreciable reduction in size after simplification are the drug trafficking network and the smaller random graphs. We posit that highly connected graphs such as the grid networks are less amenable to the simplification method than sparser networks. In Table 3 we also include the percentage decrease in time and percentage increase in the best objective function value found via graph simplification relative to the Multi-Removal MIP results reported in Table 2. The time includes the time to perform the graph simplification, which is very efficient. For graphs with a significant reduction in the number of nodes and edges, we see a corresponding decrease in the runtime for the MIP. For the larger networks that did not terminate within the time limit, we only see the best vitality found improve in one instance. ## 7 Future Work and Conclusions In this paper we have presented the VIMAX optimization problem that identifies a subset of vertices whose removal maximizes the volume of flow passing through a key vertex in the network. VIMAX is NP-Hard. We have used the dualize-and-combine method of [73] to formulate VIMAX as a mixed integer linear program, and we compared its performance to that of a simulated annealing heuristic. We also demonstrated how identifying vertices not having at least two vertex-disjoint paths to the key vertex can be used to simplify the graph and reduce computation time on certain graph types. Additionally, this paper opens up a rich area of future research. * Graph Simplification: Additional properties of vitality-reducing vertices, such as those outlined in [54] for the unit capacity case, could be derived for the general capacity case and used to preprocess or simplify the graph to reduce the solution space of VIMAX. In particular, it would be beneficial to identify small cuts in the graph such that all vertices on the opposite side of the cut from \(k\) can be ignored from consideration. * Benders Decomposition: Because the number of constraints in the VIMAX MIP grows on the order of \(O(|E||V|^{2})\), we can use the Benders decomposition algorithm to solve our problem for large graphs. The decomposition is presented in Appendix C, but preliminary testing did not improve the MIP performance. The survey of Smith and Song
illustrates a variety of approaches that could be applied to improve the performance of the Benders decomposition of VIMAX [64]. * Optimization: In this paper, we have focused on identifying vertices having a high vitality effect on the key vertex without considering the cost or difficulty of removing them from the graph. An enhancement to VIMAX could include a budget constraint restricting the choice of subsets based on the difficulty of their removal. * Game theory and dynamic response: The disruption technique described in this paper focuses on the network at one snapshot in time and assumes that any subset removal occurs simultaneously and that the network remains static. Extensions to VIMAX might explore cascading effects of sequential vertex removal, similar to the literature on multi-period interdiction [23], cascading failures [17, 51, 76], agent-based models for counter-interdiction responses [46], and game theoretic responses of the network to disruptions, such as adding new edges. * Imperfect information: The VIMAX formulation presented here assumes complete and perfect knowledge of the network's structure. However, the complete structure of a covert network is typically not known to enforcement agencies, and can evolve rapidly [41]. Future work could address applying VIMAX to networks with uncertain or unknown structure. * Robust network design: We can use the results of this research to design networks, such as telecommunication and other infrastructure networks, to be robust to vitality-diverting attacks [18]. * Multiple key vertices: In the case that we want to maximize the flow through a subset \(S\) of key vertices, we can extend the definition of vitality maximization to maximize the all-pairs vitality of \(S\). The MIP and simulated annealing algorithm can be updated accordingly. VIMAX has broad applicability to problems including disrupting organized crime rings, such as those involved in terrorism, drug smuggling, and human trafficking; disrupting telecommunications networks and power networks; as well as robust network design. ## 8 Acknowledgements This work used the Extreme Science and Engineering Discovery Environment (XSEDE) [68], which is supported by National Science Foundation grant number ACI-1548562. Specifically, this work used the XSEDE Bridges-2 Extreme Memory and Regular Memory supercomputers at the Pittsburgh Supercomputing Center through allocation MTH210021. We thank consultant T. J. Olesky for their assistance troubleshooting batch calls to AMPL, which was made possible through the XSEDE Extended Collaborative Support Service (ECSS) program [71]. The authors would also like to acknowledge Doug Altner, Michael Ernst, Elizabeth Ferme, Sam Gutekunst, Danika Lindsay, Yaniv Ovadia, Sean Plott, and Andrew S. Ronan for their contributions to early efforts in this work [47, 35, 53]. This work was supported by the National Science Foundation Research Experiences for Undergraduates program (NSF-DMS-0755540). ## Appendix A Proof of Theorem 1 In this section we prove Theorem 1 stating that the all-pairs vitality maximization problem is NP-Hard. Our proof extends the proof of [53] for the special case of undirected, unit-capacity edges. We first restate VIMAX as a decision problem: For a fixed value \(C\), does there exist a subset \(S\) such that \(\mathcal{L}_{k}(G\setminus S)\geq C\)? **Theorem 3**.: _The all-pairs vitality maximization problem is NP-Hard._ Proof.: We use a reduction from the 3-Satisfiability problem (3SAT).
Given an instance of 3SAT with \(n\) boolean variables \(x_{1},x_{2},\ldots,x_{n}\) and \(m\) clauses in 3-conjunctive normal form \(c_{1},c_{2},\ldots,c_{m}\), the 3SAT decision problem is whether there is an assignment of variables to true/false values such that all clauses are satisfied. As an example with three variables, any assignment with \(x_{3}\) set to false would satisfy the two clauses (\(x_{1}\) or \(\overline{x_{2}}\) or \(\overline{x_{3}}\)) and (\(\overline{x_{1}}\) or \(x_{2}\) or \(\overline{x_{3}}\)). Given an instance of 3SAT, we construct a corresponding instance of VIMAX. We start building our directed graph \(G\) with three vertices \(d_{1}\), \(k\) (the key vertex), and \(d_{2}\) with an edge from \(k\) to \(d_{2}\) with capacity \(n+m\). Further, for each variable \(x_{i}\) we create four vertices \(\{a_{i},b_{i},t_{i},f_{i}\}\) and add edges \((d_{1},a_{i})\), \((a_{i},t_{i})\), and \((a_{i},f_{i})\) each with capacity two and edges \((t_{i},b_{i})\), \((f_{i},b_{i})\), \((t_{i},d_{2})\), \((f_{i},d_{2})\), and \((b_{i},k)\) each with capacity one. Then, for each clause \(c_{j}\), we create two variables \(u_{j}\) and \(v_{j}\) and add unit capacity edges \((d_{1},u_{j})\) and \((v_{j},k)\). To encode this clause, for each variable \(x_{i}\) in clause \(c_{j}\) we add unit edges \((u_{j},t_{i})\) and \((t_{i},v_{j})\); for each variable \(\overline{x_{i}}\) in clause \(c_{j}\) we add unit edges \((u_{j},f_{i})\) and \((f_{i},v_{j})\). Last, we create \(M=8\cdot(m+n+n\cdot m)\) leaves with unit edges to \(d_{1}\) and \(M\) leaves with unit edges from \(d_{2}\) and set \(C=(M+1)^{2}(n+m)\). An example graph of a single-clause, three-variable, 3SAT problem having clause (\(\overline{x_{1}}\) or \(x_{2}\) or \(\overline{x_{3}}\)) is given in Figure 3. Figure 3: Graph representation of a single clause 3SAT problem with three variables and the clause (\(\overline{x_{1}}\) or \(x_{2}\) or \(\overline{x_{3}}\)). All edge capacities equal one except where indicated otherwise. Note that the leaves adjacent to \(d_{1}\) and \(d_{2}\) essentially increase the weight of the flow between \(d_{1}\) and \(d_{2}\). In particular, if we define \[\mathcal{L}_{k}^{s,t}(G\setminus S):=z_{st}(G\setminus S)-z_{st}(G\setminus(S \cup\{k\}))\] and let \(V^{\prime}\) be all vertices excluding these leaves as well as \(d_{1}\), \(d_{2}\), and \(k\), then we can rewrite the all-pairs vitality as \[\mathcal{L}_{k}(G\setminus S)\] \[=\sum_{\begin{subarray}{c}s,t\in V(S\cup\{k\})\\ s\neq t\end{subarray}}\mathcal{L}_{k}^{s,t}(G\setminus S)\] \[=(M+1)^{2}\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)+(M+1)\sum_ {s\in V^{\prime}\setminus S}\Big{[}\mathcal{L}_{k}^{d_{1},s}(G\setminus S)+ \mathcal{L}^{s,d_{2}}(G\setminus S)\Big{]}+\sum_{\begin{subarray}{c}s,t\in V ^{\prime}\setminus S\\ s\neq t\end{subarray}}\mathcal{L}_{k}^{s,t}(G\setminus S)\] \[=(M+1)^{2}\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)+(M+1)\sum_ {s\in V^{\prime}\setminus S}\mathcal{L}^{s,d_{2}}(G\setminus S).\] The last line holds since paths from \(d_{1}\) to \(s\in V^{\prime}\setminus S\) or between \(s\) and \(t\in V^{\prime}\setminus S\) cannot travel through \(k\). Further, we can bound the second half of the sum above by bounding the vitality by the capacity out of the starting node for each maximum flow. 
\[(M+1)\sum_{s\in V^{\prime}\setminus S}\mathcal{L}^{s,d_{2}}(G \setminus S) \leq(M+1)(4m+9n+2m\cdot n)\] \[\leq\frac{1}{2}(M+1)^{2}.\] This shows that the maximum flow from pairs that are not \(\{d_{1},d_{2}\}\) contributes a trivial amount to the overall vitality. Therefore, finding a subset such that \(\mathcal{L}_{k}(G\setminus S)\geq C=(M+1)^{2}(n+m)\) is equivalent to finding a subset \(S\) such that \(\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)\geq n+m\). We now show that given an assignment of variables to boolean values that satisfy all clauses, we can find an equivalent subset \(S\) such that \(\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)\geq n+m\). Let \(S\) contain \(t_{i}\) for all \(i\) such that \(x_{i}\) is set to false and \(f_{i}\) for all \(i\) such that \(x_{i}\) is set to true. Consider the maximum flow between \(d_{1}\) and \(d_{2}\) in \(G\setminus S\). For each variable \(x_{i}\) such that \(t_{i}\in S\), we send two units of flow: one along the path \((d_{1}\)-\(a_{i}\)-\(f_{i}\)-\(b_{i}\)-\(k\)-\(d_{2})\) and one along the path \((d_{1}\)-\(a_{i}\)-\(f_{i}\)-\(d_{2})\). If, instead, \(f_{i}\in S\), then the paths change to use \(t_{i}\) instead of \(f_{i}\). Further for each clause \(j\), since this clause is satisfied, there exists at least one vertex \(t_{i}\) or \(f_{i}\) adjacent to \(u_{j}\) that is not in \(S\). Without loss of generality, let this vertex be \(t_{i}\). We send one unit of flow along the path \((d_{1}\)-\(u_{j}\)-\(t_{i}\)-\(v_{j}\)-\(k\)-\(d_{2})\). The overall flow has value \(2n+m\). Since all edges adjacent to \(d_{1}\) are saturated, this is a maximum flow. Now consider the maximum flow between \(d_{1}\) and \(d_{2}\) in \(G\setminus(S\cup\{k\})\). For each variable \(x_{i}\) such that \(t_{i}\in S\), we send one unit of flow along the path \((d_{1}\)-\(a_{i}\)-\(f_{i}\)-\(d_{2})\). If, instead, \(f_{i}\in S\), then the path changes to use \(t_{i}\) instead of \(f_{i}\). The overall flow has value \(n\). Since all edges adjacent to \(d_{2}\) are saturated in \(G\setminus(S\cup\{k\})\) this is a maximum flow. This shows that \(\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)\geq n+m\). We must now show the reverse direction to complete the proof. Suppose that we have found a subset \(S\) such that \(\mathcal{L}_{k}(G\setminus S)\geq C\). Then, given that all pairs except \(d_{1}\) and \(d_{2}\) contribute at most \(\frac{1}{2}(M+1)^{2}\) to the vitality, it must be the case that \(\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)\geq n+m\). We decompose the flow into unit flow paths from \(d_{1}\) to \(d_{2}\). Let \(f(s,t)\) be the number of these paths that go from \(s\) to \(t\) in the maximum flow from \(d_{1}\) to \(d_{2}\) in \(G\setminus S\) and \(f^{\prime}(s,t)\) be the number of paths from \(s\) to \(t\) in the maximum flow between \(d_{1}\) and \(d_{2}\) in \(G\setminus(S\cup\{k\})\). Then, \[\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)=\sum_{i=1}^{n}[f(a_{i},d_{2})-f^{ \prime}(a_{i},d_{2})]+\sum_{j=1}^{m}[f(u_{j},d_{2})-f^{\prime}(u_{j},d_{2})]. \tag{7}\] For the first term in Equation 7, we can verify that \([f(a_{i},d_{2})-f^{\prime}(a_{i},d_{2})]\leq 1\) if exactly one of \(t_{i}\) and \(f_{i}\) is in \(S\) and \(\{a_{i},b_{i}\}\cap S=\emptyset\) and at most zero otherwise. In particular, if \(t_{i}\) and \(f_{i}\) are both in \(S\) then \(f(a_{i},d_{2})=f^{\prime}(a_{i},d_{2})=0\). 
If both \(t_{i}\) and \(f_{i}\) are not in \(S\), then at most two units of flow can go from \(a_{i}\) to \(d_{2}\) in both graphs and both \(t_{i}\) and \(f_{i}\) can avoid using vertex \(k\). Only when exactly one of \(t_{i}\) or \(f_{i}\) has been chosen will at least one path be forced to go through vertex \(k\). For the second term, each term is also at most one given the unit capacity of the edge from \(d_{1}\) into \(u_{j}\). Therefore, \[\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)=\sum_{i=1}^{n}[f(a_{i},d_{2})-f^{ \prime}(a_{i},d_{2})]+\sum_{j=1}^{m}[f(c_{j},d_{2})-f^{\prime}(c_{j},d_{2})] \leq n+m. \tag{8}\] Since \(\mathcal{L}_{k}^{d_{1},d_{2}}(G\setminus S)\geq n+m\) this implies equality throughout and that \(|\{t_{i},f_{i}\}\cap S|=1\) for all \(i=1,2,\ldots,n\). For each variable for which \(t_{i}\) is in \(S\), we set that variable to false. Otherwise, we set the variable to true. Last, in order for every clause to contribute at least one to the overall vitality, \(u_{j}\) must be adjacent to some \(t_{i}\) or \(f_{i}\) not in \(S\). Given the design of our network, this indicates that the assignment satisfies that clause. Overall, this shows that every 3SAT decision problem can be reduced to a VIMAX decision problem and that VIMAX is NP-Hard. ## Appendix B Proof of Theorem 2 Here we prove Theorem 2 stating that the removal of any vertex not having at least two vertex-disjoint paths to the key vertex \(k\) can never increase the vitality of \(k\). **Theorem 4**.: _Let \(G\) be a graph with key vertex \(k\), and let \(i\) be a vertex such that there do not exist at least two vertex-disjoint paths starting at \(i\) and ending at \(k\). Let \(S\) be any vertex subset containing \(i\), and let \(T=S\setminus\{i\}\). Then, \(\mathcal{L}_{k}(G\setminus S)\leq\mathcal{L}_{k}(G\setminus T)\). Therefore, \(T\) will have at least as large a vitality effect on \(k\) as \(S\)._ Proof.: Let \(G\) be a graph with key vertex \(k\) and let \(i\) be a vertex such that there do not exist at least two vertex-disjoint paths starting at \(i\) and ending at \(k\). Then there exists a cut vertex \(v\) whose removal would disconnect the graph into at least two components. We consider two cases, \(v\neq i\) and \(v=i\). When \(v\neq i\), then \(v\) separates a component \(G_{k}\) that includes \(k\) from a component \(G_{i}\) that includes \(i\). Consider the maximum flow between an \(s-t\) pair \((s,t\neq k)\). * If both \(s\) and \(t\) are in \(G_{i}\), the flow between them is unaffected by the removal of vertex \(k\), whether or not vertex \(i\) is removed from the graph. This is because any optimal flow path that passes through vertex \(k\) must first go into and out of vertex \(v\), creating a flow cycle, \(s-\ldots-v-\ldots-k-\ldots-v-\ldots-t\), and thus is equivalent to a flow path that avoids \(G_{k}\) entirely, \(s-\ldots-v-\ldots-t\). * If both \(s\) and \(t\) are in \(G_{k}\), their contribution to the vitality of \(k\) is unaffected by the removal of \(i\) by the same logic as above: any optimal flow path that passes through vertex \(i\) must go into and out of vertex \(v\), creating a flow cycle, and thus is equivalent to a flow path that avoids \(G_{i}\) entirely. * If, without loss of generality, \(s\in G_{i}\) and \(t\in G_{k}\), then the removal of vertex \(i\) may reduce the flow between \(s-\ldots-v\), but the remainder of the path \(v-\ldots-t\) is unaffected. Thus no additional flow can be routed through \(k\) when \(i\) is removed than when \(i\) is present. 
When \(v=i\), then \(i\) separates a component \(G_{k}\) that includes \(k\) from the remainder of the graph, \(G_{i}\). In this case, the removal of \(i\) will eliminate all \(s-t\) flow between \(s\in G_{i}\) and \(t\in G_{k}\), regardless of whether or not \(k\) is in the graph. Thus, no additional flow can be routed through \(k\) when \(i\) is removed from the graph than when \(i\) is present. ## Appendix C Benders Decomposition Because the number of constraints in the VIMAX MIP grows on the order of \(O(|E||V|^{2})\), we can use Benders decomposition algorithm to solve our problem for large graphs. In our case, the integer master problem chooses the subset of vertices to remove; this problem has relatively few variables and constraints. Given a fixed removal subset, we are left with a large linear network flow subproblem that is guaranteed to have an integer optimal solution. We see in Equation 5 constraints that couple \(w_{i,j}\), \(x_{i,j,s,t}\), \(y_{i,j,s,t}\) and \(y_{i,s,t}\). We let the \(z_{i}\)'s and \(w_{i,j}\)'s be the variables in our master problem. Our initial master problem contains only the constraints related to the \(w_{i,j}\)'s and \(z_{i}\)'s, representing the choice of subset to remove. Thus the master problem is Maximize \[\mathcal{L}_{k}\] subject to \[\begin{array}{ll}&\sum_{i\in V}z_{i}\geq n-m\\ &z_{k}=1\\ &w_{i,j}\leq z_{i},\forall(i,j)\in E\\ &w_{i,j}\leq z_{j},\forall(i,j)\in E\\ &w_{i,j}\geq z_{i}+z_{j}-1,\forall(i,j)\in E\\ &z_{i}\text{ binary, }\forall i\in V\\ &w_{i,j}\geq 0,\forall(i,j)\in E\\ &\mathcal{L}_{k}\geq 0.\end{array}\] (9) Here, \(\mathcal{L}_{k}\) represents the optimal vitality of \(k\). It currently has no restrictions on its value. Solving Equation 9 determines a feasible \(\mathbf{z}\) and \(\mathbf{w}\), which we can use to compute the vitality of \(k\) in the dual of the linear subproblem. When taking the dual we let \(x^{{}^{\prime}}_{i,s,t}\) be the dual variables corresponding to the flow balance constraints of the \(x_{i,j,s,t}\)'s and \(x^{{}^{\prime}}_{i,j,s,t}\) be the dual variables corresponding to the capacity constraints on the \(x_{i,j,s,t}\)'s. Similarly, we let \(y^{{}^{\prime}}_{i,j,s,t}\) be the dual variables corresponding to the edge constraints on \(y_{i,j,s,t}\), and we let \(y^{{}^{\prime}}_{s,t}\) be the dual variables corresponding to the constraints on the relationship between \(y_{s,s,t}\) and \(y_{t,s,t}\). 
The linear subproblem becomes \[\begin{array}{ll}\text{Minimize}&\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E}u_{i,j}w_{i,j}x^{{}^{\prime}}_{i,j,s,t}+ \sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}y^{{}^{\prime}}_{s,t}-\sum_{\begin{subarray}{c}s,t\in V^ {\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E^{\prime}}(1-w_{i,j})y^{{}^{\prime}}_{i,j,s,t}\\ \text{subject to}&x^{{}^{\prime}}_{i,s,t}-x^{{}^{\prime}}_{j,s,t}+x^{{}^{ \prime}}_{i,j,s,t}\geq 0,\forall(i,j)\in E,\forall s,t\in V^{\prime}\\ &-x_{s,s,t}+x_{t,s,t}\geq 1,\forall s,t\in V^{\prime}\\ &\sum_{j:(i,j)\in E^{\prime}}y^{{}^{\prime}}_{i,j,s,t}-\sum_{k:(k,i)\in E^{ \prime}}y^{{}^{\prime}}_{k,i,s,t}=\begin{cases}y^{{}^{\prime}}_{s,t}&\text{if $i=s$}\\ -y^{{}^{\prime}}_{s,t}&\text{if $i=t$}\quad\forall i,s,t\in V^{\prime}\\ 0&\text{otherwise}\end{cases}\\ &y^{{}^{\prime}}_{i,j,s,t}\geq-u_{i,j},\forall(i,j)\in E^{\prime},\forall s,t \in V^{\prime}\\ &x^{{}^{\prime}}_{i,j,s,t}\geq 0,\forall(i,j)\in E,\forall s,t\in V^{ \prime}\\ &x^{{}^{\prime}}_{i,s,t}\text{ unrestricted},\forall i,s,t\in V^{ \prime}\\ &y_{s,t}\leq 0,\forall s,t\in V^{\prime}\\ &y^{{}^{\prime}}_{i,j,s,t}\leq 0,\forall(i,j)\in E^{\prime},\forall s,t \in V^{\prime}.\end{cases} \tag{10}\] At the beginning of each iteration \(c\), the master is solved and we obtain the optimal values for \(z_{i}\) and \(w_{i,j}\). Initially, we start with an infinite objective function and all \(z_{i}=1\). The dual of the linear subproblem, shown in Equation 10, is then solved with the optimal \(w_{i,j}\)'s substituted in. If the subproblem is unbounded, simplex returns the extreme ray, defining \(\mathbf{x}^{{}^{\prime}}_{c}\) and \(\mathbf{y}^{{}^{\prime}}_{c}\), and we add the constraint \[\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E}u_{i,j}w_{i,j}x^{{}^{\prime}}_{i,j,s,t,c }+\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}y^{{}^{\prime}}_{s,t,c}-\sum_{\begin{subarray}{c}s,t \in V^{\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E^{\prime}}(1-w_{i,j})y^{{}^{\prime}}_{i,j,s,t,c}\geq 0.\] If the subproblem has an objective function value less than or equal to the incumbent value of \(\mathcal{L}_{k}\), then we add in the constraint \[\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E}u_{i,j}w_{i,j}x^{{}^{\prime}}_{i,j,s,t,c }+\sum_{\begin{subarray}{c}s,t\in V^{\prime}\\ s\neq t\end{subarray}}y^{{}^{\prime}}_{s,t,c}-\sum_{\begin{subarray}{c}s,t \in V^{\prime}\\ s\neq t\end{subarray}}\sum_{(i,j)\in E^{\prime}}(1-w_{i,j})y^{{}^{\prime}}_{i,j,s,t,c}\geq\mathcal{L}_{k}.\] Otherwise, the algorithm terminates. Preliminary testing of the Benders decomposition of VIMAX reveals the same problem that plagues large instances of the MIP formulation: the objective function values of the linear subproblems encountered are quite large compared to the objective function value of any feasible integer solution. Thus, the cuts added do not adequately constrain the master problem. Future work is needed to develop improved Benders decompositions.
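To make the preprocessing step of Section 6.1 and the underlying vitality computation concrete, the following is a minimal Python sketch using the networkx library. It is an illustration under stated assumptions rather than the implementation used in this paper: the graph is assumed to be a directed networkx DiGraph whose edges carry a `capacity` attribute, the function and variable names are ours, and the brute-force all-pairs loop is only practical for small instances (the experiments above rely on the Gomory-Hu tree, the MIP, and simulated annealing instead).

```python
import networkx as nx

def all_pairs_vitality(G, k, S=()):
    """All-pairs vitality of key vertex k in G with S removed: the total drop in
    max s-t flow, over ordered pairs s, t != k, caused by additionally deleting k.
    Assumes k is not in S and every edge carries a 'capacity' attribute."""
    H = G.copy()
    H.remove_nodes_from(S)
    H_minus_k = H.copy()
    H_minus_k.remove_node(k)
    others = [v for v in H.nodes if v != k]
    total = 0
    for s in others:
        for t in others:
            if s == t:
                continue
            with_k = nx.maximum_flow_value(H, s, t)
            without_k = nx.maximum_flow_value(H_minus_k, s, t)
            total += with_k - without_k
    return total

def vitality_reducing_vertices(G, k):
    """The set Q from Section 6.1: vertices with no two vertex-disjoint paths to k.
    Uses the vertex-splitting construction: each vertex u becomes an in-copy and an
    out-copy joined by a unit-capacity edge, and each original directed edge (i, j)
    becomes an edge from i's out-copy to j's in-copy."""
    aux = nx.DiGraph()
    for u in G.nodes:
        aux.add_edge(("in", u), ("out", u), capacity=1)
    for i, j in G.edges:
        aux.add_edge(("out", i), ("in", j), capacity=1)
    Q = set()
    for u in G.nodes:
        if u == k:
            continue
        # at most one vertex-disjoint u-k path exactly when the max flow is below two
        if nx.maximum_flow_value(aux, ("out", u), ("in", k)) < 2:
            Q.add(u)
    return Q
```

Vertices returned by `vitality_reducing_vertices` are the ones that can be forced to remain in the graph through the extra constraints in Eq. (6) before the MIP or the simulated annealing search is run.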
2308.14117
Cross-Entropy-Based Approach to Multi-Objective Electric Vehicle Charging Infrastructure Planning
Pure electric vehicles (PEVs) are increasingly adopted to decarbonize the transport sector and mitigate global warming. However, the inadequate PEV charging infrastructure may hinder the further adoption of PEVs in the large-scale traffic network, which calls for effective planning solutions for the charging station (CS) placement. The deployment of charging infrastructure inevitably increases the load on the associated power distribution network. Therefore, we are motivated to develop a comprehensive multi-objective framework for optimal CS placement in a traffic network overlaid by a distribution network, considering multiple stakeholders' interested factors, such as traffic flow, PEV charging time cost, PEV travel distance, and the reliability of the distribution network. We leverage a cross-entropy-based method to solve the optimal CS placement and evaluate our method in a real-world 183-node traffic network in Chengdu, China, overlaid by a 26-region distribution network. It is demonstrated that our work provides various viable planning options favoring different objectives for the stakeholders' decision-making in practice.
Jinhao Li, Yu Hui Yuan, Qiushi Cui, Hao Wang
2023-08-27T14:19:15Z
http://arxiv.org/abs/2308.14117v1
# Cross-Entropy-Based Approach to Multi-Objective Electric Vehicle Charging Infrastructure Planning ###### Abstract Pure electric vehicles (PEVs) are increasingly adopted to decarbonize the transport sector and mitigate global warming. However, the inadequate PEV charging infrastructure may hinder the further adoption of PEVs in the large-scale traffic network, which calls for effective planning solutions for the charging station (CS) placement. The deployment of charging infrastructure inevitably increases the load on the associated power distribution network. Therefore, we are motivated to develop a comprehensive multi-objective framework for optimal CS placement in a traffic network overlaid by a distribution network, considering multiple stakeholders' interested factors, such as traffic flow, PEV charging time cost, PEV travel distance, and the reliability of the distribution network. We leverage a cross-entropy-based method to solve the optimal CS placement and evaluate our method in a real-world 183-node traffic network in Chengdu, China, overlaid by a 26-region distribution network. It is demonstrated that our work provides various viable planning options favoring different objectives for the stakeholders' decision-making in practice. Electric vehicle, charging infrastructure planning, multi-objective optimization, cross-entropy method. ## I Introduction The substantial increase in greenhouse gas emissions in the past decades has led to a steady rise in global temperature, creating worldwide awareness of the adverse impacts of fossil fuel usage [1]. In particular, energy-related carbon emissions generated by conventional combustion energy vehicles contribute to approximately \(15\%\) of global greenhouse gas emissions [2]. Mitigation efforts for global warming must be accelerated more urgently and rapidly. Hence, promoting the adoption of pure electric vehicles (PEVs) has been widely recognized as the most promising and effective solution, aligning with the goal of net-zero transition [3]. Large-scale PEV deployment faces significant challenges due to the lack of publicly available charging infrastructures, even though the government is providing both financial and policy incentives. The placement problem of charging stations (CS) in the traffic network is complex and influenced by multivariate factors, including the traffic throughput, the geospatial location of candidate sites, and the accessibility of charging services for PEV owners [4]. Moreover, the CS placement is closely related to the interests of major stakeholders, e.g., PEV owners and the traffic network operator. Specifically, PEV owners, being the largest group of stakeholders, prioritize accessible charging services within the shortest time in the occurrence of the charging demand as the ideal PEV charging infrastructure planning. The traffic network operator prefers the CS placement at traffic nodes with higher traffic throughput, such that the charging infrastructure is fully utilized. In addition to the stakeholders' interests, the impact of charging demand on the power distribution network brought by associated with the CS placement must be considered, as increasing charge demand can lead to significant voltage deviations and threaten the reliability of the distribution network [5]. PEV charging infrastructure planning has drawn increasing attention in recent studies, but is still in the early stage of development. Masoum et al. 
[6] proposed a smart load management approach for coordinating PEV chargers in distribution feeders, but neglected the impacts of the traffic network. Hajimiraha et al. [7] employed a robust optimization method to primarily address the positive environmental effects of PEV adoption in reducing carbon emissions. Jia et al. in [8] facilitated CS placement by minimizing integrated investment and operation costs of the charging infrastructure. Yao et al. [9] developed a user equilibrium-based traffic assignment model, which maximizes captured traffic flow under energy-related cost constraints. The developed model requires specific information of PEV owners, but such information is difficult to obtain in real practice. Wu et al. [10] proposed a stochastic flow-capturing local model for optimal CS placement in the traffic network. While the above studies tend to focus on either the traffic network or the distribution network, recent works by [11, 12] proposed flow-based models along with both network constraints to determine the optimal locations of charging infrastructure. However, these approaches did not consider the interests of PEV owners, as their primary focus was on maximizing the charging support to the overall traffic flow in the traffic network. Improving the accessibility of charging services for PEV owners is often overlooked but must be considered to facilitate further adoption of PEVs for transportation electrification. As discussed above, existing studies lack a comprehensive framework covering all important factors, e.g., traffic flow, the accessibility of charging services, and the reliability of the distribution network, for the optimal CS placement for charging infrastructure planning. We are motivated to bridge such a research gap by formulating the PEV charging infrastructure planning as a multi-objective optimization problem, including traffic-flow-oriented charging support maximization, PEV charging time cost minimization, PEV travel distance minimization, and distribution network reliability. We use the cross-entropy method, known for its robustness and fast convergence [13], to derive the optimal CS placement solution. Importantly, unlike previous studies that usually provided a single optimal solution for charging infrastructure planning, our work offers various planning options, as stakeholders often need a set of solutions to tradeoff multiple objectives or factors in their decision-making. The main contributions of our work are summarized as follows. * _Multi-Objective Modeling in the Traffic Network Coupled with Distribution Network:_ We model the PEV charging infrastructure planning in a traffic network coupled with a distribution network as a multi-objective optimization problem, considering multiple essential factors, such as traffic flow, PEV charging time cost, PEV travel distance, and the reliability of the distribution network. * _Cross-Entropy-Based Solution Method and Real-World Simulations:_ We leverage the cross-entropy method to solve the multi-objective CS placement problem, which is validated in the real-world \(183\)-node traffic network located in Chengdu, China, overlaid by a \(26\)-region distribution network. * _Providing Multiple Viable Solutions in Practice:_ Our method offers multiple viable CS placement options based on different preferences of the stakeholders, allowing stakeholders to explicitly tradeoff multiple objectives and facilitate better decision making in real practice. The remainder of this paper is organized as follows. 
We formulate the multi-objective PEV charging infrastructure planning model in Section II. Section III introduces the cross-entropy solution method for solving the optimal CS placement problem. A case study based on real-world data is presented in Section IV. Section V concludes this paper. ## II System Model We show the system model for PEV charging infrastructure planning in Fig. 1. We consider a traffic network with a set of traffic nodes denoted by \(\mathcal{N}\). The traffic network is overlaid by a multi-region distribution network, and we denote the region set as \(\mathcal{R}\). Our optimal PEV charging infrastructure planning seeks to deploy a set of CSs, denoted by \(\mathcal{S}\), at traffic nodes, aiming to optimize multiple objectives, e.g., maximize charging support to traffic flow, reduce PEV charging time, minimize PEV travel distance, and mitigate the effects of CS placement on the reliability of the distribution network. We present the detailed models for each objective from Section II-A to II-D, respectively. Fig. 1: The system model of PEV charging infrastructure planning. ### _Maximizing Charging Support to Traffic Flow_ One of the key objectives of the PEV charging infrastructure planning is to fulfill PEVs' charging demand, such that PEVs' travel needs can be well satisfied without worrying about the lack of charging infrastructure. Since charging activities are more likely to occur at traffic nodes carrying significant traffic flows [14], our infrastructure planning needs to consider the traffic flow to the utmost extent and place CSs at the nodes with high traffic. Specifically, we define the traffic flow at the \(n\)-th traffic node as \(f_{n}\), representing the total number of trips that both originate from and end at the \(n\)-th node. The objective of the traffic-flow-oriented charging support maximization problem can be formulated as \[J^{\text{flow}}=\sum_{n\in\mathcal{N}}\mathbb{I}_{n}f_{n}, \tag{1}\] where \(\mathbb{I}_{n}\in\{0,1\}\) is the indicator variable that decides whether a CS should be placed at the \(n\)-th traffic node. ### _PEV Charging Time Cost Minimization_ In addition to providing maximum charging support to the overall traffic flow, effective PEV charging infrastructure planning also needs to consider the interests of PEV owners, address the known range anxiety barrier of charging services, and ensure high accessibility of the charging sites [11]. As a result, when deploying CSs, careful attention should be paid to the time needed for charging services, including the PEV traveling time, queuing time, and actual charging time. #### II-B1 PEV Traveling Time Given the limited battery capacity of the PEV, it is essential to place CSs at the most appropriate nodes to ensure convenience for the PEV owners with the shortest traveling time. The PEV traveling time for each traffic node can be estimated using its average traveling time to all other nodes in the traffic network [15]. We let \(v\) denote the constant average travel speed and \(d_{nm}\) denote the geospatial distance between the \(n\)-th and the \(m\)-th nodes. To account for traffic congestion during actual charging trips, we also incorporate the normalized traffic flow between the \(m\)-th and the \(n\)-th nodes, i.e., \(\hat{f}_{n}+\hat{f}_{m}\). The normalized traffic flow is defined as \(\hat{f}_{n}=f_{n}/F_{\text{max}}\), where \(F_{\text{max}}\) is the maximum traffic flow in the traffic network.
The PEV traveling time for the \(n\)-th traffic node can be defined as \[t_{n}^{\text{travel}}=\frac{1}{v\left(|\mathcal{N}|-1\right)}\sum_{m\in\mathcal{N}\setminus\{n\}}d_{nm}\left(1+\hat{f}_{n}+\hat{f}_{m}\right), \tag{2}\] which averages the congestion-weighted distance to all other traffic nodes at the average travel speed \(v\). Together with the queuing time and the actual charging time at the deployed CSs, this traveling time forms the charging time cost objective \(J^{\text{ch}}\); the PEV travel distance and the reliability of the distribution network are captured by the objectives \(J^{\text{dis}}\) and \(J^{\text{DN}}\), respectively. ### _Multi-Objective Problem Formulation_ Combining the four objectives through a weighted sum converts the multi-objective optimization into a single-objective optimization problem formulated as \[\min\ \alpha^{\text{flow}}\vec{J}^{\text{flow}}+\alpha^{\text{ch}}\vec{J}^{\text{ch}}+\alpha^{\text{dis}}\vec{J}^{\text{dis}}+\alpha^{\text{DN}}\vec{J}^{\text{DN}} \tag{12}\] \[\textbf{s.t.}\ \sum_{n\in\mathcal{N}}\mathbb{I}_{n}\leq K, \tag{13}\] \[\mathbb{I}_{n}\in\{0,1\},\quad\forall n\in\mathcal{N}, \tag{14}\] where the objectives \(J^{\text{flow}}\), \(J^{\text{ch}}\), \(J^{\text{dis}}\), and \(J^{\text{DN}}\) are normalized into the range \([0,1]\) and then denoted by \(\vec{J}^{\text{flow}}\), \(\vec{J}^{\text{ch}}\), \(\vec{J}^{\text{dis}}\), and \(\vec{J}^{\text{DN}}\), respectively. In particular, since the objective \(J^{\text{flow}}\) aims to maximize charging support with respect to the traffic flow, we normalize it differently from the other three minimized objectives, as \[\vec{J}^{\text{flow}}=\frac{J^{\text{flow}}_{\text{max}}-J^{\text{flow}}}{J^{\text{flow}}_{\text{max}}-J^{\text{flow}}_{\text{min}}}. \tag{15}\] The constraint in Eq. (13) indicates that the number of CSs planned in the traffic network cannot exceed the budget of the deployed CSs, which is denoted by \(K\). Eq. (14) describes the decision variable of the CS placement problem, i.e., \(\mathbb{I}_{n}\), which determines whether to place a CS at the \(n\)-th traffic node. ## III Cross-Entropy-Based Solution Method To solve the PEV charging infrastructure planning problem formulated in Section II-E, we propose a cross-entropy solution method that balances different objectives to achieve optimal CS placement. Compared to classical and evolutionary algorithms, the cross-entropy approach has several advantages, including fast convergence, strong robustness, insensitivity to initial points, and most importantly, better interpretability in optimization results [13]. We denote the objective value of the planning problem defined in Eq. (12) as \(\gamma=J(\mathcal{S})\). To minimize the objective \(\gamma\), the cross-entropy method first randomizes a family of probability distribution functions (PDFs) for the traffic nodes, indicating the probability of being placed with a CS. These PDFs are denoted as \(f(\mathcal{S};\mathbf{p})\), where \(\mathbf{p}\) is the probability vector defined as \(\mathbf{p}=[p_{1},p_{2},\cdots,p_{|\mathcal{N}|}]\). The initialized PDFs are then updated in an adaptive manner to derive the optimal probability for each traffic node to be placed with a CS. Specifically, in the \(t\)-th iteration, we first set an adaptively updating parameter denoted by \(\gamma_{t}^{\text{ada}}\).
The probability of a possible planning scheme whose objective value is lower than the threshold \(\gamma_{t}^{\text{ada}}\) can be formulated as \[l(\gamma)=P_{\mathbf{p}_{t}}\left\{J\left(\mathcal{S}\right)\leq\gamma_{t}^{\text{ada}}\right\}=\mathbb{E}_{\mathbf{p}_{t}}\mathbb{I}\left(J\left(\mathcal{S}\right)\leq\gamma_{t}^{\text{ada}}\right), \tag{16}\] where \(\mathbb{E}_{\mathbf{p}_{t}}\) is the expectation of the PDFs in the \(t\)-th iteration and \(\mathbb{I}(J(\mathcal{S})\leq\gamma_{t}^{\text{ada}})\) is an indicator variable that is equal to \(1\) only if \(J(\mathcal{S})\leq\gamma_{t}^{\text{ada}}\). The primary objective of the cross-entropy method is to minimize the cross entropy or variance of the defined probability in Eq. (16). To achieve this, the cross-entropy method updates the PDFs using the elite solutions, which are part of candidate solutions with the lowest objective values, to guide the search towards the global optimum in subsequent iterations [17]. Additionally, the objective value of the worst solution among the elite solutions is set as the adaptively updating parameter \(\gamma_{t}^{\text{ada}}\) in each iteration. We denote the candidate population size, i.e., the number of generated CS placement plans, as \(N^{\text{CE}}\). The proportion of elite solutions is denoted by \(\delta\), and the set of elite solutions is denoted by \(\mathbf{\mathcal{S}}_{t}^{\text{elite}}\). The updating process of the probability parameter for the \(n\)-th traffic node in the \(t\)-th iteration can be formulated as \[p_{t,n}=\frac{1}{\delta N^{\text{CE}}}\sum_{\mathcal{S}\in\mathbf{\mathcal{S}}_{t}^{\text{elite}}}\mathbb{I}\left(J\left(\mathcal{S}\right)\leq\gamma_{t}^{\text{ada}}\right)\mathbb{I}_{n}. \tag{17}\] We set the maximum iteration number as \(T^{\text{CE}}_{\text{max}}\) and assume that the PDF in the cross-entropy method follows the Bernoulli distribution \(\mathcal{B}(\mathbf{p})\) with the success probability vector \(\mathbf{p}\). The detailed algorithmic procedure of our cross-entropy-based planning method is presented in Algorithm 1. By executing the cross-entropy method, we can obtain the optimal solution for PEV infrastructure planning in the elite solutions with the lowest objective value. More importantly, the iteration process of the cross-entropy method clearly reveals how our algorithm dynamically balances different objectives to achieve optimal planning, providing insights in practice, which are analyzed in detail in Section IV. ``` Initialize the probability vector \(\mathbf{p}_{0}\). for \(t=1,\cdots,T^{\text{CE}}_{\text{max}}\) do Generate \(N^{\text{CE}}\) CS placement plans, where each node is selected based on its corresponding Bernoulli distribution. Calculate and rank the objective value \(\gamma\) for all plans. Update the adaptively updating parameter \(\gamma_{t}^{\text{ada}}\). Update the probability parameter for each node's Bernoulli distribution \(\mathcal{B}(p_{t,n})\). end for ``` **Algorithm 1** The Cross-Entropy-Based Solution Method for PEV Charging Infrastructure Planning ## IV Case Studies ### _Simulation Settings_ We use realistic traffic flow data collected from a real-world \(183\)-node traffic network located in Chengdu, China. The traffic network is coupled with a simulated \(26\)-region distribution network. We illustrate traffic nodes located in each region of the distribution network with a specific color, as shown in Fig. 2. The time horizon of utilized traffic data is \(7\) days.
Fig. 2: The layout of the traffic network and distribution network in Chengdu. We assume that the SoC of PEV when connected to a charger, denoted as \(SoC^{\text{ch}}\), follows a uniform distribution \(\mathcal{U}(0.05,0.95)\). We also set a stopping criterion for the cross-entropy method, specifying that the iteration is terminated when the objective difference of the optimal solutions in two consecutive iterations is smaller than \(10^{-5}\). Our algorithm is implemented using Python on an Intel i7-11800H 2.30 GHz/32 GB laptop. The initialized parameters of the cross-entropy-based method are presented in Table I. The weight coefficients of the objectives, namely \(\alpha^{\text{flow}}\), \(\alpha^{\text{ch}}\), \(\alpha^{\text{dis}}\), and \(\alpha^{\text{DN}}\), are all set to a default value of \(0.25\), indicating their equal importance. However, in practice, the stakeholders, e.g., traffic network operator, distribution network operator, and PEV owners, may have different preferences for specific aspects of multi-objectives in the PEV charging infrastructure planning decision-making. For example, different weights can be assigned to multi-objectives, e.g., supporting more PEVs with charging demand, improving the charging service with shorter charging time cost, or focusing on the reliability of the distribution network. In our study, we analyze the optimal CS placement based on stakeholders' preferences, which are categorized into six scenarios summarized in Table II. The default weight setting, i.e., \(0.25\) for each objective, is the baseline Scenario for comparison. In contrast, we assign larger weights, such as \(0.5\) and \(0.7\), to \(\alpha^{\text{flow}}\), \(\alpha^{\text{ch}}\), and \(\alpha^{\text{DN}}\), to prioritize the corresponding objectives, respectively. ### _Charging Infrastructure Planning Solutions in Different Scenarios_ The simulation results obtained in the aforementioned seven scenarios, including the baseline scenario and additional six scenarios, are presented in Table III. The optimal objective value \(\gamma_{\text{min}}\), the number of candidates for CS placement, the captured traffic flow, additional power consumption for PEV charging services, and average PEV travel distance are included in Table III. The corresponding optimal CS placement solutions in seven scenarios are depicted in Fig. 3 (for the baseline scenario) and Fig. 4 (for Scenarios \(1\) to \(6\)) for cross comparisons. The results in Table III reveal that the optimal planning solutions from each scenario successfully achieve their corresponding favored objectives, e.g., supporting more traffic flow for Scenarios \(1\) and \(2\), while the baseline Scenario seems to balance each objective for the optimal CS placement. The detailed analysis is presented below. Fig. 3: The optimal CS placement in the baseline scenario. #### IV-B1 Favoring Traffic-Flow-Oriented Charging Support Maximization When favoring the traffic flow objective \(J^{\text{flow}}\), our PEV infrastructure planning tends to select more candidate sites (\(37\) and \(39\) CS-placed traffic nodes in Scenarios \(1\) and \(2\), respectively, as shown in Table III) compared to other scenarios. The optimal planning in Scenarios \(1\) and \(2\) aims at supporting more PEVs with potential charging demand, i.e., \(78.47\%\) and \(83.66\%\) of traffic flow associated charging demand is well supported in these two traffic-flow-favored scenarios, respectively. Consequently, the selected candidate nodes tend to be densely clustered near the city center of Chengdu (indicated as the black star in Fig. 4), as depicted in Fig. 4a and 4b. Moreover, by measuring the pair-wise geospatial distances between non-selected traffic nodes and the candidate sites, we can observe that the traffic-flow-favored planning solutions offer the minimum average distances among all simulation scenarios, i.e., \(30\) and \(29\) kilometers for Scenarios \(1\) and \(2\), respectively. However, such dense planning increases charging demand in distribution network regions near the city center, with considerable additional power consumption (more than \(10,000\) MWh). Moreover, favoring traffic flow associated charging demand support results in low accessibility for potential PEV owners residing in outer city regions. #### IV-B2 Favoring Charging Time Cost Minimization Compared to other scenarios favoring \(J^{\text{flow}}\) or \(J^{\text{DN}}\), the CS placement results in Scenarios \(3\) and \(4\) reveal that the candidate sites tend to be evenly distributed in the traffic network, with relatively fewer traffic nodes being selected, e.g., \(20\) and \(13\) candidates, respectively. Such planning solutions, though capturing lower traffic flow, can significantly reduce the PEV charging time, meeting the increasing charging demand of PEV owners within a considerably shorter charging time cost (\(89\) and \(72\) minutes, respectively, as presented in Table III). As a result, the quality of service provided by the CSs is improved. #### IV-B3 Favoring Distribution Network Reliability The optimal CS placement solutions in Scenarios \(5\) and \(6\) generate the minimum additional power consumption among all simulated scenarios, with only \(10\) traffic nodes selected as candidate sites sparsely distributed outside Chengdu City. However, such planning solutions present poor performance in supporting traffic flow associated charging demand (e.g., \(0.03\%\) and \(0.02\%\)), leading to the average charging time of \(129\) and \(142\) minutes and the average travel distance of \(47\) and \(99\) kilometers in Scenarios \(5\) and \(6\), respectively. These simulation results reveal that CS placement favoring distribution network reliability is less beneficial for the majority of PEV owners as well as the traffic network operator, indicating a clear interest conflict among the stakeholders in PEV charging infrastructure planning. Hence, a trade-off between satisfying the increasing charging demand and securing the distribution network is needed to balance the interests of major stakeholders. Fig. 4: The optimal CS placement solutions in Scenarios \(1\) to \(6\). ### _Trade-off among CS Planning Objectives_ As discussed in Section IV-B, we see that compared to favoring one of the objectives in the multi-objective CS placement problem, treating these objectives with the same weight, i.e., the baseline scenario, appears to strike an effective balance among the four objectives of the charging infrastructure planning. As shown in Fig. 3, some of the selected candidates are located around the city center while other CSs are placed relatively far away from the city. Such a geospatial allocation pattern captures a fair proportion of traffic flow on the one hand, while alleviating excessive load stress on regions near the city center on the other hand.
It is worth noting that the main focus of our work is not to provide a single optimal PEV charging infrastructure planning solution for a specific traffic network coupled with a distribution network. Instead, our study aims to offer a set of planning options considering multiple essential factors of the CS placement and the potentially conflicting interests in stakeholders' decision-making.

## V Conclusion

In this paper, we proposed a cross-entropy-based method for optimal PEV charging infrastructure planning. We formulated the CS placement problem in a traffic network coupled with an electric distribution network as a multi-objective optimization problem. We incorporated essential factors from the perspectives of the traffic network operator, PEV owners, and the distribution network operator, including traffic-flow-oriented charging support maximization, PEV charging time cost minimization, PEV travel distance minimization, and distribution network reliability optimization. The optimal planning solutions were obtained using the cross-entropy algorithm. We validated our method on a real-world \(183\)-node traffic network in Chengdu, China, which is coupled with a \(26\)-region distribution network. Our method offered various planning options for the stakeholders' decision-making in practice. The simulation results provided three noteworthy insights: 1) overemphasizing the charging support for PEVs poses a challenge for the distribution network in managing the charging demand in regions with higher CS density; 2) reducing the average charging time cost is crucial to enhance the accessibility of charging infrastructure for PEV owners; 3) a trade-off should be reached to balance the interests of the distribution network operator, the traffic network operator, and PEV owners. Our work serves as a comprehensive case study of solving a real-world PEV charging infrastructure planning problem and highlights the importance of effective CS placement considering multiple stakeholders' interests.
2307.04581
Galerkin-Bernstein Approximations of the System of Time Dependent Nonlinear Parabolic PDEs
The purpose of the research is to find the numerical solutions to the system of time dependent nonlinear parabolic partial differential equations (PDEs) utilizing the Modified Galerkin Weighted Residual Method (MGWRM) with the help of modified Bernstein polynomials. An approximate solution of the system has been assumed in accordance with the modified Bernstein polynomials. Thereafter, the modified Galerkin method has been applied to the system of nonlinear parabolic PDEs and has transformed the model into a time dependent ordinary differential equations system. Then the system has been converted into the recurrence equations by employing backward difference approximation. However, the iterative calculation is performed by using the Picard Iterative method. A few renowned problems are then solved to test the applicability and efficiency of our proposed scheme. The numerical solutions at different time levels are then displayed numerically in tabular form and graphically by figures. The comparative study is presented along with L2 norm, and L infinity norm.
Hazrat Ali, Nilormy Gupta Trisha, Md. Shafiqul Islam
2023-07-10T14:19:26Z
http://arxiv.org/abs/2307.04581v1
**Galerkin-Bernstein Approximations of the System of Time Dependent Nonlinear Parabolic PDEs** ## Abstract The purpose of the research is to find the numerical solutions to the system of time dependent nonlinear parabolic partial differential equations (PDEs) utilizing the Modified Galerkin Weighted Residual Method (MGWRM) with the help of modified Bernstein polynomials. An approximate solution of the system has been assumed in accordance with the modified Bernstein polynomials. Thereafter, the modified Galerkin method has been applied to the system of nonlinear parabolic PDEs and has transformed the model into a time dependent ordinary differential equations system. Then the system has been converted into the recurrence equations by employing backward difference approximation. However, the iterative calculation is performed by using the Picard Iterative method. A few renowned problems are then solved to test the applicability and efficiency of our proposed scheme. The numerical solutions at different time levels are then displayed numerically in tabular form and graphically by figures. The comparative study is presented along with \(L_{2}\) norm, and \(L_{\infty}\) norm. **Keywords:** Parabolic PDE System, Modified Galerkin Method, Modified Bernstein Polynomial, Backward Difference Method, Gray-Scott Model ## 1 Introduction Reaction-diffusion systems have been extensively studied during the \(20^{th}\) century. The study of the reaction-diffusion system reveals that different species have interactions with one another and that after these interactions, new species are created via chemical reactions. The solution of the reaction-diffusion system shows the chemical reaction's underlying mechanism and the various spatial patterns of the chemicals involved. Animal coats and skin coloration have been linked to reaction-diffusion processes, which have been considered to constitute a fundamental basis for processes associated with morphogenesis in biology. There are numerous notable examples of coupled reaction-diffusion systems such as the Brusselator model, Glycolysis model, Schnackenberg model, Gray-Scott model, etc. With the help of the system size expansion, a stochastic Brusselator model has been suggested and investigated in the study cited in [1]. The reaction-diffusion Brusselator model has been addressed by Wazwaz et al. through the decomposition technique [2]. Because of its potential to provide a close analytical solution, the fractional-order Brusselator model was studied by Faiz et al [3]. The Brusselator system stability of a reaction-diffusion cell as well as the Hopf bifurcation analysis of the system have been detailed by Alfifi [4]. Qamar has analyzed the dynamics of the discrete-time Brusselator model with the help of the Euler forward and nonstandard difference schemes [5]. The research article cited in [6] has been prepared by investigating the numerical analysis of the Glycolysis model using a well-known finite difference scheme. Adel et al [7] have examined the synchronization problem of the Glycolysis reaction-diffusion model and designed a novel convenient control law. David et al [8] have analyzed the stability of turing patterns of the Schnackenberg model. Liu et al [9] have developed the bifurcation analysis of the aforementioned model. Khan et al. [10] have established a scheme for the solution of the fractional order Schnackenberg reaction-diffusion system. 
Numerical explorations have been applied to analyze the pattern formations of the model in the research article cited in [11]. Gray and Scott [12] were the first to introduce the Gray-Scott model. They have proposed this model as an alternative to the autocatalytic model of Glycolysis [13]. For this model, Pearson [14] has employed experimental studies to depict several sophisticated spot-type structures. Mazin et al. [15] have conducted an experiment using a computer simulation to investigate a range of far-from-equilibrium occurrences that emerge in a bistable Gray-Scott model. Many renowned authors [16, 17] have evaluated the preceding model in which self-replicating structures have been noticed. McGough et al. [18] have conducted research on the bifurcation analysis of the patterns that are depicted in the model. In the research cited in [19], the linear stability and periodic stationary solutions of this model have been investigated. Some analytical results of this model have also been explored [20]. Several prominent authors have studied the spatiotemporal chaos of the model in the research studies cited in [21] and [22]. Furthermore, Wei [23] has analyzed the pattern formation of the two-dimensional Gray-Scott model. The model has also been explored by Kai et al. [24] using an innovative technique known as the second-order explicit implicit methodology. In recent years, the nonlinear Galerkin finite element approach has become increasingly prevalent as a means to investigate the model [25, 26]. Mach [27] has performed an in-depth examination of the quantitative evaluation of the model's numerical solution. In references [28] and [29], the Gray-Scott reaction-diffusion system has been the subject of extensive wave modeling studies by eminent scholars. The simulation of the coupled model has been carried out by Owolabi et al. [30] using the higher-order Runge-Kutta method. The well-known Gray-Scott model's numerical findings have been calculated using the help of the hyperbolic B-spline [31]. In order to analyze the ionic version of the model while it is being affected by an electric field, the Galerkin method has been deployed [32]. With the use of the hybrid-asymptotic numerical method, Chen et al. [33] have investigated the model's dynamic behavior and stability. In the research study cited in [34], special polynomials have been employed to numerically solve the Gray-Scott model. Han et al. [35] have conducted an exhaustive investigation on the three-dimensional Gray-Scott model. In the process of assessing the model, the cubic B-spline has proven to be of considerable use by Mittal et al [36]. In the disciplines of engineering and mathematical physics, the Weighted Residual Method is an approximation method that can be leveraged to resolve problems. Analysis of structures, thermal expansion, stream of fluids, movement of masses, and the electromagnetic potential, etc. are examples of prominent problem fields of concern. Several distinct Weighted Residual Method variations are within our reach. The Galerkin Weighted Residual Method (also known as GWRM) has been put into practice for centuries, long before the invention of computers. It is generally agreed that this strategy is one of the best and most often used approaches available. Lewis and Ward have provided a comprehensive overview of the process in the article that is referenced in [37]. This methodology has been effectively implemented in the well-known Black-Scholes model by Hossan et al. [38]. Shirin et al. 
[39] have employed the Galerkin method in conjunction with other special polynomials to analyze the Fredholm equations. In the research referred to in [40], the approach was utilized to solve boundary value problems. In addition, this method has been used to perform a numerical calculation of the eigenvalues associated with the Sturm-Liouville problem [41]. There have been several successful uses of this method for problems involving metal beams and polygonal ducts with rounded edges [42, 43]. The objective of this study is to employ the modified Galerkin Weighted Residual Method in conjunction with the appropriate special polynomials to numerically evaluate the one-dimensional reaction-diffusion systems. Based on our best information, this study is presently unavailable. In addition to that, the study has provided the validation necessary to use the approach in one-dimensional reaction-diffusion systems. The main merit and advantage of the study are that by solving this type of system of equations, we will be able to analyze the behavior of the ecological system and forecast its future. The article is split up into four sections. Section 2 provides a detailed explanation of the formulation of our proposed method to solve the system of nonlinear parabolic partial differential equations. In the third section, the approach's implications are shown while analyzing the aforementioned system. Numerical and graphical representations are included here as well. The fourth section contains some concluding remarks and a general discussion. ## 2 Mathematical Formulation Let us commence with the following system over the domain \([-L,L]\) \[\left.\begin{array}{l}\dfrac{\partial M}{\partial t}=\varepsilon_{1}\dfrac{ \partial^{2}M}{\partial x^{2}}-f(M,N)+p(1-M)\\ \dfrac{\partial N}{\partial t}=\varepsilon_{2}\dfrac{\partial^{2}N}{ \partial x^{2}}+f(M,N)-(p+q)N\end{array}\right\} \tag{2.1}\] The boundary and initial conditions are as follows: \[\left.\begin{array}{l}M(-L,t)=M(L,t)=\theta_{0}\\ N(-L,t)=N(L,t)=\gamma_{0}\end{array}\right\} \tag{2.2}\] and \[\left.\begin{array}{l}M(x,0)=M_{0}(x)\\ N(x,0)=N_{0}(x)\end{array}\right\} \tag{2.3}\] Let us assume the approximate solutions of System (2.1) be of the form \[\left.\begin{array}{l}\widetilde{M}(x,t)=\theta_{0}+\sum_{j=0}^{n}c_{j}(t)B _{j}(x)\\ \widetilde{N}(x,t)=\gamma_{0}+\sum_{j=0}^{n}d_{j}(t)B_{j}(x)\end{array}\right\} \tag{2.4}\] where \(B_{j}\)'s are the modified Bernstein polynomials and \(c_{j}\) and \(d_{j}\) are the coefficients dependent on time. The first terms of the approximate solutions (2.4) have come from the boundary conditions of the system. The modified Bernstein polynomials are defined as follows: \[B_{n,m}(x)=\binom{m}{n}\dfrac{(x-L)^{n}(U-x)^{m-n}(x-L)(U-x)}{(U-L)^{m}}\ \ \ \ \ \ \ \ \ n=0,1,2,...,m\] where \(U\) & \(L\) are the upper and lower limits of \(x\). The last terms of Solution (2.4) will vanish at the boundary points. 
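The following short sketch evaluates the modified Bernstein basis exactly as defined above, so the boundary behavior can be checked numerically; the interval \([0,2]\) and degree \(m=6\) (giving the first 7 basis functions used later in the paper) are chosen only for illustration.

```python
import math

def bernstein_mod(n, m, x, L, U):
    """Modified Bernstein polynomial B_{n,m}(x) on [L, U], as defined above;
    the extra (x - L)(U - x) factor makes every basis function vanish at both ends."""
    return (math.comb(m, n)
            * (x - L) ** n * (U - x) ** (m - n)
            * (x - L) * (U - x) / (U - L) ** m)

L, U, m = 0.0, 2.0, 6
print([round(bernstein_mod(n, m, 0.0, L, U), 6) for n in range(m + 1)])  # all zero at x = L
print([round(bernstein_mod(n, m, 2.0, L, U), 6) for n in range(m + 1)])  # all zero at x = U
print([round(bernstein_mod(n, m, 1.0, L, U), 6) for n in range(m + 1)])  # nonzero interior values
```

Since every \(B_{j}\) vanishes at the endpoints, the trial solution (2.4) automatically satisfies the boundary conditions through its first terms, which is the property used in the residual construction that follows.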
Therefore, the residual functions are \[\left.\begin{aligned} R_{1}(x,t)&=\frac{\partial \widetilde{M}}{\partial t}-\varepsilon_{1}\frac{\partial^{2}\widetilde{M}}{ \partial x^{2}}+f(\widetilde{M},\widetilde{N})-p(1-\widetilde{M})\\ R_{2}(x,t)&=\frac{\partial\widetilde{N}}{\partial t }-\varepsilon_{2}\frac{\partial^{2}\widetilde{N}}{\partial x^{2}}-f( \widetilde{M},\widetilde{N})+(p+q)\widetilde{N}\end{aligned}\right\} \tag{2.5}\] Now we form the residual equations as: \[\int_{-L}^{L}R_{1}(x,t)B_{i}(x)dx =0 \tag{2.6}\] \[\int_{-L}^{L}R_{2}(x,t)B_{i}(x)dx =0 \tag{2.7}\] From the first residual equation, we can write \[\int_{-L}^{L}\Bigg{[}\frac{\partial\widetilde{M}}{\partial t}- \varepsilon_{1}\frac{\partial^{2}\widetilde{M}}{\partial x^{2}}+f(\widetilde{ M},\widetilde{N})-p(1-\widetilde{M})\Bigg{]}B_{i}(x)dx=0 \tag{2.8}\] Now we apply integration by parts in the above equation \[\int_{-L}^{L}\frac{\partial\widetilde{M}}{\partial t}B_{i}dx+ \int_{L}^{L}\varepsilon_{1}\frac{\partial\widetilde{M}}{\partial x}\frac{ \partial B_{i}}{\partial x}dx+\int_{-L}^{L}f(\widetilde{M},\widetilde{N})B_{i }dx-\int_{-L}^{L}p(1-\widetilde{M})B_{i}dx=\varepsilon_{1}\Big{[}\frac{ \partial\widetilde{M}}{\partial x}B_{i}\Big{]}_{-L}^{L} \tag{2.9}\] Then we substitute solution (2.4) in Equation (2.9). Therefore, the equation becomes, \[\int_{-L}^{L}\frac{\partial}{\partial t}\Big{(}\theta_{0}+\sum_{ j=0}^{n}c_{j}B_{j}\Big{)}B_{i}dx+\int_{-L}^{L}\varepsilon_{1}\frac{\partial}{ \partial x}\Big{(}\theta_{0}+\sum_{j=0}^{n}c_{j}B_{j}\Big{)}\frac{\partial B_{ i}}{\partial x}dx+\int_{-L}^{L}f\Big{(}\theta_{0}+\sum_{j=0}^{n}c_{j}B_{j}, \gamma_{0}+\sum_{j=0}^{n}d_{j}B_{j}\Big{)}B_{i}dx\\ -\int_{-L}^{L}p\Big{(}1-\big{(}\theta_{0}+\sum_{j=0}^{n}c_{j}B_{ j}\big{)}\Big{)}B_{i}dx=\varepsilon_{1}\Big{[}\frac{\partial}{\partial x}\Big{(} \theta_{0}+\sum_{j=0}^{n}c_{j}B_{j}\Big{)}B_{i}\Big{]}_{-L}^{L}\] or, \(\int_{-L}^{L}\frac{\partial\theta_{0}}{\partial t}B_{i}dx+\int_{-L}^{L}\sum_{ j=0}^{n}\frac{\partial c_{j}}{\partial t}B_{j}B_{i}dx+\int_{-L}^{L} \varepsilon_{1}\frac{\partial\theta_{0}}{\partial x}\frac{\partial B_{i}}{ \partial x}dx+\sum_{j=0}^{n}c_{j}\int_{-L}^{L}\varepsilon_{1}\frac{\partial B _{j}}{\partial x}\frac{\partial B_{i}}{\partial x}dx\) \[+\int_{-L}^{L}f\Big{(}\theta_{0}+\sum_{j=0}^{n}c_{j}B_{j}, \gamma_{0}+\sum_{j=0}^{n}d_{j}B_{j}\Big{)}B_{i}dx-\int_{-L}^{L}pB_{i}dx+\int_ {-L}^{L}p\theta_{0}B_{i}dx+\sum_{j=0}^{n}c_{j}\int_{-L}^{L}pB_{j}B_{i}dx\\ =\varepsilon_{1}\Bigg{[}\frac{\partial\theta_{0}}{\partial x}B_{ i}\Bigg{]}_{-L}^{L}+\varepsilon_{1}\Bigg{[}\sum_{j=0}^{n}c_{j}\frac{\partial B _{j}}{\partial x}B_{i}\Bigg{]}_{-L}^{L}\] This finally becomes \[\int_{-L}^{L}\frac{\partial\theta_{0}}{\partial t}B_{i}dx+\int_{-L }^{L}\sum_{j=0}^{n}\frac{\partial c_{j}}{\partial t}B_{j}B_{i}dx+\int_{-L}^{L} \varepsilon_{1}\frac{\partial\theta_{0}}{\partial x}\frac{\partial B_{i}}{ \partial x}dx+\sum_{j=0}^{n}c_{j}\int_{-L}^{L}\varepsilon_{1}\frac{\partial B_{ j}}{\partial x}\frac{\partial B_{i}}{\partial x}dx\\ +\int_{-L}^{L}\Gamma\Big{(}\theta_{0},\gamma_{0},\sum_{k=0}^{n}c_{ k}B_{k},\sum_{l=0}^{n}d_{l}B_{l}\Big{)}B_{i}dx+\sum_{j=0}^{n}d_{j}\int_{-L}^{L} \Omega\Big{(}\theta_{0},\gamma_{0},\sum_{k=0}^{n}c_{k}B_{k},\sum_{l=0}^{n}d_{ l}B_{l}\Big{)}B_{j}B_{i}dx\\ -\int_{-L}^{L}pB_{i}dx+\int_{-L}^{L}p\theta_{0}B_{i}dx+\sum_{j=0} ^{n}c_{j}\int_{-L}^{L}pB_{j}B_{i}dx=\varepsilon_{1}\Bigg{[}\frac{\partial \theta_{0}}{\partial x}B_{i}\Bigg{]}_{-L}^{L}+\varepsilon_{1}\Bigg{[}\sum_{j=0}^ {n}c_{j}\frac{\partial B_{j}}{\partial 
x}B_{i}\Bigg{]}_{-L}^{L} \tag{2.10}\] The first terms on both sides and the third term on the left-hand side of Equation (2.10) become zero because of the boundary conditions. Therefore, the equation reduces to \[\sum_{j=0}^{n}\frac{dc_{j}}{dt}\int_{-L}^{L}B_{j}B_{i}dx+\sum_{j=0}^{n}c_{j}\Bigg{(}\int_{-L}^{L}\varepsilon_{1}\frac{dB_{j}}{dx}\frac{dB_{i}}{dx}dx+\int_{-L}^{L}pB_{j}B_{i}dx-\varepsilon_{1}\Big{[}\frac{dB_{j}}{dx}B_{i}\Big{]}_{-L}^{L}\Bigg{)}\] \[+\sum_{j=0}^{n}d_{j}\int_{-L}^{L}\Omega\Big{(}\theta_{0},\gamma_{0},\sum_{k=0}^{n}c_{k}B_{k},\sum_{l=0}^{n}d_{l}B_{l}\Big{)}B_{j}B_{i}dx=-\int_{-L}^{L}\Gamma\Big{(}\theta_{0},\gamma_{0},\sum_{k=0}^{n}c_{k}B_{k},\sum_{l=0}^{n}d_{l}B_{l}\Big{)}B_{i}dx\] \[+\int_{-L}^{L}pB_{i}dx-\int_{-L}^{L}p\theta_{0}B_{i}dx \tag{2.11}\] The derivative and non-derivative terms of Equation (2.11) can be summarized via standard matrix notation as follows: \[[C_{1}]\Big{\{}\frac{dc_{j}}{dt}\Big{\}}+[K_{1}]\{c_{j}\}+[K_{2}]\{d_{j}\}=[F_{1}] \tag{2.12}\] where \[C_{1_{ij}}=\int_{-L}^{L}B_{j}B_{i}dx\] \[K_{1_{ij}}=\int_{-L}^{L}\varepsilon_{1}\frac{dB_{j}}{dx}\frac{dB_{i}}{dx}dx+\int_{-L}^{L}pB_{j}B_{i}dx-\varepsilon_{1}\Big{[}\frac{dB_{j}}{dx}B_{i}\Big{]}_{-L}^{L}\] \[K_{2_{ij}}=\int_{-L}^{L}\Omega\Big{(}\theta_{0},\gamma_{0},\sum_{k=0}^{n}c_{k}B_{k},\sum_{l=0}^{n}d_{l}B_{l}\Big{)}B_{j}B_{i}dx\] \[F_{1_{i}}=-\int_{-L}^{L}\Gamma\Big{(}\theta_{0},\gamma_{0},\sum_{k=0}^{n}c_{k}B_{k},\sum_{l=0}^{n}d_{l}B_{l}\Big{)}B_{i}dx+\int_{-L}^{L}pB_{i}dx-\int_{-L}^{L}p\theta_{0}B_{i}dx\] Here, \(K_{1}\) and \(K_{2}\) are \(n\times n\) matrices, \(C_{1}\) is an \(n\times n\) matrix, and \(F_{1}\) is an \(n\times 1\) matrix. The matrices \(K_{1}\) and \(K_{2}\) are called stiffness matrices. The other two, \(C_{1}\) and \(F_{1}\), are called the forced matrix and the load vector, respectively.
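Before the time discretization, the entries of these matrices can be obtained by straightforward numerical quadrature. The sketch below assembles the mass-type matrix \(\int_{-L}^{L}B_{j}B_{i}\,dx\) and the stiffness-type matrix containing the \(\varepsilon_{1}\) and \(p\) terms for the modified Bernstein basis; the boundary bracket drops out because every \(B_{i}\) vanishes at the endpoints. The interval, the degree \(m=6\), the test-problem-like parameter values, and the finite-difference derivative are illustrative assumptions, not the paper's actual implementation.

```python
import math
import numpy as np
from scipy.integrate import quad

lo, up, m = 0.0, 2.0, 6          # interval and degree (assumed, giving 7 basis functions)
eps1, p = 0.01, 0.09             # parameter values in the style of Test Problem 1

def B(n, x):
    """Modified Bernstein basis function (same definition as above)."""
    return (math.comb(m, n) * (x - lo) ** n * (up - x) ** (m - n)
            * (x - lo) * (up - x) / (up - lo) ** m)

def dB(n, x, h=1e-6):
    """Central-difference derivative; an analytic derivative could be used instead."""
    return (B(n, x + h) - B(n, x - h)) / (2 * h)

nb = m + 1
C1 = np.zeros((nb, nb))          # mass-type matrix, entries int B_j B_i dx
K1 = np.zeros((nb, nb))          # stiffness-type matrix with the eps1 and p terms
for i in range(nb):
    for j in range(nb):
        C1[i, j] = quad(lambda x: B(j, x) * B(i, x), lo, up)[0]
        # the boundary term eps1*[dB_j/dx * B_i] vanishes since B_i(lo) = B_i(up) = 0
        K1[i, j] = quad(lambda x: eps1 * dB(j, x) * dB(i, x) + p * B(j, x) * B(i, x),
                        lo, up)[0]

print(C1.shape, K1.shape, f"C1[0,0] = {C1[0,0]:.6f}, K1[0,0] = {K1[0,0]:.6f}")
```

The nonlinear blocks (\(K_{2}\), \(K_{3}\), \(F_{1}\), \(F_{2}\)) are assembled in the same way, except that their integrands also depend on the current coefficient vectors through \(\Gamma\), \(\Omega\), \(\Pi\), and \(\Phi\), so they must be recomputed inside every Picard iteration.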
Therefore, we apply the backward difference method on the first term of Equation (2.12) and rearrange the resulting terms as follows: \[[C_{1}]\Big{\{}\frac{c_{j}-c_{j-1}}{\Delta t}\Big{\}}+[K_{1}]\{c _{j}\}+[K_{2}]\{d_{j}\}=[F_{1}]\] \[Or,\Big{(}\frac{1}{\Delta t}[C_{1}]+[K_{1}]\Big{)}\{c_{j}\}+[K_{ 2}]\{d_{j}\}=\frac{1}{\Delta t}[C_{1}]\{c_{j-1}\}+[F_{1}] \tag{2.13}\] The second residual equation can be written as, \[\int_{-L}^{L}\Bigg{[}\frac{\partial\widetilde{N}}{\partial t}- \varepsilon_{2}\frac{\partial^{2}\widetilde{N}}{\partial x^{2}}-f(\widetilde{M },\widetilde{N})+(p+q)\widetilde{N}\Bigg{]}B_{i}(x)dx=0\] After employing integration by parts and then substitution of (2.4) reduces the above equation, \[\int_{-L}^{L}\frac{\partial}{\partial t}\Big{(}\gamma_{0}+\sum_ {j=1}^{n}d_{j}B_{j}\Big{)}B_{i}dx+\int_{-L}^{L}\varepsilon_{2}\frac{\partial}{ \partial x}\Big{(}\gamma_{0}+\sum_{j=1}^{n}d_{j}B_{j}\Big{)}\frac{\partial B_{ i}}{\partial x}dx+\int_{-L}^{L}(p+q)\Big{(}\gamma_{0}+\sum_{j=0}^{n}d_{j}B_{j} \Big{)}B_{i}dx\] \[-\int_{-L}^{L}f\Big{(}\theta_{0}+\sum_{j=0}^{n}c_{j}B_{j},\gamma_{ 0}+\sum_{j=0}^{n}d_{j}B_{j}\Big{)}B_{i}dx=\varepsilon_{2}\Big{[}\frac{ \partial}{\partial x}\Big{(}\gamma_{0}+\sum_{j=0}^{n}d_{j}B_{j}\Big{)}B_{i} \Big{]}_{-L}^{L}\] \[\text{or},\int_{-L}^{L}\frac{\partial\gamma_{0}}{\partial t}B_{i}dx+ \int_{-L}^{L}\sum_{j=0}^{n}\frac{\partial d_{j}}{\partial t}B_{j}B_{i}dx+\int_{- L}^{L}\varepsilon_{2}\frac{\partial\gamma_{0}}{\partial x}\frac{\partial B_{i}}{ \partial x}dx+\sum_{j=0}^{n}d_{j}\int_{-L}^{L}\varepsilon_{2}\frac{\partial B_ {j}}{\partial x}\frac{\partial B_{i}}{\partial x}dx\] \[-\int_{-L}^{L}\Pi\Big{(}\theta_{0},\gamma_{0},\sum_{l=0}^{n}d_{ l}B_{l}\Big{)}B_{i}dx-\sum_{j=0}^{n}c_{j}\int_{-L}^{L}\Phi\Big{(}\gamma_{0},\sum_{l=0} ^{n}d_{l}B_{l}\Big{)}B_{j}B_{i}dx+\sum_{j=0}^{n}d_{j}\int_{-L}^{L}(p+q)B_{j}B_ {i}dx\] \[=-\int_{-L}^{L}(p+q)\gamma_{0}B_{i}dx+\varepsilon_{2}\Bigg{[} \frac{\partial\gamma_{0}}{\partial x}B_{i}\Bigg{]}_{-L}^{L}+\varepsilon_{2} \Bigg{[}\sum_{j=0}^{n}d_{j}\frac{\partial B_{j}}{\partial x}B_{i}\Bigg{]}_{-L} ^{L} \tag{2.14}\] Since the first, and third terms on the left-hand side and the first term on the right-hand side of Equation (2.14) become zero, the equation reduces to, \[\sum_{j=0}^{n}\frac{dd_{j}}{dt}\int_{-L}^{L}B_{j}B_{i}dx+\sum_{j=0 }^{n}d_{j}\Bigg{(}\int_{-L}^{L}\varepsilon_{2}\frac{dB_{j}}{dx}\frac{dB_{i}}{ dx}dx+\int_{-L}^{L}(p+q)B_{j}B_{i}dx-\varepsilon_{2}\Big{[}\frac{dB_{j}}{dx}B_{i} \Big{]}_{-L}^{L}\Bigg{)}\] \[-\sum_{j=0}^{n}c_{j}\int_{-L}^{L}\Phi\Big{(}\gamma_{0},\sum_{l=0} ^{n}d_{l}B_{l}\Big{)}B_{j}B_{i}dx=\int_{-L}^{L}\Pi\Big{(}\theta_{0},\gamma_{0},\sum_{l=0}^{n}d_{l}B_{l}\Big{)}B_{i}dx-\int_{-L}^{L}(p+q)\gamma_{0}B_{i}dx \tag{2.15}\] The derivative and non-derivative terms of Equation (2.15) can be summarized via standard matrix notation as follows: \[[C_{2}]\Big{\{}\frac{dd_{j}}{dt}\Big{\}}+[K_{3}]\{c_{j}\}+[K_{4}]\{d_{j}\}=[ F_{2}] \tag{2.16}\] where \[C_{2_{ij}} =\int_{-L}^{L}B_{j}B_{i}dx\] \[K_{3_{ij}} =-\int_{-L}^{L}\Phi\Big{(}\gamma_{0},\sum_{l=0}^{n}d_{l}B_{l}\Big{)} B_{j}B_{i}dx\] \[K_{4_{ij}} =\int_{-L}^{L}\varepsilon_{2}\frac{dB_{j}}{dx}\frac{dB_{i}}{dx}dx +\int_{-L}^{L}(p+q)B_{j}B_{i}dx-\varepsilon_{2}\Big{[}\frac{dB_{j}}{dx}B_{i} \Big{]}_{-L}^{L}\] \[F_{2_{i}} =\int_{-L}^{L}\Pi\Big{(}\theta_{0},\gamma_{0},\sum_{l=0}^{n}d_{l} B_{l}\Big{)}B_{i}dx-\int_{-L}^{L}(p+q)\gamma_{0}B_{i}dx\] Here, \(K_{3}\) and \(K_{4}\) are \(n\times n\) matrices, \(C_{2}\) is \(n\times n\) matrix, and \(F_{2}\) is \(n\times 
1\) matrix. They are called stiffness matrices, forced matrices, and load vectors respectively. The application of the backward difference method on the first term of Equation (2.16) results in the following equation, \[\Big{(}\frac{1}{\Delta t}[C_{2}]+[K_{4}]\Big{)}\{d_{j}\}+[K_{3}]\{c_{j}\}=\frac {1}{\Delta t}[C_{2}]\{d_{j-1}\}+[F_{2}] \tag{2.17}\] By assembling Equations (2.13) and (2.17), we get the following recurrent system, \[\left(\frac{1}{\Delta t}[C_{1}]+[K_{1}]\right)\{c_{j}\}+[K_{2}]\{d_{j}\} =\frac{1}{\Delta t}[C_{1}]\{c_{j-1}\}+[F_{1}] \tag{2.18}\] \[[K_{3}]\{c_{j}\}+\Big{(}\frac{1}{\Delta t}[C_{2}]+[K_{4}]\Big{)} \{d_{j}\} =\frac{1}{\Delta t}[C_{2}]\{d_{j-1}\}+[F_{2}]\] To calculate the initial values of \(c_{j}\) and \(d_{j}\), the initial conditions are set in Galerkin sense as follows, \[\int_{-L}^{L}\widetilde{M}(x,0)B_{i}dx=\int_{-L}^{L}M_{0}(x)B_{i}dx\] \[\text{or},\int_{-L}^{L}\Big{(}\theta_{0}+\sum_{j=1}^{n}c_{j}(0)B_ {j}(x)\Big{)}B_{i}dx=\int_{-L}^{L}M_{0}(x)B_{i}dx\] \[\text{equivalently},\sum_{j=0}^{n}c_{j}(0)\int_{-L}^{L}B_{j}B_{i} dx=\int_{-L}^{L}M_{0}(x)B_{i}dx-\int_{-L}^{L}\theta_{0}B_{i}dx \tag{2.19}\] and \[\int_{-L}^{L}\widetilde{N}(x,0)B_{i}dx=\int_{-L}^{L}N_{0}(x)B_{i}dx\] \[\text{equivalently},\int_{-L}^{L}\gamma_{0}B_{i}dx+\int_{-L}^{L} \sum_{j=0}^{n}d_{j}(0)B_{j}B_{i}dx=\int_{-L}^{L}N_{0}(x)B_{i}dx\] \[\text{or},\sum_{j=0}^{n}d_{j}(0)\int_{-L}^{L}B_{j}B_{i}dx=\int_{ -L}^{L}N_{0}(x)B_{i}dx-\int_{-L}^{L}\gamma_{0}B_{i}dx \tag{2.20}\] This process will help us to evaluate the numerical solutions of the nonlinear reaction-diffusion systems. ## 3 Numerical Examples and Applications In this section, the previously described approach has been implemented into practice by solving a few examples of practical issues. Our methodology has been shown to be valid after being applied to the first test problem. The aforementioned procedure is then used, with a variety of parameters, to assess the subsequent test problems. The \(L_{2}\) norm and \(L_{\infty}\) norm has been determined by the following expression, \[L_{2}\;Norm=||M_{\Delta t}-M_{\frac{\Delta t}{2}}||_{2}\] \[L_{\infty}\;Norm=||M_{\Delta t}-M_{\frac{\Delta t}{2}}||_{\infty}\] Where \(\Delta t\) is the time increment and \(M_{\Delta t}\) is the approximate solution obtained using time increment \(\Delta t\). _Test Problem 1:_ Let us consider the system of the parabolic equations from the study of Manaa _et. al.[45]_ \[\left.\begin{array}{l}\frac{\partial M}{\partial t}=\varepsilon_{1}\frac{ \partial^{2}M}{\partial x^{2}}+f(M,N)-(p+q)M\\ \frac{\partial N}{\partial t}=\varepsilon_{2}\frac{\partial^{2}N}{\partial x ^{2}}-f(M,N)+p(1-N)\end{array}\right\} \tag{3.1}\] where \(f(M,N)=M^{2}N\) and \(x\in[a,b],t\geq 0\). The boundary conditions and the initial conditions are considered as: \[\left.\begin{array}{l}M(a,t)=M(b,t)=0\\ N(a,t)=N(b,t)=1\end{array}\right\} \tag{3.2}\] and \[\left.\begin{array}{l}M(x,0)=0.01sin(\pi(x-b)/(b-a))\\ N(x,0)=1-0.12sin(\pi(x-b)/(b-a))\end{array}\right\} \tag{3.3}\] The domain of the model is \([a,b]\). The values of the parameters are taken as \(a=0,b=2,\varepsilon_{1}=\varepsilon_{2}=0.01\), \(p=0.09\), and \(q=-0.004\). Here, to obtain the numerical approximation, the effect of boundary conditions is insignificant because all terms of \(B_{j}(x)\) are zero at the boundary points. We have employed the modified Galerkin method to the system of nonlinear partial differential equations (3.1) and therefore obtained the system of ordinary differential equations with respect to \(t\). 
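Before turning to the numerical results, the following sketch shows how the coupled recurrence (2.18) can be marched in time, with the nonlinear blocks refreshed by Picard iteration inside every step. The `assemble` callable, the tolerance, the iteration cap, and the toy constant matrices are placeholders; in the actual scheme the matrices are rebuilt from the current coefficients as described above.

```python
import numpy as np

def picard_step(c_prev, d_prev, assemble, dt, tol=1e-10, max_iter=50):
    """One backward-difference step of system (2.18), solved by Picard iteration.

    `assemble(c, d)` must return (C1, K1, K2, C2, K3, K4, F1, F2) evaluated at the
    current iterate; only the nonlinear blocks actually change between iterations."""
    c, d = c_prev.copy(), d_prev.copy()
    for _ in range(max_iter):
        C1, K1, K2, C2, K3, K4, F1, F2 = assemble(c, d)
        A = np.block([[C1 / dt + K1, K2],
                      [K3, C2 / dt + K4]])
        b = np.concatenate([C1 @ c_prev / dt + F1,
                            C2 @ d_prev / dt + F2])
        c_new, d_new = np.split(np.linalg.solve(A, b), 2)
        converged = max(np.abs(c_new - c).max(), np.abs(d_new - d).max()) < tol
        c, d = c_new, d_new
        if converged:
            break
    return c, d

# Toy usage: constant (linear) matrices, just to show the call pattern.
nb = 7
I = np.eye(nb)
toy = lambda c, d: (I, 0.01 * I, 0.0 * I, I, 0.0 * I, 0.01 * I, np.zeros(nb), np.zeros(nb))
c, d = np.zeros(nb), np.zeros(nb)
for _ in range(10):                 # march 10 steps of size dt = 0.1
    c, d = picard_step(c, d, toy, dt=0.1)
print(c.shape, d.shape)
```

The initial coefficient vectors \(c_{j}(0)\) and \(d_{j}(0)\) would come from solving the Galerkin projections (2.19) and (2.20) of the initial conditions.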
In this stage, we have used the \(\alpha\) family of approximation in order to convert the system into recurrent relations and then we applied _Picard iterative procedure_. To find the initial guess of the given system, we have applied the weighted residual procedure on the initial conditions (3.2). Tables (3.1) and (3.2) provide the numerical results of concentrations \(M(x,t)\) and \(N(x,t)\) for various values of \(x\). For computation, we have taken \(\Delta t=0.1\). The numerical approximations are derived at time levels \(t=1\) and \(t=2\). Throughout these tables, we have compared the results which we have obtained with the numerical approximations that have already been published in other well-known literature. The table demonstrates that our outcomes are reasonably comparable to those that have been published. It validates the accuracy of our approach to approximating the reaction-diffusion system numerically. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{\(\mathbf{x}\)} & \multicolumn{2}{c|}{\(t=1\)} & \multicolumn{2}{c}{\(t=2\)} \\ \cline{2-5} & **Present Method** & **Reference [45]** & **Present Method** & **Reference [45]** \\ \hline [MISSING_PAGE_POST] 00 & 0.0003 & 0.00 & -0.0000 \\ \hline \end{tabular} \end{table} Table 3.1: Numerical results of concentrations \(M(x,t)\) at different time levels with \(\Delta t=0.1\) and first 7 modified Bernstein polynomials. The approximate results \(M(x,t)\) and \(N(x,t)\) of Equation (3.1) are presented in the following figure (3.1). In Figure (3.1) we have employed a three-dimensional graphical depiction of approximate solutions of \(M(x,t)\) and \(N(x,t)\) at different time levels for better understanding. The graphical representations agree with the results that we have obtained in the tables. Eventually, it makes sense clearly that the method is more applicable to solving such nonlinear parabolic PDE systems. In Figure (3.2), we have presented the error graph of \(M(x,t)\) and \(N(x,t)\) at time \(t=10\), where the absolute errors are computed between two different time increments, say \(\Delta t=0.2\), \(\Delta t=0.4\) and \(\Delta t=0.1\), \(\Delta t=0.2\). \begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{\(x\)} & \multicolumn{2}{c|}{\(t=1\)} & \multicolumn{2}{c}{\(t=2\)} \\ \cline{2-5} & **Present Method** & **Reference [45]** & **Present Method** & **Reference [45]** \\ \hline [MISSING_PAGE_POST] \end{tabular} \end{table} Table 3.2: Numerical results of concentrations \(N(x,t)\) at different time levels with \(\Delta t=0.1\) and first 7 modified Bernstein polynomials. Figure 3.1: Approximate solution of \(M(x,t)\) and \(N(x,t)\) of (3.1) by using the present method for \((x,t)\in[0,2]\times[0,2]\) The \(L_{2}\) norm and \(L_{\infty}\) norm, are presented in Table (3.3), which shows that the comparative errors are reduced significantly according to the reduction of the size of the time increments. _Test Problem 2:_ The Gray-Scott Model is one of the most important models whose wave formations are similar to many waves formed in real life such as butterfly wings, gesticulation, damping, turning patterns, embryos, multiple spots, and so on [29, 46]. Let us consider the following model, \[\left.\begin{array}{l}\dfrac{\partial M}{\partial t}=\varepsilon_{1}\dfrac{ \partial^{2}M}{\partial x^{2}}-f(M,N)+p(1-M)\\ \dfrac{\partial N}{\partial t}=\varepsilon_{2}\dfrac{\partial^{2}N}{ \partial x^{2}}+f(M,N)-(p+q)N\end{array}\right\} \tag{3.4}\] where \(f(M,N)=MN^{2}\). 
The boundary conditions and the initial conditions are considered as follows: \[\left.\begin{array}{l}M(-50,t)=M(50,t)=1\\ N(-50,t)=N(50,t)=0\end{array}\right\} \tag{3.5}\] and \[\left.\begin{array}{l}M(x,0)=1-0.5sin^{100}(\pi(x-50)/100)\\ N(x,0)=0.25sin^{100}(\pi(x-50)/100)\end{array}\right\} \tag{3.6}\] The domain of the model is \([-50,50]\). The values of the parameters are taken as \[\varepsilon_{1}=1,\,\varepsilon_{2}=0.01,\,p=0.01,\,q=0.12\] Here, for computational purposes, we have used the first 7 modified Bernstein polynomials. After applying the modified Galerkin method, we have used the backward difference method to transform the resulting system of ordinary differential equations into recurrence relations, which are then solved by the _Picard_ iterative procedure. Numerical data of \(M(x,t)\) and \(N(x,t)\) of (3.4) are also presented in tabulated form in the following table at different time steps. The table shows that the numerical values of the concentrations \(M\) and \(N\) change very slowly with varying values of \(x\). This happens at every time step. The results obtained by applying our proposed scheme are presented in Figure (3.3). Figure (3.3) is deployed to provide pictorial representations of the numerical concentrations \(M\) and \(N\) at different time levels. The results that are obtained in the table are shown graphically. The graphs are obtained for different time levels. The graphical presentation shows that the changes in the concentrations are sufficiently small for different time levels. The \(L_{2}\) and \(L_{\infty}\) norms are presented in Table (3.5), which shows that the comparative errors are reduced significantly according to the reduction of the size of the time increments. However, the order of convergence increases noticeably along with the reduction of the time step length.
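The refinement errors reported in Tables 3.3 and 3.5 compare solutions obtained with time step \(\Delta t\) against solutions with \(\Delta t/2\). A minimal sketch of that comparison is given below; it assumes the two solutions are sampled on the same spatial grid at the same final time, and it uses the discrete vector norms (the paper does not state whether a grid-weighted integral norm is used instead). The toy arrays merely stand in for computed solutions.

```python
import numpy as np

def refinement_norms(M_dt, M_dt_half):
    """L2 and L-infinity norms of the difference between solutions computed with
    time steps dt and dt/2 (arrays of identical shape on the same grid)."""
    diff = np.asarray(M_dt) - np.asarray(M_dt_half)
    return np.sqrt(np.sum(diff ** 2)), np.max(np.abs(diff))

# Toy data standing in for M(x, t=10) computed with dt = 0.2 and dt = 0.1.
x = np.linspace(-50.0, 50.0, 101)
M_coarse = 1.0 - 0.5 * np.sin(np.pi * (x - 50.0) / 100.0) ** 100
M_fine = M_coarse + 5e-4 * np.cos(np.pi * x / 100.0)
l2, linf = refinement_norms(M_coarse, M_fine)
print(f"L2 = {l2:.8f}, Linf = {linf:.8f}")
```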
\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{\(\mathbf{x}\)} & \multicolumn{2}{c|}{\(M(x,t)\)} & \multicolumn{2}{c}{\(N(x,t)\)} \\ \cline{2-7} & \(t=1\) & \(t=10\) & \(t=20\) & \(t=1\) & \(t=10\) & \(t=20\) \\ \hline -50.0 & 1.0000 & 1.0000 & 1.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline -40.0 & 0.9725 & 0.9746 & 0.9767 & 0.0140 & 0.0164 & 0.0209 \\ \hline -30.0 & 1.0155 & 1.0201 & 1.0192 & -0.0071 & -0.0072 & -0.0094 \\ \hline -20.0 & 1.0288 & 1.0056 & 0.9866 & -0.0144 & -0.0073 & -0.0061 \\ \hline -10.0 & 0.8770 & 0.8515 & 0.8284 & 0.0623 & 0.0850 & 0.1148 \\ \hline 0.0 & 0.7753 & 0.7563 & 0.7355 & 0.1139 & 0.1447 & 0.1922 \\ \hline 10.0 & 0.8770 & 0.8515 & 0.8284 & 0.0623 & 0.0850 & 0.1148 \\ \hline 20.0 & 1.0288 & 1.0056 & 0.9866 & -0.0144 & -0.0073 & -0.0061 \\ \hline 30.0 & 1.0155 & 1.0201 & 1.0192 & -0.0071 & -0.0072 & -0.0094 \\ \hline 40.0 & 0.9725 & 0.9746 & 0.9767 & 0.0140 & 0.0164 & 0.0209 \\ \hline 50.0 & 1.0000 & 1.0000 & 1.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline \end{tabular} \end{table} Table 3.4: Numerical results of concentrations \(M(x,t)\) and \(N(x,t)\) ate different time levels with \(\Delta t=0.1\) and first 7 modified Bernstein polynomials. Figure 3.3: Absolute errors of \(M(x,t)\) and \(N(x,t)\) of (3.4) by using the present method for \((x,t)\in[-50,50]\times[0,20]\) In Figure (3.4), we have presented the error graph of \(M(x,t)\) and \(N(x,t)\) at time \(t=10\), where the absolute errors are computed between two different time increments say \(\Delta t=0.2\), \(\Delta t=0.4\) and \(\Delta t=0.1\), \(\Delta t=0.2\). ## Conclusion This research study has provided numerical approximations of nonlinear reaction-diffusion systems with specified boundary and initial conditions through the employment of the modified Galerkin method. To generate the trial solution, modified Bernstein Polynomials have been used. The simplification of the weighted residual leads to a system of ordinary differential equations which is then transformed into the recurrent relation by applying the backward difference formula. At this stage, we have used Picard's iterative procedure to approximate the trial solution. After successful derivation, we applied our proposed method to several models in order to test their applicability and effectiveness. We have solved and displayed the results both numerically and graphically. From those figures and numerical results, it is indisputable that our proposed method is an unconditionally stable, efficient, highly modular, and easily expandable method that can be applied to any type of system of nonlinear parabolic partial differential equations regardless of the type of the boundary conditions, type of non-linearity of the functions, coefficients are constants or function of independent variables. ## Acknowledgement The authors acknowledge that the research was supported and funded by Dhaka University research grant under UGC, Bangladesh. 
Figure 3.4: Absolute error of \(M(x,t)\) and \(N(x,t)\) from equation (3.4) for different time increments at time \(t=10\). \begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{\(\Delta t\)} & \multicolumn{2}{c|}{\(M(x,t)\)} & \multicolumn{2}{c}{\(N(x,t)\)} \\ \cline{2-5} & \(L_{2}\) **norm** & \(L_{\infty}\) **norm** & \(L_{2}\) **norm** & \(L_{\infty}\) **norm** \\ \hline 0.40 & - & - & - & - \\ \hline 0.20 & 0.00110596 & 0.00080091 & 0.00105161 & 0.00051358 \\ \hline 0.10 & 0.00055524 & 0.00040251 & 0.00052415 & 0.00025600 \\ \hline \end{tabular} \end{table} Table 3.5: The \(L_{2}\) and \(L_{\infty}\) norms at \(t=10\) for \(M(x,t)\) and \(N(x,t)\) of equation (3.4).
2303.16472
Coarse-graining collective skyrmion dynamics in confined geometries
Magnetic skyrmions are magnetic quasi-particles with enhanced stability and different manipulation mechanisms using external fields and currents making them promising candidates for future applications for instance in neuromorphic computing. Recently, several measurements and simulations have shown that thermally activated skyrmions in confined geometries, as they are necessary for device applications, arrange themselves predominantly based on commensurability effects. In this simulational study, based on the Thiele model, we investigate the enhanced dynamics and degenerate non-equilibrium steady state of a system in which the intrinsic skyrmion-skyrmion and skyrmion-boundary interaction compete with thermal fluctuations as well as current-induced spin-orbit torques. The investigated system is a triangular-shaped confinement geometry hosting four skyrmions, where we inject spin-polarized currents between two corners of the structure. We coarse-grain the skyrmion states in the system to analyze the intricacies of skyrmion arrangements of the skyrmion ensemble. In the context of neuromorphic computing, such methods address the key challenge of optimizing read-out positions in confined geometries and form the basis to understand collective skyrmion dynamics in systems with competing interactions on different scales.
Thomas Brian Winkler, Jan Rothörl, Maarten A. Brems, Hans Fangohr, Mathias Kläui
2023-03-29T06:04:52Z
http://arxiv.org/abs/2303.16472v1
# Coarse-graining collective skyrmion dynamics in confined geometries ###### Abstract Magnetic skyrmions are magnetic quasi-particles with enhanced stability and different manipulation mechanisms using external fields and currents making them promising candidates for future applications for instance in neuromorphic computing. Recently, several measurements and simulations have shown that thermally activated skyrmions in confined geometries, as they are necessary for device applications, arrange themselves predominantly based on commensurability effects. In this simulational study, based on the Thiele model, we investigate the enhanced dynamics and degenerate non-equilibrium steady state of a system in which the intrinsic skyrmion-skyrmion and skyrmion-boundary interaction compete with thermal fluctuations as well as current-induced spin-orbit torques. The investigated system is a triangular-shaped confinement geometry hosting four skyrmions, where we inject spin-polarized currents between two corners of the structure. We coarse-grain the skyrmion states in the system to analyze the intricacies of skyrmion arrangements of the skyrmion ensemble. In the context of neuromorphic computing, such methods address the key challenge of optimizing read-out positions in confined geometries and form the basis to understand collective skyrmion dynamics in systems with competing interactions on different scales. ## Introduction Magnetic skyrmions [1] are of major interest for next-generation spintronic applications due to their topological stabilization and a multitude of mechanisms for controlled nucleation, movement and annihilation. In previous magnetic skyrmion research, the focus has been mainly on using skyrmions for data storage devices, such as the skyrmion racetrack [2]. Recently additional schemes for using skyrmions for non-conventional computing, such as Brownian-based token computing [3], probabilistic computing [4] or Reservoir computing [5], have been developed. The latter scheme exploits the non-linear nature of physical systems to perform calculations. The input data for the reservoir is transformed into an excitation of the physical system which introduces a complex non-linear response altering the system's high-dimensional phase space state. In this high-dimensional space, classification of the input data is usually significantly more straightforward, and the input separability is often conserved when extracting measurable quantities from the system. Simulation studies have already demonstrated that a skyrmion reservoir exhibits enough memory capacity, computing capacity as well as non-linearity to classify signals [6]. The highest score for the TI-46 benchmarking set for audio signals has been reached with a skyrmion reservoir [7]. The first skyrmion reservoir computer has been experimentally realized [8]. In this work it has been demonstrated that a single skyrmion in a triangular confinement suffices to realize 2- and 3-input Boolean operations including the non-linearly-separable XOR [8]. A triangular confinement was chosen due to its simplicity and ability to contact the edges with electric potentials. Instead of a skyrmion ensemble only a single skyrmion has been chosen to be hosted in the confinement, since skyrmions also exhibit non-linear behavior in various aspects. For example, non-linear interaction potentials between skyrmions and boundaries (as well as between different skyrmions) have been found [9], as well as other non-linear responses [10, 11, 12, 13]. 
First studies have shown the possibility of performing different kinds of Boolean logic operations with this device [8]. Building on this, one can insert multiple skyrmions into the confinement to potentially increase the computing capacity of such a device. Recent experimental and numerical studies of the collective behavior of skyrmions in confined geometries have shown that the diffusion of skyrmions heavily depends on the ability of the skyrmions to arrange themselves in patterns that are commensurate with the sample geometry [14]. For triangular confinements in particular, arrangements of 3, 6 and 10 skyrmions lower the average diffusion of the skyrmions, since those ensembles can arrange in a lattice that fills most of the space in the confinement [14, 15]. However, independent of the commensurability effect, skyrmions arrange in certain ways in a confinement, due to the interplay of repulsive skyrmion-skyrmion and skyrmion-boundary interactions [16]. All the above-mentioned studies have been performed in the absence of external forces except for thermal fluctuations. We take the next step beyond these investigations by using computer simulation studies, in which we investigate a thermally activated 4-skyrmion system in a triangular confinement. In this system, skyrmion interactions and thermal effects compete with current-induced dynamics. Currents are injected by applying an electric potential at contacts at the edges of the confinement. Current-induced spin-orbit torques (SOT) are employed as they are a promising way to encode input signals for reservoir computing schemes. We analyze the system's behavior by considering the different prominent states. We employ coarse-graining methods to determine those non-equilibrium steady states (NESS) based on Markovian processes and find via coarse-graining methods that several degenerate non-equilibrium states are possible when external forces are applied. By this we demonstrate that coarse-graining approaches are very useful to identify multiple energetically similar or even degenerate non-equilibrium steady states in driven systems with thermal excitations, as present for instance in Brownian reservoir computing.

Figure 1: Visualizing the data analysis from simulation to coarse-graining. a) Schematic representation of the triangular confinement hosting four skyrmions. The overall direction of the force generated by the spin orbit torque is indicated. The current density distribution in the device was simulated in detail numerically. A similar setup with one skyrmion was used in Ref. [9]. b) For every used simulation snapshot the space was discretized with the k-means++ algorithm. For clarity an arbitrary skyrmion arrangement and only \(k~{}=~{}5\) cluster centers are chosen here. The discretized data is then converted into a Markovian process, see the center box. c) From this Markov process we can build the transition matrix \(\Lambda\) and run the GPCCA algorithm to obtain d) a coarse-grained transition matrix \(\Gamma\).

Simulations are performed in a rigid-particle-based ansatz within the Thiele approach [17, 18]. We employ skyrmion interaction potentials extracted for a Ta/CoFeB/Ta/MgO/Ta thin film multilayer stack in [16]. We consider an equilateral triangle with side length \(a~{}=~{}40~{}\upmu\)m. The total force \(\mathbf{F}(\mathbf{r})\) acting on every skyrmion can be split up into internal forces \(\mathbf{F}_{\text{int}}\) and external forces \(\mathbf{F}_{\text{ext}}\):
\(\mathbf{F}(\mathbf{r})=\mathbf{F}_{\text{int}}(\mathbf{v})+\mathbf{F}_{\text{ext}}(\mathbf{r})\), with \(\mathbf{r}\) the position of the skyrmion and \(\mathbf{v}\) its velocity. Internal forces are the dissipative force \(\mathbf{F}_{D}(\mathbf{v})\) and the gyroscopic force \(\mathbf{F}_{G}(\mathbf{v})\). External forces are the thermal force \(\mathbf{F}_{\text{th}}\), the skyrmion-skyrmion interaction force \(\mathbf{F}_{\text{Sk-Sk}}(\mathbf{r})\), the skyrmion-boundary interaction force \(\mathbf{F}_{\text{Sk-B}}(\mathbf{r})\), and the force that arises due to the spin-orbit torque, \(\mathbf{F}_{\text{SOT}}(\mathbf{r})\). The timestep of the simulations was chosen as \(\tau=0.1\), and we integrated over \(2\cdot 10^{6}\) time steps. We model the spatial dependence of \(\mathbf{F}_{\text{SOT}}(\mathbf{r})\) by setting it to be proportional to the distribution of the current density in the system. The latter is obtained using minimalistic COMSOL Multiphysics simulations described in the supplementary material [19]. The first \(2\cdot 10^{5}\) steps are not considered for the analysis, as they comprise an initialization process. For our analysis, the position of each skyrmion at every time step is stored. Further, to describe the system as a Markovian process, we must discretize our space. Spatial discretization is performed using a k-means++ clustering [20] algorithm with \(k=20\) on an array of all skyrmion positions that occur during the simulation. **Figure 1** shows the data evaluation protocol in detail. Every snapshot of the simulation can be described by an array of four cluster centers, since every configuration is an ensemble of four skyrmions. We choose \(k=20\) as a good trade-off between discretization and accuracy, since the number of theoretically available micro-states is \(n=k^{4}\). We then reduce the 4-dimensional (4D) state to a 1D state, enumerating the discretized skyrmion configurations according to their first appearance. This list of states is considered to be a Markovian process, from which we can build a transition matrix \(\Lambda^{n\times n}\), with \(n\) the number of states appearing in the process. If two skyrmions switch their states, the physical system stays the same, since skyrmions are indistinguishable particles. To remove these permutations from the states, we sort the 4D states in ascending order. For the coarse-graining into \(n_{coarse}\) states, we employ the generalized Perron cluster cluster analysis (GPCCA), which uses Schur decomposition techniques to coarse-grain the system [21]. We choose this algorithm as it can be applied to dynamical systems that are driven out of equilibrium, contrary to a classical eigenvector analysis. GPCCA calculates a coarse-grained transition matrix \(\Gamma^{n_{coarse}\times n_{coarse}}\).

## Results and Discussion

Our simulations reveal three different temperature regimes for the system's dynamics. **Figure 2** shows scatter plots of all positions at which a skyrmion has been found during the simulation at different temperatures. We observe a low temperature regime, where each of the skyrmions stays approximately at its position with nearly no overlap between the probability density functions (PDF) of the skyrmions (Fig. 2a). There is an intermediate temperature regime, where the 3 skyrmions which are closest to the center can interchange positions (nearly identical PDFs), but the skyrmion close to the bottom-right corner of the device does not move significantly (Fig. 2 b-c).
Finally, there is a high temperature regime, where all 4 skyrmions in the confinement have sufficient thermal energy to exchange positions and explore the whole accessible space (Fig. 2d). The current is injected from the bottom right contact to the top contact, so the skyrmions are pushed with the electron flow to the bottom right corner.

Figure 2: Explored space of all skyrmions during the simulated time for the different temperature regimes. a) Low temperature regime without applied currents and thus zero SOT. Skyrmions try to arrange themselves according to the geometrical confinement [14]. The central skyrmion is showing the highest diffusion. b) Low temperature regime with injected currents: the skyrmions stay approximately at their positions. c) Intermediate temperature regime: the 3 central skyrmions can change their positions, while the bottom right skyrmion stays in its corner. d) High temperature regime: all skyrmions can explore the energy landscape of the device. Cluster centers of the k-means algorithm are plotted as black dots in all panels, which are used to discretize the system.

To build the transition matrix \(\Lambda^{n\times n}\), we assign every skyrmion at every simulation snapshot to the closest cluster and sort the arrangement in ascending order. Thereby, we reduce the 8-dimensional vector (x- and y-coordinates of each of the 4 skyrmions) to a 4-dimensional assignment. By sorting we also remove possible permutations, which is desired as those states are not distinguishable from a physical point of view. To reduce the assignment to a Markovian process, we simply number the 4-dimensional assignments in order of their first appearance to obtain a Markov chain. We call \(n\) the number of explored states and build the transition matrix \(\Lambda^{n\times n}\). The GPCCA algorithm can now be applied to \(\Lambda^{n\times n}\) as visualized and described in **Figure 1**. We scan \(n_{coarse}\) in a reasonable range and plot in Figure 3 the quality measures provided by the GPCCA method for the coarse-grained system.

Figure 3: Quality measures of the coarse-graining for different temperatures. A small negative \(min\chi\) indicates well-separable clusters. The crispiness \(\xi\) indicates a long dwell time. A positive \(Y_{min}\) ensures \(\Gamma\) to describe a Markovian process. All measures indicate an optimal number \(n_{coarse}~{}=~{}2\), meaning that there are 2 predominant states present. For 3 states, negative entries in \(\Gamma\) already occur for different temperatures, indicating a failing of the algorithm for \(n_{coarse}~{}=~{}3\).

A high crispiness \(\xi\), indicating a long dwell time in each coarse state on average, as well as a \(min\chi\) close to zero but negative, indicates well-separable clusters. Further, we check the smallest element \(Y_{min}\) in the coarse-grained transition matrix \(\Gamma^{n_{coarse}\times n_{coarse}}\). Non-negative values in \(\Gamma\) are a requirement for interpreting the matrix as a transition matrix for a Markovian process. For details on the algorithm, we refer to Ref. [21].
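The discretization-to-Markov-chain step described above (sorting the four cluster labels, numbering states by first appearance, and counting transitions) is straightforward to reproduce; the sketch below is a minimal illustration with random stand-in labels rather than simulated skyrmion positions, and the coarse-graining itself (the GPCCA/Schur step) is not reimplemented here. Row normalization of the count matrix is assumed.

```python
import numpy as np

def encode_states(assignments):
    """Map each sorted 4-tuple of cluster labels to an integer state index,
    numbering states in order of their first appearance."""
    state_of, chain = {}, []
    for snap in assignments:
        key = tuple(sorted(snap))          # sorting removes skyrmion permutations
        if key not in state_of:
            state_of[key] = len(state_of)
        chain.append(state_of[key])
    return np.array(chain), state_of

def transition_matrix(chain):
    """Row-normalized transition matrix Lambda estimated from the state chain."""
    n = chain.max() + 1
    counts = np.zeros((n, n))
    for a, b in zip(chain[:-1], chain[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Toy chain: random k-means labels (k = 20) for 4 skyrmions over 1000 snapshots.
rng = np.random.default_rng(1)
labels = rng.integers(0, 20, size=(1000, 4))
chain, states = encode_states(labels)
Lam = transition_matrix(chain)
print(len(states), "explored states; first row sums:", Lam.sum(axis=1)[:3])
```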
Figure 4: Back-mapping of the coarse-grained clusters to the unscaled probability of a skyrmion located in the confinement. Red probability clouds refer to the "arrow" state, blue ones to the "rhombus" state. For small temperatures the peaks of both states are close together, while for higher temperatures the probability clouds broaden and the states get more distinct. The approximate rotation center of the three central skyrmions is drawn with a blue ring.

**Figure 3** shows \(min\chi\), \(\xi\) and \(\gamma_{min}\) for various numbers of \(n_{coarse}\). The measures indicate an optimal number of coarse-grained states of \(n_{coarse}=2\): this choice shows the highest crispiness, a small appropriate \(min\chi\), and a small positive \(\gamma_{min}\) value. Even for 3 clusters, \(\gamma_{min}\) is not positive for all temperatures. For 5 and more clusters, no coarse-grained model can be created with the GPCCA method. This means that 2 predominantly occurring states can be distinguished. **Figure 4** depicts the occurrence maps for the skyrmions being in one of the identified metastable states. We observe that introducing spin torques into the system leads to a shifting of the PDF for all skyrmions. For applied spin torques we find two configurations with the GPCCA approach, of which one arrangement looks like an "arrow" (red) and one looks like a "rhombus" (blue). Both states have one skyrmion trapped in the bottom right corner, while the three skyrmions closer to the center rotate around each other, driven by SOT. In the arrow state, one of the central skyrmions is closer to the bottom right one, leading to a slight shift in the probability density of the latter one towards the corner. The splitting of the two states occurs due to the interplay of the SOT and the repulsive skyrmion-skyrmion and skyrmion-boundary potentials. In both configurations, the average skyrmion distance to its closest neighbors is approximately constant, as visible in **Figure 4**. Furthermore, the two states become more distinct with increasing temperature. We want to underline that this kind of analysis can be a key factor in understanding thermal skyrmion dynamics in confined geometries. It is useful to find optimal positions for read-out contacts, such as magnetic tunnel junctions [9]. Further, when the strength of the spin torque is changed, there might be an abrupt change in the variety of possible states of the system. Finally, also the geometry of the confinement can be optimized using this approach.

## Conclusion

To conclude, we have expanded the analysis of commensurability effects in skyrmion ensembles in confined geometries to systems that are driven out of equilibrium. We have shown that commonly used coarse-graining methods from statistical physics can be applied to the dynamics of skyrmion ensembles. In our case - four skyrmions in a triangular confinement driven by SOT - we have identified two states with similar occurrence probability, namely an arrow-like and a rhombus-like state; the arrow-like state occurs roughly 40 % of the time, the rhombus-like state roughly 60 % of the time. The states can transition into each other by rotating the three central skyrmions by some degree. The dynamics of skyrmion ensembles in confined geometries is a key factor to understand the underlying physics and to develop neuromorphic computing devices based on skyrmions, such as Brownian reservoir computing devices.
In particular, the presented methods allow one to optimize the confinement shape and the position of read-out contacts, such as magnetic tunnel junctions. Thereby, these methods may be a key tool for studies to enhance the applicability, realizability, performance and efficiency of non-conventional computing methods. ## Authors Declaration ### Acknowledgment The authors acknowledge funding from the emergentAI center, funded by the Carl Zeiss Stiftung, as well as by the German Research Foundation (DFG SFB TRR 173, SPIN+X, A01 - 268565370 - and B12 - 268565370) and project 403502522 SPP 2137 as well as TopDyn. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 856538 (ERC-SyG 3D MAGIC). M.B. is supported by a doctoral scholarship of the Studienstiftung des deutschen Volkes. We thank Peter Virnau for valuable discussions. ### Authors contribution T.B.W. performed the data analysis, the coarse graining and wrote the initial draft. J.R. performed particle-based simulations of skyrmion ensembles. M.B. performed simulations of current density distributions in the confinement. M.K. and H.F. supervised the study. All authors contributed to the manuscript. ### Data availability The data and codes that support the findings of this study are available from the corresponding authors upon reasonable request.
2303.08894
A Formalization of Operads in Coq
What provides the highest level of assurance for correctness of execution within a programming language? One answer, and our solution in particular, to this problem is to provide a formalization for, if it exists, the denotational semantics of a programming language. Achieving such a formalization provides a gold standard for ensuring a programming language is correct-by-construction. In our effort on the DARPA V-SPELLS program, we worked to provide a foundation for the denotational semantics of a meta-language using a mathematical object known as an operad. This object has compositional properties which are vital to building languages from smaller pieces. In this paper, we discuss our formalization of an operad in the proof assistant Coq. Moreover, our definition within Coq is capable of providing proofs that objects specified within Coq are operads. This work within Coq provides a formal mathematical basis for our meta-language development within V-SPELLS. Our work also provides, to our knowledge, the first known formalization of operads within a proof assistant that has significant automation, as well as a model that can be replicated without knowledge of Homotopy Type Theory.
Zachary Flores, Angelo Taranto, Eric Bond, Yakir Forman
2023-03-15T19:29:40Z
http://arxiv.org/abs/2303.08894v1
# A Formalization of Operads in Coq ###### Abstract What provides the highest level of assurance for correctness of execution within a programming language? One answer, and our solution in particular, to this problem is to provide a formalization for, if it exists, the denotational semantics of a programming language. Achieving such a formalization provides a gold standard for ensuring a programming language is correct-by-construction. In our effort on the DARPA V-SPELLS program, we worked to provide a foundation for the denotational semantics of a meta-language using a mathematical object known as an operad. This object has compositional properties which are vital to building languages from smaller pieces. In this paper, we discuss our formalization of an operad in the proof assistant Coq. Moreover, our definition within Coq is capable of providing proofs that objects specified within Coq are operads. This work within Coq provides a formal mathematical basis for our meta-language development within V-SPELLS. Our work also provides, to our knowledge, the first known formalization of operads within a proof assistant that has significant automation, as well as a model that can be replicated without knowledge of Homotopy Type Theory. Operads, Formal Mathematics, Coq 
methods from static analysis, natural language processing, and dynamic analysis to the legacy source code in order to generate high-level abstractions of these DSLs (Domain Specific Languages) that we call domain-specific semantic models 
(DSSMs) from the DSLs that comprise the source code. These DSSMs will be generated in a language we refer to as the _meta-DSL_, and in order to provide the patches to the legacy code requested in V-SPELLS, these DSSMs will have to be composed in very specific ways. In order to ensure correctness of composition, as is required in V-SPELLS, we are providing verification via an algebraic framework using several ideas from category theory in which the key structure to our modeling is called an _operad_. Operads have begun to play an increasingly important role within applied mathematics (see [6, 3, 1, 2, 4]), and we find they provide an excellent mathematical model for our verification needs on V-SPELLS. To be more precise about our modeling, when a DSSM is written in the meta-DSL, we will use an operad to represent the DSSM in the meta-DSL, and composition of DSSMs within the meta-DSL will be modeled via a "gluing" operation. Mathematically, we are providing denotational semantics for a key portion of the language of the meta-DSL. To ensure the highest level of correctness on composition between DSSMs in the meta-DSL, we aim to provide a formalization of the denotational semantics of the meta-DSL. In particular, we need to provide a formalization for the foundation for the denotational semantics of the meta-DSL: operads. We provide this formalization within the proof assistant Coq, and this is the focus of our paper. In Section 2, we discuss the informal definition of operads; Section 3 discusses the technicalities we faced and our solutions to defining operads within Coq; in Section 4, we discuss our construction of the equivalent of an _operad of sets_ within Coq (namely, an _operad of types_), and discuss our proof in Coq that this is an operad according to our specification in Section 3; and lastly, in Section 5, we compare our formalization to the only other formalization of operads we are aware of [5]. ## 2 Informally Defining Operads While there does not seem to be an agreed-upon definition for a _symmetric colored operad_, we note we are following the definition of a _symmetric colored operad_ in [7]. However, we remark that the definition in [7] does not include what is called the _equivariance axiom_ in [8]; we too omit this axiom, since it is not relevant to what we want to accomplish in our work on V-SPELLS. Regardless of these distinctions, we use _operad_ to mean _symmetric colored operad_ or _colored operad_ in the sequel. As our aim was to fully formalize the definition of an operad within Coq, we require precision, so we provide the full informal definition of an operad below in two parts. The first part consists of the objects that comprise an operad. [Data for an Operad] An **operad**, \(\mathcal{O}\), consists of a collection of types, which we will denote by \(T\), and for each \(n\geq 1\), \(d\in T\), \(\underline{c}:=c_{0},\ldots,c_{n-1}\) a sequence of types in \(T\), a collection of terms \(\mathcal{O}\binom{d}{\underline{c}}\) such that, 1. for each \(c\in T\), we designate an element \(\mathbb{1}_{c}\in\mathcal{O}\binom{c}{c}\) called the \(c\)**-colored unit**; 2. if \(\sigma\) is a permutation on \(n\) letters, and \(\underline{c}\sigma:=c_{\sigma(0)},\ldots,c_{\sigma(n-1)}\), then there is a bijection between \(\mathcal{O}\binom{d}{\underline{c}}\) and \(\mathcal{O}\binom{d}{\underline{c}\sigma}\); 3. 
_for any sequence_ \(\underline{b}\) _of types in_ \(T\)_, if we denote by_ \(\underline{c}\bullet_{i}\underline{b}\) _the sequence given by_ \[\underbrace{c_{0},\ldots,c_{i-1}}_{\emptyset\text{ if }i=0},\underline{b},c_{i+1},\ldots,c_{n-1},\] _then we require the existence of a function:_ \[\circ_{i}:\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{b}}\to\mathcal{O}\binom{d}{\underline{c}\bullet_{i}\underline{b}}.\] _We typically refer to the function_ \(\circ_{i}\) _as_ _multi-composition_._ For a quick example of what the type signature of a multi-composition function looks like, let \(\underline{c}=c_{0},c_{1},c_{2}\), \(\underline{b}=b_{0},b_{1}\), and \(i=1\); then \(\circ_{1}\) has type signature: \[\mathcal{O}\binom{d}{c_{0},c_{1},c_{2}}\times\mathcal{O}\binom{c_{1}}{b_{0},b_{1}}\to\mathcal{O}\binom{d}{c_{0},b_{0},b_{1},c_{2}}.\] Now the data for an operad \(\mathcal{O}\) in Definition 1 is subject to certain axiomatic constraints, and this forms the second half of our definition for an operad. [Axioms for an Operad] Let \(\underline{c}:=c_{0},\ldots,c_{n-1}\), \(\underline{b}:=b_{0},\ldots,b_{m-1}\), \(\underline{a}:=a_{0},\ldots,a_{\ell-1}\) be sequences from a collection of types \(T\). The axioms that the data for an operad \(\mathcal{O}\) must follow are given below. 1. The _horizontal associativity axiom_: Suppose \(n\geq 2\) and \(0\leq i<j\leq n-1\), then for \((\alpha,\beta,\gamma)\in\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{a}}\times\mathcal{O}\binom{c_{j}}{\underline{b}}\), \[(\alpha\circ_{i}\beta)\circ_{\ell-1+j}\gamma=(\alpha\circ_{j}\gamma)\circ_{i}\beta.\] _To give a visual description of this axiom, we are requiring that the two composites_ \[\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{a}}\times\mathcal{O}\binom{c_{j}}{\underline{b}}\xrightarrow{(\circ_{i},\,id)}\mathcal{O}\binom{d}{\underline{c}\bullet_{i}\underline{a}}\times\mathcal{O}\binom{c_{j}}{\underline{b}}\xrightarrow{\circ_{\ell-1+j}}\mathcal{O}\binom{d}{(\underline{c}\bullet_{i}\underline{a})\bullet_{\ell-1+j}\underline{b}}\] _and_ \[\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{j}}{\underline{b}}\times\mathcal{O}\binom{c_{i}}{\underline{a}}\xrightarrow{(\circ_{j},\,id)}\mathcal{O}\binom{d}{\underline{c}\bullet_{j}\underline{b}}\times\mathcal{O}\binom{c_{i}}{\underline{a}}\xrightarrow{\circ_{i}}\mathcal{O}\binom{d}{(\underline{c}\bullet_{j}\underline{b})\bullet_{i}\underline{a}}\] _agree once the factors of the product are reordered accordingly._ 2. The _vertical associativity axiom_: Suppose \(m,n\geq 1\), \(0\leq i\leq n-1\), and \(0\leq j\leq m-1\). 
Then for \((\alpha,\beta,\gamma)\in\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{b}}\times\mathcal{O}\binom{b_{j}}{\underline{a}}\), \[(\alpha\circ_{i}\beta)\circ_{i+j}\gamma=\alpha\circ_{i}(\beta\circ_{j}\gamma).\] _That is, we are requiring that the two composites_ \[\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{b}}\times\mathcal{O}\binom{b_{j}}{\underline{a}}\xrightarrow{(id,\,\circ_{j})}\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{b}\bullet_{j}\underline{a}}\xrightarrow{\circ_{i}}\mathcal{O}\binom{d}{\underline{c}\bullet_{i}(\underline{b}\bullet_{j}\underline{a})}\] _and_ \[\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{b}}\times\mathcal{O}\binom{b_{j}}{\underline{a}}\xrightarrow{(\circ_{i},\,id)}\mathcal{O}\binom{d}{\underline{c}\bullet_{i}\underline{b}}\times\mathcal{O}\binom{b_{j}}{\underline{a}}\xrightarrow{\circ_{i+j}}\mathcal{O}\binom{d}{(\underline{c}\bullet_{i}\underline{b})\bullet_{i+j}\underline{a}}\] _agree._ 3. The **left unity axiom** requires that for \(\alpha\in\mathcal{O}\binom{d}{\underline{c}}\) with \(n\geq 1\), \(\mathbb{1}_{d}\circ_{1}\alpha=\alpha\). 4. The **right unity axiom** requires that for \(n\geq 1\), \(0\leq i\leq n-1\), and \(\alpha\in\mathcal{O}\binom{d}{\underline{c}}\), \(\alpha\circ_{i}\mathbb{1}_{c_{i}}=\alpha\). Before we give an example, some comments are in order about Definition 3. We want to give some sanity checks of the associativity axioms. First notice the following equality occurs in the right-hand corner of the diagram for the horizontal associativity axiom (1 of Definition 3): \[\mathcal{O}\binom{d}{(\underline{c}\bullet_{i}\underline{a})\bullet_{\ell-1+j}\underline{b}}=\mathcal{O}\binom{d}{(\underline{c}\bullet_{j}\underline{b})\bullet_{i}\underline{a}} \tag{1}\] This equality arises from an equality of the following sequences: \[(\underline{c}\bullet_{i}\underline{a})\bullet_{\ell-1+j}\underline{b} = (c_{0},\ldots,c_{i-1},\underline{a},c_{i+1},\ldots,c_{n-1})\bullet_{\ell-1+j}\underline{b} = c_{0},\ldots,c_{i-1},\underline{a},c_{i+1},\ldots,c_{j-1},\underline{b},c_{j+1},\ldots,c_{n-1} = (\underline{c}\bullet_{j}\underline{b})\bullet_{i}\underline{a} \tag{2}\] In particular, in providing a specification in Coq for operads, we need to provide a proof that (2) holds for such sequences in \(T\). A similar equality of sequences is required to define the vertical associativity diagram: \[\underline{c}\bullet_{i}(\underline{b}\bullet_{j}\underline{a}) = c_{0},\ldots,c_{i-1},(\underline{b}\bullet_{j}\underline{a}),c_{i+1},\ldots,c_{n-1} = c_{0},\ldots,c_{i-1},b_{0},\ldots,b_{j-1},\underline{a},b_{j+1},\ldots,b_{m-1},c_{i+1},\ldots,c_{n-1} = (\underline{c}\bullet_{i}\underline{b})\bullet_{i+j}\underline{a}\] While our definition seems extraordinarily abstract, the next example helps clarify the roots of the abstraction found in Definition 1 and Definition 3. Moreover, the next example will serve as the first application of our formal definition of operads, as we will prove in Coq that our realization of this example is an operad according to our specification. 
**Example 5**.: If we let \(T\) be a collection of types for which \(T\) is closed under finite products, we can define an operad \(\mathbf{Sets}_{T}\) by setting \[\mathbf{Sets}_{T}\binom{d}{c_{0},\ldots,c_{n-1}}:=\operatorname{Hom}(c_{0} \times\cdots\times c_{n-1},d),\] where the hom-set on the right is the collection of all functions from the product of sets \(c_{0}\times\cdots\times c_{n-1}\) to the set \(d\). Given \(c\in T\), the identity function on \(c\) operates as the \(c\)-colored unit in \(\mathbf{Sets}_{T}\binom{c}{c}=\operatorname{Hom}(c,c)\). In this setting, we can explicitly define multi-composition \(\circ_{i}\) from Definition 1 which returns, given \(f\in\operatorname{Hom}(c_{0}\times\cdots\times c_{n-1},d)\) and \(g\in\operatorname{Hom}(b_{0}\times\cdots\times b_{m-1},c_{i})\), the function \(f\circ_{i}g\) which acts on the \((n+m-1)\)-tuple \((x_{0},\ldots,x_{i-1},\underline{y},x_{i+1},\ldots,x_{n-1})\) as \[(f\circ_{i}g)(x_{0},\ldots,x_{i-1},\underline{y},x_{i+1},\ldots,x_{n-1})=f(x_ {0},\ldots,x_{i-1},g(\underline{y}),x_{i+1},\ldots,x_{n-1}).\] All other pieces of Definition 1 and 3 not mentioned above can be proved for \(\mathbf{Sets}_{T}\) using everything defined above and basic facts in set theory. ## 3 Formally Modeling Operads in Coq In defining the collection of terms \(\mathcal{O}\binom{d}{c_{0},\ldots,c_{n-1}}\) in Coq, Definition 1 requires that \(d,c_{i}\) come from the collection \(T\). Throughout our specification in this paper, we will replace \(T\) with one of Coq's in-house universes: \(\mathbf{Type}\). In practice, we do need a proper subset of \(\mathbf{Type}\), but for simplicity in our paper, we use \(\mathbf{Type}\). In the event we need a restriction to a subset of \(\mathbf{Type}\), we briefly discuss how to use Tarski universes to do this after the description of our formal model in Coq. ### Encoding an Operad in Coq The first goal to tackle in defining an operad is giving a formal definition of \(\mathcal{O}\binom{d}{\underline{c}}\). **Note 6** (A Definition for \(\mathcal{O}\binom{d}{\underline{c}}\) in Coq).: Informally, part of an operad \(\mathcal{O}\) is a collection of sets indexed by pair \(d:\mathbf{Type}\) and \(\underline{c}:=c_{0},\ldots,c_{n-1}:\,\mathbf{list}\,\mathbf{Type}\). Since this is a _collection of sets_, it would be natural to use a _record_ in Coq to make this definition. To do so, we create a record in Coq, which we denote as \(\mathbf{Operad}\), whose single field is given by a function with type signature: \(\mathbf{Type}\rightarrow\mathbf{list}\,\mathbf{Type}\rightarrow\mathbf{Type}\). An instantiation of \(\mathbf{Operad}\) will yield a function \(\mathcal{O}:\mathbf{list}\,\mathbf{Type}\rightarrow\mathbf{Type}\rightarrow \mathbf{Type}\), so that \(\mathcal{O}\binom{d}{\underline{c}}\) yields our desired collection of terms. We give an example of our definition from Note 6. **Example 7**.: Our goal in Section 4 is to provide a version of \(\mathbf{Sets}_{T}\) in Example 5 in Coq for \(T=\mathbf{Type}\); we will denote this operad by \(\mathbf{Type}\). In Coq, if \(\underline{c}=c_{0},\ldots,c_{n-1}:\mathbf{list}\,\mathbf{Type}\) and \(d:\mathbf{Type}\), then the following is definable in Coq via recursion: \[\mathbf{Type}\binom{d}{\underline{c}}:=c_{0}\rightarrow\cdots\to c_{n-1} \to d.\] In particular, _terms_ of type \(\mathbf{Type}\binom{d}{\underline{c}}\) are \(n\)-ary functions with codomain defined by \(\underline{c}\), and with return type \(d\). 
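For concreteness, a minimal Coq sketch of what Note 6 and Example 7 describe might look as follows; the identifiers `Operad`, `terms`, `arr`, and `TypeOperad` are illustrative choices of ours and not necessarily the names used in the actual development.

```coq
Require Import Coq.Lists.List.
Import ListNotations.

(* Note 6: a record whose single field sends an output type d and a list of
   input types c to the collection O(d; c), which is itself a type. *)
Record Operad := {
  terms : Type -> list Type -> Type
}.

(* Example 7: the recursive "arrow" type, so that arr [c0; ...; c(n-1)] d
   unfolds to c0 -> ... -> c(n-1) -> d. *)
Fixpoint arr (cs : list Type) (d : Type) : Type :=
  match cs with
  | [] => d
  | c :: cs' => c -> arr cs' d
  end.

(* The operad of types from Example 7, packaged as an instance of the record. *)
Definition TypeOperad : Operad := {| terms := fun d cs => arr cs d |}.
```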
In the rest of our model in Coq, we also use a record to denote the data that comprises the operad (as in Definition 1) or the constraints the data is subject to (as in Definition 3). Each piece in Definition 1 and 3 is a proposition that must be satisfied. We first detail how the data from Definition 1 will be encoded as propositions within Coq. [Data for an Operad in Coq] 1. the existence of a \(c\)-colored unit in \(\mathcal{O}\) (1 from Definition 1): for all \(c:\mathsf{Type}\), there is a \(\mathbb{1}_{c}\in\mathcal{O}\binom{c}{c}\); 2. the requirement that there is a bijection between \(\mathcal{O}\binom{d}{\underline{c}}\) and \(\mathcal{O}\binom{d}{\underline{c}\sigma}\) for a permutation \(\sigma\) on \(n\) letters (2 from Definition 1): for all \(d:\mathsf{Type}\), \(\underline{c},\underline{c}^{\prime}:\mathsf{list}\,\mathsf{Type}\) with the length of \(\underline{c}\) at least 1, and \(\underline{c}\) and \(\underline{c}^{\prime}\) permutations of one another, there is a bijection between \(\mathcal{O}\binom{d}{\underline{c}}\) and \(\mathcal{O}\binom{d}{\underline{c}^{\prime}}\); 3. the requirement for the existence of \(\circ_{i}\) (3 from Definition 1): for all \(i,n:\mathbb{N}\), \(d,c_{i}:\mathsf{Type}\), \(\underline{b},\underline{c}:\mathsf{list}\,\mathsf{Type}\), if \(\underline{c}\) has length \(n\), \(1\leq n\), \(i<n\), and the \(i\)th entry of \(\underline{c}\) is \(c_{i}\), there is a function of type \(\mathcal{O}\binom{d}{\underline{c}}\times\mathcal{O}\binom{c_{i}}{\underline{b}}\to\mathcal{O}\binom{d}{\underline{c}\bullet_{i}\underline{b}}\). To make our implementation in Coq clear in Note 8, some remarks are in order about how to make the above precise within Coq: 1. Any time _bijection_ is used in this context, we are referring to a bijection in \(\mathsf{Type}\). That is, if \(t,t^{\prime}:\mathsf{Type}\), then there are functions \(f:t\to t^{\prime}\), \(f^{\prime}:t^{\prime}\to t\) such that \(f\circ f^{\prime}=\mathrm{id}_{t^{\prime}}\), and \(f^{\prime}\circ f=\mathrm{id}_{t}\). This is easily definable in Coq. 2. To create a proposition that two lists, \(\underline{c},\underline{c}^{\prime}\), are permutations of one another in Coq, we can use Coq's built-in type **Permutation**. This says that **Permutation**\(\,\underline{c}\,\underline{c}^{\prime}:\mathsf{Prop}\) (where \(\mathsf{Prop}\) is the type of all propositions in Coq). 3. The operation \(\bullet_{i}\) on lists can be defined in Coq by taking the first \(i\) entries of \(\underline{c}\), concatenating the list \(\underline{b}\), and then concatenating the last \(n-i-1\) entries of \(\underline{c}\) to the previous concatenation. 4. In 3 of Note 8, we need the use of the \(n\)th function within Coq. This function requires a default element as part of its arguments, which means we would need to choose a default element from \(\mathsf{Type}\) to use consistently throughout. The choice we make in Coq is the _unit_ type, which is the type used to represent singleton sets. We encode Definition 3 into Coq in a similar manner using records, denoting this record by **OperadLaws**. However, there is more caution to be had, due in part to the discussion in Remark 4. To demonstrate this caution, we discuss our modeling of the horizontal associativity axiom within Coq in explicit detail below. 
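Before turning to the axioms, here is a minimal Coq sketch of two of the ingredients from Remark 9: a bijection between types (item 1) and the list operation \(\bullet_{i}\) (item 3). The names `Bijection` and `splice` are our own illustrative choices, not necessarily those used in the development.

```coq
Require Import Coq.Lists.List.
Import ListNotations.

(* Remark 9 (1): a bijection between types t and t', i.e. a pair of maps that
   compose to the identity in both directions. *)
Record Bijection (t t' : Type) := {
  fwd : t -> t';
  bwd : t' -> t;
  fwd_bwd : forall y, fwd (bwd y) = y;
  bwd_fwd : forall x, bwd (fwd x) = x
}.

(* Remark 9 (3): the operation c •_i b, built from the first i entries of c,
   followed by b, followed by the entries of c after position i. *)
Definition splice {A : Type} (i : nat) (c b : list A) : list A :=
  firstn i c ++ b ++ skipn (S i) c.

(* A small sanity check on concrete lists of natural numbers. *)
Example splice_example : splice 1 [0; 1; 2] [7; 8] = [0; 7; 8; 2].
Proof. reflexivity. Qed.
```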
[Axioms for an Operad in Coq] The horizontal associativity axiom in an operad (1 in Definition 3) can be defined in Coq by first listing a collection of parameters that we refer to as \(P\): * \(n,m,\ell,i,j:\mathbb{N}\); * \(d,c_{i},c_{j}:\mathsf{Type}\); * \(\underline{a},\underline{b},\underline{c}:\mathsf{list}\,\mathsf{Type}\); * \(\alpha:\mathcal{O}\binom{d}{\underline{c}}\), \(\beta:\mathcal{O}\binom{c_{i}}{\underline{a}}\), \(\gamma:\mathcal{O}\binom{c_{j}}{\underline{b}}\); * \(2\leq n\), \(1\leq m\), and \(1\leq\ell\); * \(i<j\) and \(j<n\); * \(\underline{c}\) has length \(n\), \(\underline{b}\) has length \(m\), and \(\underline{a}\) has length \(\ell\); * the \(i\)th entry of \(\underline{c}\) is \(c_{i}\) and the \(j\)th entry of \(\underline{c}\) is \(c_{j}\). Using what is now in \(P\), we can give a proof that the \(i\)th entry of \(\underline{c}\bullet_{j}\underline{b}\) is \(c_{i}\), and a proof that the \((\ell-1+j)\)th entry of \(\underline{c}\bullet_{i}\underline{a}\) is \(c_{j}\); we add these proofs to \(P\). With this update to \(P\), we can state our formalization of the horizontal associativity axiom in Coq: for all parameters that comprise \(P\), Equation (2) in Remark 4 holds, and there exists a type casting function \(\mathcal{C}_{\mathbf{assoc}}\) such that \[\mathcal{C}_{\mathbf{assoc}}\,P\left((\alpha\circ_{i}\beta)\circ_{\ell-1+j}\gamma\right)=(\alpha\circ_{j}\gamma)\circ_{i}\beta.\] The type-casting function \(\mathcal{C}_{\mathbf{assoc}}\) is necessary, since we have defined in Coq, for each \(d:\mathbf{Type}\) and \(\underline{c}:\mathbf{list}\,\mathbf{Type}\), that \(\mathcal{O}\binom{d}{\underline{c}}\) be a type in \(\mathbf{Type}\), and the casting function provides a proof that the equality of types in Equation (1) holds. However, the existence of \(\mathcal{C}_{\mathbf{assoc}}\) relies entirely on the proof of the equality of lists in Equation (2). Now the equality in Equation (2) requires a significant effort, and the most difficult part of formalizing this axiom is in providing its proof. Providing a formal specification of all other axioms in Definition 3 to be inserted into the fields of our record \(\mathbf{OperadLaws}\) follows the same path as above: 1. carefully curate the correct collection \(P\) of parameters needed for the axiom; 2. add in any proofs needed that can be deduced from everything currently in \(P\); 3. show that any required equality of lists holds (this will be necessary for all axioms in Definition 3); 4. create the necessary casting function. We have one last comment to make on the choices in our model. In [8] the definition for operads says that if \(\underline{c}=\emptyset\), the empty list of symbols coming from the collection \(T\), then the symbol \(\mathcal{O}\binom{d}{\emptyset}\) still has meaning. Notice in Definition 1, we do not allow the existence of such a symbol since we require that the list \(\underline{c}\) is _not_ empty. Our reason for doing so is that our main application relies on giving a version of Example 5 in Coq. Within \(\mathbf{Sets}_{T}\), if \(\underline{c}=\emptyset\), then the product of an empty list of sets is a singleton, \(\{\bullet\}\), so that \(\mathbf{Sets}_{T}\binom{d}{\emptyset}\cong\mathbf{Sets}_{T}\binom{d}{\{\bullet\}}\). We can model this situation in Coq by letting \(\underline{c}\) be the list whose only entry is \(\mathbb{U}:\mathbf{Type}\), the unit type. ### Tarski Universes A solution to using a subset \(T\) of \(\mathbf{Type}\) is to define \(T\) in Coq as a _Tarski universe_. 
This defines \(T:\mathbf{Type}\), as well as an _interpretation_ that allows the terms of \(T\) be regarded as _codes_ for actual types. In this way, the type \(T\) is a set together with an injective mapping to \(\mathbf{Type}\), which is exactly the data of a subset of \(\mathbf{Type}\). Our approach to implementing this definition in Coq involves the following: 1. a type \(\mathcal{B}\) in Coq with nullary constructors, we call the _base types_, and whose terms we refer to as _type sigils_; 2. the constructors that define the type \(T\), which include: * a constructor with signature \(\mathbf{Ty}:\mathcal{B}\to T\) which encodes the base types into \(T\); * other constructors that may model products, such as \(\mathbf{p}:T\to T\to T\), or \(\mathbf{fn}:T\to T\to T\), which can model functions; * an assignment for \(\mathcal{B}\) within \(\mathbf{Type}\), and a recursively-defined interpretation function \(\mathbf{El}:T\to\mathbf{Type}\) that assigns a value within \(\mathbf{Type}\) to each \(t:T\). We give an example of what this would look like explicitly. We define our collection of base types \(\mathcal{B}\) in Coq with the nullary constructors \(N,U\), and \(B\). Within Coq, we create a function \(\mathbf{I}\) that interprets these type sigils: \(N\) is assigned to \(\mathbb{N}\), the type of natural numbers; \(U\) to \(\mathbb{U}\), the unit type; \(B\) to \(\mathbb{B}\), which is bool. If we want to model products and functions within in \(T\), then we can define \(T\) with constructors: 1. \(\mathbf{Ty}:\mathcal{B}\to T\); 2. \(\mathbf{p}:T\to T\to T\); 3. \(\mathbf{fn}:T\to T\to T\). Now \(\mathbf{El}\) will provide the embedding into Coq via the following recursion: 1. \(\mathbf{El}\left(\mathbf{Ty}\,t\right)\Rightarrow\mathbf{I}\,t\) 2. \(\mathbf{El}\left(\mathbf{p}\,t\,t^{\prime}\right)\Rightarrow\mathbf{El}\,t \times\mathbf{El}\,t^{\prime}\) 3. \(\mathbf{El}\left(\mathbf{fn}\,t\,t^{\prime}\right)\Rightarrow\mathbf{El}\,t \rightarrow\mathbf{El}\,t^{\prime}\) For an explicit example of a code in \(T\), we have \(\mathbf{p}\left(\mathbf{Ty}\,N\right)\left(\mathbf{Ty}\,N\right):T\), and via the embedding \(\mathbf{El}\), this is a model for \(\mathbb{N}\times\mathbb{N}\) in Coq. ## 4 A Proof Using Our Model Our goal in this section is to discuss the formal proof that the equivalent of Example 5 in Coq, which we denote by \(\mathbf{Type}\) and define in Example 7, is an operad according to our model. To formally demonstrate that \(\mathbf{Type}\) is an operad, we first need a definition of the function in the only field of the record \(\mathbf{Operad}\) (see Note 6). Next, in the record \(\mathbf{OperadLaws}\), we need to define, for \(c:\mathbf{Type}\), the \(c\)-colored units (1 of Definition 1), provide proofs that \(\mathbf{Type}{d\choose c}\) is invariant (up to injection) under reordering of \(\underline{c}\) (2 of Definition 1), define the multi-composition functions (3 of Definition 1), and show all axioms in Definition 3 hold according. Wrapping these assignments and proofs together provides a term of type \(\mathbf{Operad}\) and \(\mathbf{OperadLaws}\), which gives our desired formal proof. 
In Example 7, we give the definition of the required function in \(\mathbf{Operad}\) for \(\mathbf{Type}\): given \(\underline{c}=c_{0},\ldots,c_{n-1}:\mathbf{list}\,\mathbf{Type}\) and \(d:\mathbf{Type}\), we write: \[\mathbf{Type}{d\choose c}:=c_{0}\rightarrow\cdots\,\,\,c_{n-1}\to d,\] which is the type of \(n\)-ary functions with codomain defined by \(\underline{c}\) and return type \(d\). Our instantiation of \(\mathbf{OperadLaws}\) for \(\mathbf{Type}\) will use this definition throughout. The right-hand side of \(\mathbf{Type}{d\choose c}\) is defined via a recursive function, which we denote as \(\mathbf{arr}\) (short for _arrow_), with type signature \(\mathbf{list}\,\mathbf{Type}\rightarrow\mathbf{Type}\). In particular, we define \(\mathbf{arr}\,\emptyset\,d=d\) (where \(\emptyset\) is the empty list). ### Implementing the Data for Type in Coq Next we discuss in a series of notes, the implementation of Definition 1 for **Type** in Coq, as well as the tools that were developed for use in this implementation. [c-colored units in **Type**] If \(\underline{c}\) has single entry \(c:\textbf{Type}\), then \(\textbf{Type}{c\choose c}=c\to c\), which is the type of all functions with domain and range given by \(c\). Then \(\mathbbm{1}_{c}:=\mathrm{id}_{c}\), the identity function on \(c\). [c-colored units in **Type**] Let \(\textbf{Type}{d\choose c}\cong\textbf{Type}{d\choose 2\sigma^{\prime}}\). Our motivation is to provide the equivalent of Example 5 within Coq, and the analogous isomorphism for the operad \(\textbf{Sets}_{T}\) is, \[\mathrm{Hom}(c_{0}\times\cdots\times c_{n-1},d)\cong\mathrm{Hom}(c_{\sigma(0) }\times\cdots\times c_{\sigma(n-1)},d),\] given \(\underline{c}=c_{0},\ldots,c_{n-1}\) and a permutation \(\sigma\) on \(n\) letters. The isomorphism in the context of Coq asks us to construct a bijection between the two sets above. Following Definition 8 and comments in Remark 9, we can translate this into Coq for **Type** as: for all \(d:\textbf{Type}\), \(\underline{c},\underline{c}^{\prime}:\textbf{list}\,\textbf{Type}\) with the length \(\underline{c}\) at least \(1\), and \(\textbf{Permutation}\,\underline{c}\,\underline{c}^{\prime}\), there is a bijection between \(\textbf{Type}{d\choose c},\textbf{Type}{d\choose c^{\prime}}:\textbf{Type}\). We can prove this in \(\mathbb{C}_{\mathrm{O}}\) using induction on the length of \(\underline{c}\), along with some preceding lemmas about the behavior of function composition and isomorphism in this setting. [c-colored for **Type**] Lastly, we need to write (3) of Definition 8 for **Type** in Coq. Writing \(\textbf{Type}{d\choose c}\) in the curried form, as opposed to using a verbatim translation of \(\textbf{Sets}_{T}{d\choose c}\) from Example 5, provides the needed flexibility, via partial application, to implement the multi-composition function \(\circ_{i}\) for **Type** in Coq with respect to (3) of Definition 1. The most important piece of our definition in Coq is that we define a recursive function **compose** with type signature \[\textbf{arr}\,\underline{c}^{\prime}(t\to t^{\prime})\to\textbf{arr}\, \underline{b}\,t\to\textbf{arr}\,\underline{c}^{\prime}(\textbf{arr}\, \underline{b}\,t^{\prime});\] where \(t,t^{\prime}:\textbf{Type}\), and \(\underline{c}^{\prime},\underline{b}:\textbf{list}\,\textbf{Type}\). 
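As an illustration, one possible way to write such a **compose** function in Coq is sketched below, using an auxiliary `postcomp` for the base case. This is our own sketch of the idea (it assumes the `arr` function from the earlier sketch is in scope) and not necessarily the exact definition used in the development.

```coq
(* postcomp pushes a map t -> t' underneath an n-ary function of type arr bs t. *)
Fixpoint postcomp (bs : list Type) :
    forall (t t' : Type), (t -> t') -> arr bs t -> arr bs t' :=
  match bs as l return forall (t t' : Type), (t -> t') -> arr l t -> arr l t' with
  | nil => fun _ _ h g => h g
  | b :: bs' => fun t t' h g => fun x => postcomp bs' t t' h (g x)
  end.

(* compose plugs an m-ary function g (filling the "hole" of type t) into an
   n-ary function f, realizing the type signature from Note 15. *)
Fixpoint compose (cs : list Type) :
    forall (bs : list Type) (t t' : Type),
      arr cs (t -> t') -> arr bs t -> arr cs (arr bs t') :=
  match cs as l return
      forall (bs : list Type) (t t' : Type),
        arr l (t -> t') -> arr bs t -> arr l (arr bs t') with
  | nil => fun bs t t' f g => postcomp bs t t' f g
  | c :: cs' => fun bs t t' f g => fun x => compose cs' bs t t' (f x) g
  end.
```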
If \(\underline{c}=c_{0},\ldots,c_{i-1},c_{i},c_{i+1},\ldots,c_{n-1}\), we let \(\underline{c}^{\prime}=c_{0},\ldots,c_{i-1}\), \(t=c_{i}\), and \(t^{\prime}=c_{i+1}\to\cdots\to c_{n-1}\to d\), and this gives us, provided the correct type inference is written in, the required multi-composition operator \(\circ_{i}\) for **Type**. ### Implementing the Axioms for Type in Coq Definition 1 provides the base to build the axioms of an operad, which are given in Definition 3, and we discuss our implementation in Coq of this here. There are several axioms listed in Definition 3, and as in Note 10, we keep our discussion focused on the horizontal associativity axiom (1 of Definition 3), as the proof that **Type** satisfies all other axioms in Definition 3 follows from a similar procedure in Coq. Our first hurdle comes from noticing that our definition for \(\circ_{i}\) in Note 15 is what we want mathematically, but that Coq does not automatically recognize the equality of types: \[\mathbf{arr}\,\underline{c}^{\prime}\,(\mathbf{arr}\,\underline{b}\,t^{\prime})=\mathbf{arr}\,(\underline{c}\bullet_{i}\underline{b})\,d,\] where \(\underline{c}=c_{0},\ldots,c_{i-1},c_{i},c_{i+1},\ldots,c_{n-1}\), \(\underline{c}^{\prime}=c_{0},\ldots,c_{i-1}\), \(t=c_{i}\), \(t^{\prime}=c_{i+1}\to\cdots\to c_{n-1}\to d\), so that (see 3 of Definition 1) \(\underline{c}\bullet_{i}\underline{b}=c_{0},\ldots,c_{i-1},\underline{b},c_{i+1},\ldots,c_{n-1}\). Since we require for \(f:\mathbf{Type}\binom{d}{\underline{c}},g:\mathbf{Type}\binom{c_{i}}{\underline{b}}\) that \(f\circ_{i}g:\mathbf{Type}\binom{d}{\underline{c}\bullet_{i}\underline{b}}\), this presents an issue whose solution is, as in Note 10, a type casting function. The remainder of our formalization of \(\mathbf{Type}\) within Coq is a tour de force of type casting, and we discuss the tools we use in this proof below. First, we give the formal definition we use for a type cast within Coq. Given \(A,B:\mathbf{Type}\), and an equation \(A=B\), a **type cast**\(\ \mathcal{C}_{A=B}\) is a function such that for \(a:A\), \(\mathcal{C}_{A=B}\,a:B\). In order to manipulate the type casts that occur throughout our proof that \(\mathbf{Type}\) satisfies the axioms in Definition 3, we prove a handful of general facts about type casts in Coq, which we discuss below. [Type Casts for Definition 3 in Coq] 1. The composition of two type casts is a type cast: given equations of types \(A=B\) and \(B=C\), we have \(\mathcal{C}_{B=C}\circ\mathcal{C}_{A=B}=\mathcal{C}_{A=C}\). 2. A type cast using an equation of types \(A=A\) (i.e., a type cast between two types Coq recognizes as identical) is equal to the identity: \(\mathcal{C}_{A=A}\,a=a\) for \(a:A\). 3. Two type casts between the same two types (i.e., both using equations of type \(A=B\)) are equal: for all \(a:A\), \(\mathcal{C}_{A=B}\,a=\mathcal{C}^{\prime}_{A=B}\,a\). 4. Type casting a function and then applying it to an argument is the same as applying the original function to an argument that had been type cast: if \(f:B\to C\) and \(a:A\), then \(\left(\mathcal{C}_{B\to C=A\to C}\,f\right)a=f\left(\mathcal{C}_{A=B}\,a\right)\). These facts smooth the process of showing \(\mathbf{Type}\) satisfies the operad axioms of Definition 3, as this involves manipulations of several type casts. For example, 2 and 3 of Note 17 ensure that it is not necessary to keep track of how these manipulations impact the _equations_ on which the type casts rely, since we need only that the _types_ involved match in order to show equality. 
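To make Definition 16 concrete, a minimal sketch of a type cast in Coq, together with sketches of some of the facts from Note 17, is given below. The lemma names are our own, and the general forms of facts 2 and 3 (for arbitrary proofs of \(A=A\), respectively two different proofs of \(A=B\)) would typically rely on a UIP-style axiom such as the one provided by Coq.Logic.Eqdep; the lemmas below are stated in forms provable without extra axioms.

```coq
(* Definition 16: transport a term along an equality of types. *)
Definition cast {A B : Type} (e : A = B) (a : A) : B :=
  match e in (_ = X) return X with
  | eq_refl => a
  end.

(* Fact 2 of Note 17, in the special case where the proof of A = A is eq_refl. *)
Lemma cast_refl (A : Type) (a : A) : cast eq_refl a = a.
Proof. reflexivity. Qed.

(* Fact 1 of Note 17: composing two casts is the cast along the composed equation. *)
Lemma cast_compose (A B C : Type) (e1 : A = B) (e2 : B = C) (a : A) :
  cast e2 (cast e1 a) = cast (eq_trans e1 e2) a.
Proof. destruct e1; destruct e2; reflexivity. Qed.

(* Fact 4 of Note 17: casting a function and then applying it agrees with
   applying the original function to a cast argument. *)
Lemma cast_app (A B C : Type) (e : A = B) (f : B -> C) (a : A) :
  cast (f_equal (fun X => X -> C) (eq_sym e)) f a = f (cast e a).
Proof. destruct e; reflexivity. Qed.
```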
Now we discuss how to utilize the facts we demonstrated in Note 17, by discussing their use in our proof that the horizontal associativity axiom (1 of Definition 3) is satisfied in \(\mathbf{Type}\). Our first step is to prove the following key equality lemma involving the **compose** function: \[\mathbf{compose}\left(\,\mathcal{C}\left(\,\mathbf{compose}\left(\,\mathcal{C}\,\alpha\,\right)\beta\,\right)\,\right)\gamma=\ \mathcal{C}\left(\,\mathbf{compose}\left(\,\mathcal{C}\left(\,\mathbf{compose}\left(\,\mathcal{C}\,\alpha\,\right)\gamma\,\right)\,\right)\beta\,\right). \tag{4}\] Here, \(\alpha,\beta,\gamma\) are terms of the appropriate types in the operad \(\mathbf{Type}\), and the type casting equations have been suppressed from the notation. We can show the equality in Equation 4 by induction on the appropriate lists and several uses of 4 of Note 17. Notice that Equation 4 is essentially the horizontal associativity axiom in \(\mathbf{Type}\), and this is the case: to prove the horizontal associativity axiom, we manipulate type casts using our work in Note 17 until they match the equality in Equation 4. Demonstrating the remainder of the axioms in Definition 3 for \(\mathbf{Type}\) within Coq follows a similar pattern: show the pattern for the given axiom holds for **compose**, and then manipulate the type casts appropriately using Note 17 to arrive at the desired axiom. ## 5 Related Work In [5], the authors present a formalization of a simpler type of an operad using Cubical Agda, which is an extension of Agda with Cubical Type Theory. Cubical Type Theory is an alternative to Homotopy type theory that is more directly amenable to constructive interpretations, so fully understanding the implementation of operads in [5] requires a working knowledge of a variant of Homotopy type theory, as well as how to use its implementation in Agda. Moreover, Agda does not have significant automation, so showing, for example, our proof in Section 4 would require _significantly_ more work. However, we do not think that our work would be impossible to translate into Agda; it would just require much more boilerplate code (e.g., handwriting structural induction tactics). We also want to compare what was formalized in our work to that of [5]. What they use in [5] to refer to an operad is an operad with a collection of types \(T\) for which \(|T|=1\). In particular, \(\mathcal{O}\binom{d}{\underline{c}}\) can be parametrized by the natural numbers, so we can write \(\mathcal{P}(n):=\mathcal{O}\binom{d}{\underline{c}}\), if \(|\underline{c}|=n\), where \(\underline{c}=c_{0},\ldots,c_{n-1}\), with \(c_{i}=d\). Moreover, there is a unique identity (\(\mathbb{1}\in\mathcal{P}(1)\)), the functions \(\circ_{i}\) have type \(\mathcal{P}(n)\to\mathcal{P}(m)\to\mathcal{P}(n+m-1)\), and there is a significant simplification of the associativity axioms in Definition 3. We also note that in [5] they define their singly-colored operads to have the _equivariance axiom_ given in [8].
2310.19679
Tidal effects up to next-to-next-to leading post-Newtonian order in massless scalar-tensor theories
In this article, we study the tidal effects in the gravitationally bound two-body system at next-to-next-to leading post-Newtonian order for spin-less sources in massless scalar-tensor theories. We compute the conservative dynamics, using both a Fokker Lagrangian approach and effective field theory with the PN-EFT formalism. We also compute the ten conserved quantities at the same NNLO order. Finally, we extend our results from simple ST theories to Einstein-scalar-Gauss-Bonnet gravity. Such results are important in preparation of the science case of the next generation of gravitational wave detectors.
Laura Bernard, Eve Dones, Stavros Mougiakakos
2023-10-30T15:59:23Z
http://arxiv.org/abs/2310.19679v1
# Tidal effects up to next-to-next-to leading post-Newtonian order in massless scalar-tensor theories ###### Abstract In this article, we study the tidal effects in the gravitationally bound two-body system at next-to-next-to leading post-Newtonian order for spin-less sources in massless scalar-tensor theories. We compute the conservative dynamics, using both a Fokker Lagrangian approach and effective field theory with the PN-EFT formalism. We also compute the ten conserved quantities at the same NNLO order. Finally, we extend our results from simple ST theories to Einstein-scalar-Gauss-Bonnet gravity. Such results are important in preparation of the science case of the next generation of gravitational wave detectors.
2310.01036
Generative AI for Integrated Sensing and Communication: Insights from the Physical Layer Perspective
As generative artificial intelligence (GAI) models continue to evolve, their generative capabilities are increasingly enhanced and being used extensively in content generation. Beyond this, GAI also excels in data modeling and analysis, benefitting wireless communication systems. In this article, we investigate applications of GAI in the physical layer and analyze its support for integrated sensing and communications (ISAC) systems. Specifically, we first provide an overview of GAI and ISAC, touching on GAI's potential support across multiple layers of ISAC. We then concentrate on the physical layer, investigating GAI's applications from various perspectives thoroughly, such as channel estimation, and demonstrate the value of these GAI-enhanced physical layer technologies for ISAC systems. In the case study, the proposed diffusion model-based method effectively estimates the signal direction of arrival under the near-field condition based on the uniform linear array, when antenna spacing surpassing half the wavelength. With a mean square error of 1.03 degrees, it confirms GAI's support for the physical layer in near-field sensing and communications.
Jiacheng Wang, Hongyang Du, Dusit Niyato, Jiawen Kang, Shuguang Cui, Xuemin Shen, Ping Zhang
2023-10-02T09:27:27Z
http://arxiv.org/abs/2310.01036v2
Generative AI for Integrated Sensing and Communication: Insights from the Physical Layer Perspective ###### Abstract As generative artificial intelligence (GAI) models continue to evolve, their generative capabilities are increasingly enhanced and being used extensively in content generation. Beyond this, GAI also excels in data modeling and analysis, benefiting wireless communication systems. In this article, we investigate applications of GAI in the physical layer and analyze its support for integrated sensing and communications (ISAC) systems. Specifically, we first provide an overview of GAI and ISAC, touching on GAI's potential support across multiple layers of ISAC. We then concentrate on the physical layer, investigating GAI's applications from various perspectives thoroughly, such as channel estimation, and demonstrate the value of these GAI-enhanced physical layer technologies for ISAC systems. In the case study, the proposed diffusion model-based method effectively estimates the signal direction of arrival under the near-field condition based on the uniform linear array, when antenna spacing surpassing half the wavelength. With a mean square error of 1.03 degrees, it confirms GAI's support for the physical layer in near-field sensing and communications. Generative AI, integrated sensing and communications, physical layer, diffusion model ## I Introduction Recently, the unprecedented growth in user data, together with the continuous advancement of AI-Generated Content (AIGC) models, have led to groundbreaking applications such as Google Bard and ChatGPT. As users increasingly benefit from these applications, their attention is concurrently shifting to the mechanism powering these applications, i.e., generative artificial intelligence (GAI) [1]. Unlike traditional AI models that prioritize sample analysis, training, and classification, GAI specializes in understanding and modeling the distribution of complex datasets. By leveraging statistical properties of the training data, GAI can generate data similar to the training data, manifesting in diverse formats like documents and images [2]. For example, the diffusion model-based ControlNet [3] can efficiently generate images with outstanding quality, in terms of resolution, saturation, and naturalness, according to the user prompts, which demonstrates greater flexibility and higher efficiency compared to traditional content generation methods. In the context of the rapidly evolving of wireless network services, GAI is poised to meet the various and ever-changing demands of users. Indeed, not only content generation, GAI's inference capability has catalyzed research across various domains. For example, researchers in [4] introduce a generative adversarial networks (GANs) based architecture to learn the channel transition probabilities (CTP) from the observations, thereby helping achieve the maximum likelihood sequence detection. Additionally, in device-to-device (D2D) communications, an incentive mechanism based on contract theory is proposed to facilitate user information sharing, in which the diffusion model is employed to generate optimal contract designs [2]. While there have been attempts to integrate GAI into wireless communication, they remain limited, especially when considering emerging technologies like extremely large-scale multiple-input-multiple-output (XL-MIMO), near-field communications, and integrated sensing and communication (ISAC) [5]. 
For instance, ISAC encompasses both communication and sensing modules, as shown in Fig. 1, and each module has specific demands for bandwidth, power, and other resources. This complexity imposes new challenges in designing efficient wireless resource allocation strategies at the network layer to balance sensing and communication. Moreover, physical layer technologies such as antenna array and waveform design are also crucial for ISAC systems. For communication purposes, enhancing transmission reliability in multipath fading channels necessitates large antenna spacing to ensure independent signals across antennas. On the other hand, for sensing, estimating key parameters including the direction of arrival (DoA) of signal waves usually require antenna spacing to be less than or equal to half the wavelength to avoid ambiguities. These conflicting requirements introduce new challenges in the design of the antenna array for ISAC systems. Fortunately, the emerging of GAI and its recent applications in wireless communications, provide a promising way for resolving these dilemmas. Therefore, an in-depth investigation into the applications of GAI in ISAC systems, particularly in the physical layer, is imperative. Recognizing the challenges outlined above, this article conducts an extensive investigation on the application of GAI in the physical layer and the corresponding potential support for ISAC systems. Concretely, we first present an overview of five major GAI models and the ISAC system. After that, we thoroughly analyze the potential support of these GAI-enhanced physical layer technologies for ISAC from both sensing and communication perspectives. Finally, we provide a practical use case to explain how GAI can be used to tackle challenges in signal DoA estimation during sensing, a critical component of ISAC. Overall, the contributions of this article are summarized as follows. * We conduct a review of five major GAI models and the ISAC system. Building on this, we analyze the potential applications of the GAI models in the physical, network, and application layers of the ISAC system, providing comprehensive insights of emerging sensing, localization, and communication technologies. * From different perspectives such as beamforming and signal detection, we investigate how GAI models enhance physical layer technologies. Subsequently, we analyze the support that these GAI-enhanced physical layer technologies provide for communication and sensing in ISAC systems, outlining technical issues and viable solutions. * We propose a signal spectrum generator (SSG) to tackle the near-field DoA estimation problem when antenna spacing exceeds half the wavelength. Experimental results reveal that SSG yields a mean square error (MSE) of around 1.03 degrees in DoA estimation, confirming SSG's effectiveness while highlighting the importance of integrating GAI into the ISAC physical layer. ## II Overview of Generative AI and ISAC This section first introduces the concepts of GAI and presents five representative GAI models. Following that, we introduce ISAC and generally explain GAI's potential support for ISAC systems from the physical, network, and application layers. ### _Generative AI_ GAI refers to a specific category of AI models that is trained on large datasets to learn the inherent patterns of the data distribution. Once trained, they can generate new data that is similar yet distinct from the training data, facilitating content production. 
Compared to the traditional AI models, the GAI models hold better ability to understand and capture the distribution of the complex and high-dimensional data [1]. Hence, Fig. 1: The role of GAI in the physical layer and its support for ISAC applications. The GAI models can be utilized to enhance several physical layer technologies, including channel state information (CSI) compression and signal detection. On this basis, the GAI enhanced physical layer technologies can further augment ISAC system performance across various applications, such as indoor human detection and outdoor vehicle to vehicle communication. they find several applications across various fields. Among different GAI models, GANs, normalizing flows (NFs), variational autoencoders (VAEs), diffusion models (DFMs), and Transformers not only excel in generating digital content but also demonstrate significant applicability in the physical layer of wireless communications. * GANs (Fig. 1-part A) consist of a generator and a discriminator that compete during training, aiming for a particular equilibrium. The training is completed when the discriminator cannot differentiate between real and fake data. After that, the generator can produce similar, yet new data in a parallel manner. However, the training process is complex, as finding the equilibrium is harder than optimizing an objective function. * NFs (Fig. 1-part B) use invertible transformations to map basic distributions to target spaces for detailed analysis. These transformations create a flow that can be reversed, facilitating likelihood estimation. NFs can sample from complex probability distributions, which is useful for the unanalyzable data. However, many transformations may make the training process time-consuming. * VAEs (Fig. 1-part C) are neural networks designed to compress and reconstruct data. Unlike traditional autoencoders, VAEs can model the latent distribution and sample from the modeled space, benefiting data dimension reduction and feature extraction. Additionally, they can estimate the uncertainty in predictions and generate plausible outputs for a given input. However, generated samples are not always interpretable, as they are derived from the latent space. * DFMs (Fig. 1-part D) have attracted significant attention due to their image generation capabilities. During the training, DFMs corrupt training data with random noise and subsequently denoise the data to learn optimal hyperparameters. Once trained, they apply the learned parameters in the denoising process to generate samples. DFMs can be trained on incomplete datasets with a stable process, but inference requires many steps, making them less efficient for generating large datasets. * Transformers (Fig. 1-part E) are neural network architectures based on the self-attention mechanism, which can model long-range dependencies between elements in the input sequence and support parallel sequences processing, suitable for tasks involving substantial sequence data. Their design needs minimal inductive biases and is inherently suited for set-functions, enabling them to process multiple modalities using similar processing blocks. Besides the above-mentioned models, there are some other GAI models, such as multimodal models, used in ChatGPT. These models possess strong data analysis and modeling capabilities, making them advantageous for incorporation into communication system designs. 
### _Integrated Sensing and Communication_ ISAC focuses on integrating wireless sensing and communication into a unified system. This aims at the efficient use of limited resources, while facilitating both functions [5]. From the physical layer, ISAC can be broadly classified into non-overlapping and overlapping systems. Specifically, non-overlapping systems include time-division, frequency-division, and space-division ISAC. For example, time-division ISAC allocates distinct signals to individual time slots for either sensing or communication tasks, allowing them to use their preferred waveforms. The overlapping systems can be divided into sensing-centric, communication-centric, and joint designs. For example, the communication-centric design can be achieved by appropriately modifying existing communication systems, and a representative example is Wi-Fi sensing. Compared to traditional wireless communication and sensing systems, the ISAC systems offer several advantages. * Higher efficiency: By allowing communication and sensing to share resources, ISAC boosts the overall efficiency of wireless networks. * Lower cost: By eliminating the need for separate communication and sensing modules, ISAC lowers both hardware and power consumption costs for wireless devices. * More versatile services: ISAC is capable of fulfilling users' communication requirements while concurrently offering sensing function, allowing it to deliver more services. Benefiting from these advantages, ISAC systems can be applied across various scenarios and are thus considered one of the core technologies for future 6G networks. ### _Potential Applications of GAI in ISAC Systems_ As aforementioned, we can see that GAI can serve ISAC systems from multiple perspectives. This can be broadly categorized into the physical, network, and application layers. * **Physical layer:** GAI can be employed for channel estimation, anomaly signal identification, encoding, beamforming, etc, as shown in Fig. 1. These GAI-enhanced physical layer technologies can improve the communication performance (e.g., reducing bit error rate (BER)) and enhancing the sensing accuracy (e.g., optimizing signal beams to increase target detection accuracy while avoiding interference in ISAC systems). * **Network layer:** GAI can be utilized for designing resource allocation strategies, scheduling schemes, and incentive mechanisms, which could not only lower the system cost but also boost the operation efficiency. While methods such as deep reinforcement learning (DRL) are applicable here, GAI has been shown to be more effective in tasks like resource allocation [2]. * **Application layer:** GAI can be used to offer support in data generation, analysis, and feature extraction for various ISAC applications. This support not only facilitates in-depth analysis of communication or sensing data but also generates a substantial amount of data for both communication and sensing model training, which is difficult for other existing AI models. In Table I, we summarize the above mentioned GAI models and their potential support for ISAC systems. Next, we detail GAI's applications in the physical layer. ## III GAI-Enhanced Physical Layer Technologies for ISAC The physical layer includes several key technologies such as codebook design and channel estimation. In this section, we investigate how GAI strengthens various physical layer technologies and discuss their potential support for ISAC systems from both sensing and communication perspectives. 
### _From Communication Perspective_ #### Iii-A1 **Signal Detection** Detecting signals in cases with unpredictable noise is challenging. NFs can infer latent variables, offering an effective solution. Hence, the authors in [6] propose a probabilistic machine-learning detection framework that employs NFs to approximate the unknown system noise in MIMO systems without any prior information. This approximation is driven by unsupervised learning with only noise samples, which is difficult to achieve with traditional AI models. Evaluations show that this framework not only stands out in terms of BER in environments with unanalyzable noise, but also reaches close to the maximum likelihood bound in environments with predictable noise. Besides NFs, other GAI models like GANs and VAEs can be also used for signal detection. In ISAC systems, the integration of communication and sensing creates more complex noise, additionally, differences in signal waveforms and other aspects between these two modules could exacerbate the issue. Therefore, NFs can also be employed to model the unknown noise, improving signal detection capability of ISAC systems. #### Iii-A2 **Secure Transceiver Design** The complexity of ISAC architectures and channel models complicates the design of security technologies. With the ability of processing complex data, VAEs can automatically manage codeword variation, which can be modeled as noise during transmission, making VAEs suitable for building secure transceiver pairs. In [7], the authors modify the VAE loss function at the receiver to include a security term, enhancing the receiver security. The unsupervised training is further used to strengthen the robustness against random codeword variation. In the case of imperfect CSI with the signal-to-noise ratio (SNR) range from -5 dB to 10 dB, the BER of this method at the eavesdropper is 0.05 higher than that of the autoencoder based on traditional neural networks. The same approach can be integrated into ISAC systems to enhance the security of the receiver and the robustness to codeword variations. However, when sensing and communication share the receiver, it is crucial to consider how adding the security term to a loss function might affect the sensing module. #### Iii-A3 **Sparse Code Multiple Access** In ISAC, various smart devices like unmanned aerial vehicles participate in communication and sensing, causing severe interference among devices. To mitigate this, combining GAI models with non-orthogonal multiple access (NOMA) techniques is a promising solution. The authors in [8] introduce a GAN-based sparse code multiple access (SCMA) encoding and decoding approach. At the SCMA encoder, the generator is used to shorten the sequences in the information processing phase. Additionally, a noise layer is introduced to ensure a robust representation of the encoder output, thereby improving the noise immunity. At the decoder, PatchGAN serves as the discriminator to reduce both model parameters and computational load. Besides, an attention mechanism is inserted between the GAN's generator and discriminator to enhance the BER performance. Such designs can offer better connectivity of various smart devices involved in communication for ISAC, ensuring that control, scheduling, and other information can be timely transmitted to each device. #### Ii-A4 **Joint Source-Channel Coding** Coding is crucial for mitigating channel noise and interference, making it essential for communication of ISAC. 
Joint source-channel coding (JSCC) is an effective encoding method, but the complexity and discontinuity of the source data distribution present design challenges. To address this, in [9], the authors employ the VAE encoder to transform source data into a low-dimensional latent space and use the decoder to revert it to the original data for JSCC. During this process, one of multiple encoders is selected for transmission to tackle the issue of discontinuous projection. The evaluations show that the average peak SNR (PSNR) of the proposed method is nearly 3 dB higher than traditional methods based on convolutional neural networks. In ISAC systems where communication and sensing modules have independent encoding requirements and the channel is modeled as an additive Gaussian noise channel, such a method can directly contribute to the JSCC efficiency of communication module in ISAC. ### _From Sensing Perspective_ #### Ii-B1 **CSI Compression** Sensing in ISAC may need a significant amount of CSI from multiple antennas and frequencies, especially in systems like Wi-Fi based sensing. Hence, efficient compression, which facilitates the CSI storage and transmission, is essential. Given the superiority over traditional multi-layer perceptrons when output dimensionality far exceeds input, GANs are a preferred choice for CSI compression. In [10], the authors use the CSiNet encoder at the transmitter to compress original CSI into a low-dimensional vector. Then, at the receiver, a deep convolutional GAN decoder reconstructs the original CSI from this compressed vector with the discriminator assessing its quality. The evaluations show that the normalized MSE of the proposed method is -7.05 dB, which is lower than -2.46 dB of CS-CsiNet based on deep learning, when the compression ratio is 1/64. Besides GANs, VAEs are also suitable for this task. These CSI compression methods show excellent reconstruction accuracy across varying compression ratios, providing support to reduce the overhead of CSI transmission and storage. #### Ii-B2 **Beamforming** Beamforming is a critical element in ISAC systems, significantly affecting the scanning accuracy in sensing tasks. Adaptive beam alignment remains a central challenge in this area. To address this, the authors in [11] introduce a VAE based dual timescale learning and adaptation framework. For the long timescale, a deep recurrent VAE (DR-VAE) is proposed to learn a probabilistic model of beam dynamics based on noisy beam-training observations. For the short timescale, an adaptive beam-training procedure is formulated as a partially observable Markov decision process. This is then optimized using point-based value iteration by leveraging both beam-training feedbacks and probabilistic predictions of the strongest beam pair provided by the DR-VAE. The proposed DR-VAE approach achieves near-optimal spectral efficiency, with a gain of 85% over a conventional strategy that scans exhaustively over the dominant beam pairs. In ISAC, such a method not only minimizes the overhead associated with beam alignment during sensing process, but also boosts spectral efficiency, thereby increasing communication throughput. #### Ii-B3 **Channel Estimation** Channel estimation is important for sensing reliability, particularly in sensing systems that rely on CSI. Diffusion models, excel at learning high-dimensional gradients and model the log distribution of the data, are well-suited for modeling high-dimensional millimeter-wave MIMO channels. 
In [12], the authors introduce a MIMO channel estimation method using score-based diffusion models. They first train a score-based generative model in an unsupervised manner using a database of known channels, which is independent of pilot symbols. Then, annealed Langevin dynamics is used for channel estimation by sampling from the posterior distribution. Compared to conventional supervised deep learning methods, this approach can offer a gain of up to 5 dB for the end-to-end coded communication system. More importantly, within ISAC systems, this approach holds the potential to solve the problem of estimating the channel in an out-of-distribution setting, i.e., in environments not seen during training, thereby providing more robust data support for CSI-based sensing in complex channel conditions. #### Ii-B4 **Signal Enhancement** Signal parameter estimation is crucial for wireless sensing in ISAC systems, as it provides valuable observations for tasks like target detection and localization. Estimating signal parameters in low-SNR conditions is particularly challenging. One effective strategy to address this issue is to improve the SNR using the generative capabilities of GAI models. Hence, in [13], the authors convert low-SNR complex signals into images. Then, they employ a U-Net structure as the GAN's generator to encode these images, effectively boosting the SNR. After that, PatchGAN, i.e., the discriminator, assesses the quality of the enhanced image. This approach successfully increases the SNR of the signal, thereby yielding more accurate parameter estimation. Adapting this concept to ISAC, incomplete and low-SNR signals can be converted into images. GAI models, once trained, can then refine these images, effectively boosting the signal SNR and thereby improving parameter estimation and sensing performance. Besides the aforementioned applications, GAI can also be applied to sensing signal processing. For instance, in [14], a Transformer is used to capture inter-feature correlations among received signal strength observations, thereby boosting the multi-target localization capability. We summarize the above observations in Table II. ### _Discussion_ So far, various GAI models have been integrated into the physical layer, offering potential support for both the communication and sensing functions of ISAC systems from diverse perspectives. From the above investigations, we can see that the designs leverage the following prominent capabilities of GAI: * **Capability of capturing complex data distributions.** For intricate datasets with complex distributions that are difficult, or even impossible, to analyze directly, such as the noise and dynamic features of users, GAI models can be employed to capture their latent distributions. On this basis, the acquired distributions can be sampled, thereby supporting the corresponding physical layer technologies, such as signal detection in systems with complex noise and beam prediction in dynamic environments. * **Capability of transforming and processing data across various dimensions.** For high-dimensional data, GAI models can reduce their dimensionality through encoding and subsequently decode them to recover the original high-dimensional data. This facilitates the efficient compression, storage, and transmission of high-dimensional data within the ISAC system. 
For data with simpler distributions, GAI models can project them to more complex target spaces, thereby aiding in more efficient sampling and more accurate density estimation. * **Capability of restoring and enhancing data**. For data in the ISAC system with a low SNR, such as the covariance matrix of received signals with low SNR as mentioned earlier, GAI models can effectively restore them. This restoration contributes to enhanced outcomes in subsequent stages, like more precise parameter estimation. Moreover, the generative capabilities of GAI can also recover incomplete data, ensuring that the subsequent processing can be effectively carried out. ## IV Case Study Signal DoA estimation, which helps in identifying the location of the signal source, is crucial in both near-field and far-field ISAC systems. Besides, it also facilitates beamforming, enhancing the active near-field communication (NFC) [15]. However, when the antenna spacing exceeds half the wavelength (i.e., \(\lambda\)), the DoA estimation becomes challenging due to phase ambiguity. In this section, we show how to use GAI, i.e., diffusion models, to address this challenge, thereby providing support for near-field ISAC. ### _Problem Description_ Based on a uniform linear array (ULA), the DoA estimation relies on the phase difference across signals received by adjacent antennas in the array. This phase difference is a function of both the distance and direction of the signal source relative to the ULA. In the near-field context, the phase difference has a one-to-one correspondence with the DoA when the antenna spacing is less than or equal to \(0.25\lambda\), allowing for effective DoA estimation, as shown by the clear signal spectrum in Fig. 2. However, as the antenna spacing expands, for instance, to \(\lambda\), the increased propagation path length causes the phase ambiguity, leading to an ambiguous spectrum, as shown in Fig. 2. Under these conditions, the system is unable to identify the true signal DoA, which affects subsequent tasks like localization and beamforming. ### _Proposed Design_ Essentially, the signal spectrum is a matrix and the distribution of data in it describes the signal DoA in the near-field. When the antenna spacing in the array exceeds half the wavelength, the signal spectrum becomes ambiguous, indicating a change in the data distribution and leading to an inability to recognize the correct signal DoA. The diffusion model's powerful inference capabilities can be employed to explore the relationship between ambiguous and clear signal spectra. Thereby, we propose a diffusion model-based signal spectrum generator (SSG), the architecture of which is illustrated in Fig. 3. In the near-field scenario (shown in Fig. 2) with \(N=4\) and \(d=0.5\lambda\), we produce 10,000 paired signal spectra via simulation, assigning 80% for training and 20% for testing. During the simulation, the number of signal sources is fixed at 3, with the corresponding DoAs and ranges randomly generated within 0-180 degrees and 0-6\(\lambda\), respectively, and the SNR is also randomly generated between 0-5 dB. To ensure data consistency, the ambiguous spectra are obtained via DoA estimation using the signals captured by antennas with odd index (corresponding to an antenna spacing of \(\lambda\)), while signals from antennas 3 to 7 generate the corresponding correct spectra. Subsequently, the ambiguous spectrum serves as the observation, while the correct spectrum acts as the expert solution for training the SSG. 
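As a concrete illustration of the phase-ambiguity problem and of how paired spectra can be simulated, the sketch below (ours, not the article's code) builds exact near-field steering vectors for a \(2N+1\)-element ULA and evaluates a simple conventional-beamforming spectrum for two antenna spacings. The wavelength, source position, SNR, and number of snapshots are assumed values for illustration, and the matched-filter spectrum stands in for whatever estimator the article actually used.

```python
# Near-field DoA spectrum sketch for a (2N+1)-element ULA (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0                                   # wavelength (arbitrary units)
N = 4                                       # elements indexed -N..N, i.e. 2N + 1 = 9 antennas
theta0, r0 = np.deg2rad(60.0), 3.0 * lam    # assumed true DoA and range of a single source

def steering(theta, r, d):
    """Exact near-field steering vector from the element-wise path-length difference."""
    n = np.arange(-N, N + 1)
    r_n = np.sqrt(r**2 + (n * d) ** 2 - 2.0 * r * n * d * np.cos(theta))
    return np.exp(-2j * np.pi * (r_n - r) / lam)

def spectrum(d, snapshots=200, snr_db=5.0):
    """Conventional (matched-filter) beamforming spectrum over DoA at the true range."""
    a0 = steering(theta0, r0, d)
    s = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
    noise = 10 ** (-snr_db / 20) * (rng.standard_normal((2 * N + 1, snapshots))
                                    + 1j * rng.standard_normal((2 * N + 1, snapshots))) / np.sqrt(2)
    x = np.outer(a0, s) + noise                           # received snapshots (antennas x time)
    R = x @ x.conj().T / snapshots                        # sample covariance
    thetas = np.deg2rad(np.linspace(0.0, 180.0, 361))
    p = np.array([np.real(steering(t, r0, d).conj() @ R @ steering(t, r0, d)) for t in thetas])
    return np.rad2deg(thetas), p

for d in (0.25 * lam, 1.0 * lam):                         # unambiguous vs. ambiguous spacing
    deg, p = spectrum(d)
    print(f"d = {d:.2f} lam: spectrum peak at {deg[np.argmax(p)]:.1f} deg (true DoA 60.0 deg)")
```

With \(d=0.25\lambda\) the spectrum has a single dominant peak at the true DoA, while with \(d=\lambda\) grating ambiguities produce additional peaks of comparable height, which is the ambiguity the SSG is trained to remove.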
During training, the SSG adds noise to the expert solution and subsequently denoises it step by step, as shown in Steps 4 and 5, refining the denoising network parameters along the way. These parameters establish the denoising criteria, guiding the diffusion model's inference based on the observations. Therefore, after training, the SSG can generate the expert solutions (clear signal spectra) from the given observations (ambiguous signal spectra) via the trained denoising network. ### _Performance Evaluation_ Part (i)-a of Fig. 4 presents the test reward curve. According to the results, the difference between the solution generated by the SSG and the real expert solution gradually narrows over training, indicating that the SSG can learn the denoising network's parameters through the noising and denoising processes and can subsequently utilize the denoising network to generate the corresponding expert solution. Furthermore, the test reward of the SSG stabilizes around -10, better than the -80 of the DRL-based method [2], indicating that the SSG outperforms DRL in signal spectrum reconstruction. This may be because DRL struggles to focus on the key points in the spectrum corresponding to the DoAs, thereby failing to effectively learn the correct solution. The results in Part (ii) of Fig. 4 illustrate the expert solution generation process. As can be seen, the trained SSG can effectively produce the expert solution through sequential denoising based on the ambiguous spectrum in Fig. 4 Part (i)-b. The generated signal spectrum in Part (i)-d shows that the DoAs of the three signal sources are 31, 99, and 146 degrees, which closely align with the expert solution of 30, 99, and 146 degrees shown in Part (i)-c. Based on the generated signal spectrum and the corresponding ground truth, we observe that the SSG achieves a DoA estimation MSE of about 1.03 degrees. This further proves that the SSG can effectively produce the clear signal spectrum, which can be leveraged to both improve the energy efficiency of beamforming and reduce communication power consumption. In addition, we investigate the impact of the SSG on localization performance. During the test, we assume that the range is accurately estimated, and the system uses the DoAs of the three peak points with the largest amplitude, together with the ranges, to achieve localization. In Part (i)-e, the results indicate a median localization error of about \(1.25\lambda\) without using the SSG. However, the use of the SSG reduces this error to around \(0.21\lambda\). This is intuitive, as ambiguous spectra can lead the system to perform localization using an incorrect DoA, causing notable errors. Fig. 2: The cause of the ambiguous signal spectrum and its impact on applications. Here, \(\lambda\) is the signal wavelength, \(d\) is the antenna spacing, \(\theta\) is the DoA, \(r\) is the distance between the signal source and the reference antenna, and \(2N+1\) is the total number of antennas. When \(d\) is less than half of \(\lambda\), the signal DoA can be accurately estimated. However, as \(d\) increases, for instance, to \(\lambda\) or \(1.5\lambda\), the signal spectrum becomes ambiguous, obstructing the identification of the true signal DoA and subsequently impacting further operations such as localization and beamforming. ## V Future Directions ### _GAI Application Security_ While GAI has demonstrated its potential in the physical layer, it also poses certain risks. 
For instance, attacks on the training datasets can lead to training non-convergence or even failure, thereby wasting significant computational resources. Attacks on the GAI model itself could cause more severe consequences, such as ineffective channel estimation and coding, ultimately impacting the ISAC performance. Hence, future research should address these security issues from both the dataset and model perspectives. Blockchain technology can ensure data authenticity and provider reliability while offering unified management of multi-party data, hence serving as one effective approach to resolving these security issues. ### _Resource Allocation_ The training and operation of GAI models consume computational, storage, and communication resources, disrupting the resource balance of the original system. Hence, integrating GAI models into the physical layer necessitates reallocating resources to ensure stable system operation. When local resources are abundant, strategies should be developed to maximize benefits while minimizing resource consumption based on task complexity and real-time requirements. When local resources are constrained, incentive mechanisms, such as dynamic spectrum access, should be considered to first ensure functional effectiveness and then maximize benefits. ### _Cell Free ISAC_ The decentralized architecture of cell-free massive MIMO effectively reduces the distance between the access point and the user, thereby minimizing path loss. This configuration is naturally conducive to using millimeter-wave and terahertz frequencies to improve ISAC performance. Within this framework, GAI can be utilized to optimize factors such as precoding and combining. This integration has the potential to generate high-gain, narrow beams in a mobile cell-free setting, further enhancing the efficacy of both target tracking and high-capacity wireless fronthaul. Fig. 3: The structure of the proposed SSG. In Step 1, the current observation, i.e., the ambiguous signal spectrum, is obtained. In Step 2, the corresponding expert solution is obtained. Steps 3-6 detail the training process via forward and backward diffusion. Using the expert solution, the loss function is designed to minimize the discrepancy between the generated signal spectrum and the expert solution. ## VI Conclusion In this article, we investigated GAI's use in the physical layer from various perspectives. We concluded that these applications primarily leverage GAI's capabilities in complex data feature extraction, transformation, and enhancement. Subsequently, we analyzed how GAI-enhanced physical layer technologies can potentially support ISAC systems, considering both sensing and communication aspects. In the case study, we introduced the diffusion-model-based SSG. Operating in the physical layer, the SSG addresses the DoA estimation problem that arises when the array spacing exceeds half the wavelength. These insights emphasize the crucial role of GAI in the ISAC physical layer and the pressing need for further exploration of its applications.
2301.01763
Statics and Dynamics of non-Hermitian Many-Body Localization
Many-body localized phases retain memory of their initial conditions in disordered interacting systems with unitary dynamics. The stability of the localized phase due to the breakdown of unitarity is of relevance to experiment in the presence of dissipation. Here we investigate the impact of non-Hermitian perturbations on many-body localization. We focus on the interacting Hatano-Nelson model which breaks unitarity via asymmetric hopping. We explore the phase diagram for the mid-spectrum eigenstates as a function of the interaction strength and the non-Hermiticity. In contrast to the non-interacting case, our findings are consistent with a two-step approach to the localized regime. We also study the dynamics of the particle imbalance. We show that the distribution of relaxation time scales differs qualitatively between the localized and ergodic phases. Our findings suggest the possibility of an intermediate dynamical regime in disordered open systems.
József Mák, M. J. Bhaseen, Arijeet Pal
2023-01-04T18:58:17Z
http://arxiv.org/abs/2301.01763v3
# Statics and Dynamics of non-Hermitian Many-Body Localization ###### Abstract Many-body localized phases retain memory of their initial conditions and have been shown to exist in disordered interacting systems with unitary dynamics. The stability of the localized phase due to the breakdown of unitarity is of relevance to experiment when coupling to the environment is taken into account. Motivated by this, we investigate the impact of non-Hermitian perturbations on many-body localization. We focus on the interacting Hatano-Nelson model which breaks unitarity _via_ asymmetric hopping. We explore the phase diagram for the mid-spectrum eigenstates as a function of the interaction strength and the non-Hermiticity. In contrast to the single-particle case, our findings are consistent with a two-step approach to the localized regime. We also study the dynamics of the particle imbalance starting from an initial density wave, focusing on disorder realizations with complex eigenvalues. The time to reach the steady state in the localized phase is clustered around the inverse of the extremal eigenvalue, in contrast to the ergodic phase which shows a broader distribution of timescales. Our findings suggest the possibility of novel dynamical regimes in disordered open systems. The investigation of isolated quantum systems has led to a remarkable understanding of novel states of matter [1; 2]. However, naturally occurring and engineered quantum systems are typically coupled to an environment, even if the coupling is weak. The dynamics of such open systems can be described by an effective non-Hermitian Hamiltonian which breaks unitarity. These have been realized in pioneering experiments on photonic [3; 4; 5; 6] and matter-light [7; 8; 9; 10; 11] systems. The study of non-Hermitian Hamiltonians allows an enriched classification scheme for describing quantum matter [12; 13]. For example, non-Hermitian systems with parity and time-reversal symmetry can have real eigenvalues, much like their Hermitian counterparts [14; 15; 16]. Non-Hermitian perturbations can also provide insight into the sensitivity of isolated systems to environmental effects. They can also lead to novel phases and phase transitions beyond the equilibrium paradigm [11; 17; 18; 19]. For a review of the applications of non-Hermitian systems see [20]. An important class of isolated quantum systems are those that fail to equilibrate on long timescales. This includes many-body localized (MBL) phases which retain memory of their initial conditions, as observed in analog and digital quantum simulators [21; 22; 23; 24; 25; 26; 27]. The fate of MBL in open systems has also been investigated in cold atom settings via a controlled coupling to the environment [28; 29; 30; 31; 32; 33; 34]. This is of considerable interest in solid state devices due to their intrinsic coupling to other degrees of freedom [35; 36]. The stability of MBL has also been studied [37; 38] including the effects of local dissipation in the Lindblad formalism [39; 40; 41; 42; 28; 43; 44; 29], and by coupling to delocalized environments; see for example [34; 43]. In the context of non-Hermitian MBL, pioneering studies have focused on the link between the phase diagram and spectral properties [44], together with mobility edges [45; 46]. Extensions of these investigations have also considered the effects of quasiperiodic potentials [47; 48]. In this work we explore the phase diagram of the interacting Hatano-Nelson model as a function of non-Hermiticity and the interaction strength. 
We provide numerical evidence for a dynamical instability occurring within the MBL phase together with predictions for the non-equilibrium dynamics. Figure 1: Phase diagram of the interacting Hatano-Nelson model as a function of the non-Hermiticity \(g\) and the disorder strength \(h\), with \(J=1\) and \(U=2\). The results are obtained using exact diagonalization (ED) with \(L=8,10,12,14,16\) sites. The left boundary (blue line) corresponds to the instability \(\mathcal{G}\) of real eigenstates to a non-Hermitian perturbation, as described by Eq. (2). The right boundary (black line) indicates the spectral feature where the fraction of complex energy eigenvalues \(f_{C}\) goes from increasing to decreasing with increasing \(L\). Region I corresponds to unstable real eigenstates and increasing \(f_{C}\). Region II corresponds to stable real eigenstates and increasing \(f_{C}\). Region III corresponds to stable real eigenstates and decreasing \(f_{C}\). The error bars are inferred from the change in the transition points using different system sizes. _Model:_-- We consider an interacting version of the one-dimensional Hatano-Nelson model [49; 50; 51; 52; 53] with \(L\) sites and periodic boundary conditions \[\hat{H}=\sum_{i=1}^{L}\left[-J\left(e^{-g}\hat{b}_{i+1}^{\dagger}\hat{b}_{i}+e^{ g}\hat{b}_{i}^{\dagger}\hat{b}_{i+1}\right)+U\hat{n}_{i}\hat{n}_{i+1}+h_{i}\hat{n}_{ i}\right], \tag{1}\] where \(J\) is the hopping strength, \(g\) parametrizes the hopping asymmetry, \(h_{i}\) represents on-site disorder and \(U\) is the nearest-neighbor interaction strength. For simplicity we consider hard-core bosons \(\hat{b}_{i}\) and \(\hat{b}_{i}^{\dagger}\) where \(\hat{n}_{i}=\hat{b}_{i}^{\dagger}\hat{b}_{i}\). The disorder is drawn from a uniform distribution \(h_{i}\in[-h,h]\). The asymmetry in the hopping terms renders the model non-Hermitian when \(g\neq 0\). Throughout this work we consider half-filling and set \(J=1\). Before embarking on a detailed study of the model (1) we first discuss some limiting cases. In the non-interacting limit with \(U=0\), all states are localized for \(g=0\)[54]. However, for \(g\neq 0\) it undergoes a delocalization transition [49; 50; 51; 52; 53]. In the Hermitian case (\(g=0\)) with \(U\neq 0\), the model exhibits a many-body localization (MBL) transition [55; 56; 57; 58; 35]. In the clean limit (\(h=0\)) the model exhibits \(\mathcal{PT}\)-symmetry breaking transitions in both the ground state and excited state sectors with only the former surviving to the thermodynamic limit [59]. In this paper we study the interplay between non-Hermiticity and MBL by considering both the mid-spectrum states of the model (1) and its dynamics. To obtain the mid-spectrum states we employ exact diagonalization (ED) and retain a total of \(N_{T}=\lceil 0.04\mathcal{N}\rceil\) eigenstates (rounded up to the nearest integer) which are closest to the mid-point of the spectrum, \(Tr(H)/\mathcal{N}\); here \(\mathcal{N}=\binom{L}{L/2}\sim 2^{L}/\sqrt{L}\) is the dimension of the Hilbert space. _Symmetry._-- The non-Hermitian Hamiltonian (1) exhibits time-reversal symmetry which ensures that the eigenvalues are either real or occur in complex conjugate pairs [60; 44]. In the non-interacting limit with \(U=0\) it has been shown that the localized eigenstates correspond to purely real eigenvalues and the delocalized states correspond to complex conjugate pairs [49; 50; 51; 52; 53]. At large disorder all states are localized for \(U=0\), corresponding to an entirely real spectrum. 
In contrast, in the interacting case it is only the _fraction_ of complex eigenvalues that goes to zero. In fact, the number of complex eigenvalues remains non-zero and grows exponentially with increasing system size, as shown in Fig. 2. For weak disorder the number of complex eigenvalues \(N_{C}\) approaches the total number of computed eigenvalues \(N_{T}\sim 2^{L}/\sqrt{L}\) but the number of real eigenvalues \(N_{R}\) remains non-zero, see Fig. 2(a). Conversely, in the strong disorder regime the number of real eigenvalues approaches the total number of eigenvalues but the number of complex eigenvalues remains non-zero, see Fig. 2(b). As we discuss more fully below, in the many-body problem localized eigenstates may correspond to complex eigenvalues. Figure 2: Exponential growth of the number of (a) complex eigenvalues \(N_{C}\) and (b) real eigenvalues \(N_{R}\) for the non-Hermitian Hamiltonian (1) as a function of the system size \(L\). We set \(J=1\), \(U=2\) and \(g=0.5\). The black dashed line corresponds to the total number of computed eigenpairs \(N_{T}\sim 2^{L}/\sqrt{L}\). Each data point is computed from \(2\times 10^{4}\) disorder realizations. _Spectral Transition._-- Following Ref. [44] we first consider the behavior of the fraction of complex eigenvalues \(f_{C}=\overline{N_{C}/N_{T}}\) with increasing disorder strength, where the overbar denotes disorder averaging. As shown in Fig. 3(a) the spectrum of the Hamiltonian undergoes a transition at a critical disorder strength where \(f_{C}\) goes from increasing to decreasing with increasing system size [44]. However, the number of complex eigenvalues \(N_{C}\) does not go to zero with increasing \(L\) in the strong disorder regime. This is in contrast to the non-interacting (\(U=0\)) case [52]. In Fig. 1 we plot the evolution of this boundary as a function of the non-Hermiticity \(g\) and the disorder strength \(h\). It can be seen that at large non-Hermiticity the transition occurs for larger disorder strength. _Eigenstate Transition._-- Having established the locus of the spectral transition, we now turn our attention to the stability of the eigenstates. In the Hermitian case, one of the hallmarks of localized eigenstates is their stability to local perturbations [56]. An extension of this to the non-Hermitian case was suggested in Ref. [44]. First, we denote the left and right eigenstates of the non-Hermitian Hamiltonian (1) by \(\left|E_{k}\right\rangle_{R}\) and \(\left|E_{k}\right\rangle_{L}\). These satisfy \(\hat{H}\left|E_{k}\right\rangle_{R}=E_{k}\left|E_{k}\right\rangle_{R}\) and \({}_{L}\left\langle E_{k}\right|\hat{H}={}_{L}\left\langle E_{k}\right|E_{k}\) with the complex eigenenergies \(E_{k}=\mathcal{E}_{k}+i\Lambda_{k}\). Denoting the eigenstates which correspond to purely real eigenenergies by \(\left|\mathcal{E}_{k}\right\rangle_{R}\) and \(\left|\mathcal{E}_{k}\right\rangle_{L}\) we examine the quantity \[\mathcal{G}(h)=\overline{\ln\left|\frac{{}_{L}\left\langle\mathcal{E}_{k+1}|\hat{V}_{NH}|\mathcal{E}_{k}\right\rangle_{R}}{\mathcal{E}_{k+1}^{\prime}-\mathcal{E}_{k}^{\prime}}\right|}, \tag{2}\] where \(\hat{V}_{NH}\) is a non-Hermitian perturbation and \(\mathcal{E}_{k}^{\prime}=\mathcal{E}_{k}+{}_{L}\left\langle\mathcal{E}_{k}|\hat{V}_{NH}|\mathcal{E}_{k}\right\rangle_{R}\) is the perturbed eigenvalue. Here we take \(\hat{V}_{NH}=\hat{b}_{1}^{\dagger}\hat{b}_{2}\) and the eigenstates are ordered with increasing \(\mathcal{E}_{k}^{\prime}\). In Fig. 
3(b) we plot the evolution of the instability parameter \(\mathcal{G}\) versus disorder strength for different system sizes. It can be seen that the eigenstates are stable (unstable) above (below) a critical disorder strength. However, the value of the critical disorder strength differs from that obtained from \(f_{C}\). In Fig. 1 we plot the locus of the stability line obtained from Eq. (2). It can be seen to be well-separated from the spectral transition in our finite-size simulations. That is to say, the instability of the real eigenstates occurs in advance of the spectral transition where \(f_{C}\to 0\) with increasing \(L\). This suggests the possibility of a two-step transition to the localized phase in open systems. This is consistent with results obtained from the study of mobility edges [46]. _Role of Interactions._-- Having established the presence of two boundaries in the non-Hermitian problem we now turn our attention to the role of the interaction strength. In Fig. 4 we show the evolution of the boundaries with increasing \(U\) for a fixed value of the non-Hermiticity \(g\). It can be seen that in the non-interacting limit with \(U=0\) the transitions coincide [50; 52; 61]; that is to say the localization transition coincides with the spectral transition from complex to fully real. However, in the presence of interactions the phase structure changes. The two transitions separate with increasing \(U\). This suggests the possibility of an intermediate regime as one goes from weak disorder to strong, as found in Fig. 1. _Dynamics._-- To further characterize the localized and delocalized phases, we turn our attention to non-equilibrium dynamics. We focus on the particle imbalance \(I(t)=|n_{e}(t)-n_{o}(t)|/[n_{e}(t)+n_{o}(t)]\) as measured in experiments on isolated systems [21]. Here \(n_{e}(t)\) and \(n_{o}(t)\) are the occupations of the even and odd lattice sites, respectively. In the case of an initial density wave \(|\Psi(0)\rangle=|0101\ldots\rangle\) the imbalance can be rewritten as \[I(t)=\frac{1}{N}\left|\sum_{i=1}^{L}(-1)^{i}\left\langle\Psi(t)|\hat{n}_{i}| \Psi(t)\right\rangle\right|, \tag{3}\] where \(N=L/2\) is the total number of particles. The state of the system evolves according to \[|\Psi(t)\rangle=\frac{\exp(-i\hat{H}t)\left|\Psi(0)\right\rangle}{||\exp(-i \hat{H}t)\left|\Psi(0)\right\rangle||} \tag{4}\] where the normalization is explicitly enforced; the operator \(\exp(-i\hat{H}t)\) does not preserve the norm of \(|\Psi(0)\rangle\) when \(\hat{H}\) is non-Hermitian. In the context of Lindbladian dynamics this corresponds to trajectories which are post-selected on the absence of quantum jumps [20; 62]. Figure 3: (a) Variation of the fraction of complex eigenvalues \(f_{C}\) as a function of the disorder strength \(h\) for different system sizes \(L\), with \(g=0.5\) and \(U=2\). The crossing point at \(h\approx 10.8\) shows the separatrix between phases II and III in Fig. 1. The data were obtained using \(2\times 10^{4}\) disorder realizations per point. An eigenvalue is considered real if its absolute imaginary part is below \(10^{-13}\) and complex otherwise. (b) Variation of the eigenstate instability measure \(\mathcal{G}\) as a function of the disorder strength \(h\) for different system sizes \(L\), with \(g=0.5\) and \(U=2\). The crossing point at \(h\approx 9.1\) shows the separatrix between phases I and II in Fig. 1. In our finite-size numerics the transitions in panels (a) and (b) are well-separated. 
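For readers who want to reproduce the qualitative behaviour behind Fig. 3(a) at small sizes, the following is a compact NumPy sketch (our illustration; the authors used the QuSpin package [63]) that builds the interacting Hatano-Nelson Hamiltonian of Eq. (1) in the half-filling hard-core-boson basis, diagonalizes it, and estimates the fraction of complex mid-spectrum eigenvalues \(f_{C}\). The selection of mid-spectrum eigenvalues by distance from \(\mathrm{Tr}(H)/\mathcal{N}\), the reality tolerance, and the number of disorder realizations are our assumptions.

```python
# Dense-ED sketch for the interacting Hatano-Nelson chain of Eq. (1), hard-core bosons at half filling.
# Illustrative only; the authors used QuSpin [63]. Parameter choices below are assumptions.
import numpy as np
from itertools import combinations
from math import ceil

L, J, U, g, h = 8, 1.0, 2.0, 0.5, 12.0     # J, U, g as in the text; h is one example disorder strength
rng = np.random.default_rng(0)

states = [sum(1 << i for i in occ) for occ in combinations(range(L), L // 2)]   # half-filling basis
index = {s: k for k, s in enumerate(states)}
dim = len(states)

def build_H(h_i):
    H = np.zeros((dim, dim))
    for k, s in enumerate(states):
        n = [(s >> i) & 1 for i in range(L)]
        # diagonal part: nearest-neighbour interaction and on-site disorder (periodic boundaries)
        H[k, k] = sum(U * n[i] * n[(i + 1) % L] + h_i[i] * n[i] for i in range(L))
        # asymmetric hopping  -J ( e^{-g} b^+_{i+1} b_i + e^{g} b^+_i b_{i+1} )
        for i in range(L):
            j = (i + 1) % L
            if n[i] == 1 and n[j] == 0:     # particle hops i -> i+1, amplitude -J e^{-g}
                H[index[s ^ (1 << i) ^ (1 << j)], k] += -J * np.exp(-g)
            if n[j] == 1 and n[i] == 0:     # particle hops i+1 -> i, amplitude -J e^{+g}
                H[index[s ^ (1 << i) ^ (1 << j)], k] += -J * np.exp(g)
    return H

def f_C_one_realization():
    H = build_H(rng.uniform(-h, h, L))
    E = np.linalg.eigvals(H)
    mid = np.trace(H).real / dim                         # mid-point of the spectrum, Tr(H)/N
    N_T = ceil(0.04 * dim)
    chosen = E[np.argsort(np.abs(E - mid))[:N_T]]        # eigenvalues closest to the mid-point
    return np.mean(np.abs(chosen.imag) > 1e-13)          # fraction that is complex

print(np.mean([f_C_one_realization() for _ in range(20)]))   # crude disorder average of f_C
```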
Figure 4: Evolution of the phase diagram of the interacting Hatano-Nelson model as a function of interaction strength \(U\) with \(g=0.5\) and \(J=1\). The phase boundaries are extracted using the method described in Fig. 3 and the same system sizes. The single-particle localization transition (red square) is computed from the crossing points of the fraction of complex eigenvalues \(f_{C}\) for \(L=100,200,300\). Inset: evolution of the width of region II as a function of \(U\). To explore the impact of complex eigenvalues on the relaxation dynamics, we consider the formation of a steady state in \(I(t)\). In Fig. 5(a) we plot the time-evolution of \(I(t)\) in both the delocalized and localized regimes corresponding to \(h=3\) and \(h=18\), respectively. In each case we take a single realization of the disorder with complex eigenvalue pairs in the spectrum. It is readily seen that \(I(t)\) approaches a steady state after a time \(\tau\), as indicated by the red square markers. Heuristically, one expects that \(\tau\) is governed by the largest imaginary eigenvalue \(\Lambda_{max}\). This is borne out in Fig. 5(b) which shows the distribution of \(\tau\) for different disorder realizations and two values of the disorder strength \(h\). The dashed line is a guide to the eye showing \(\tau=\Lambda_{max}^{-1}\). A notable feature of Fig. 5(b) is that the distribution of timescales changes markedly in going from the weak to the strong disorder regime. In the localized phase we see a concentration of timescales in the vicinity of \(\tau=\alpha\Lambda_{max}^{-1}\), where \(\alpha\approx 1.3\). However, in the weak disorder regime we see a broader distribution of timescales which are only bounded from below by \(\Lambda_{max}^{-1}\). Moreover, the distribution of timescales becomes elongated along the axis \(\tau=\alpha\Lambda_{max}^{-1}\) in the vicinity of the transition region (region II), as depicted in Figs 1 and 4. To explore this further we plot the distribution of \(\tau\Lambda_{max}\) for different values of the disorder strength. The distribution develops a sharp peak in the vicinity of the eigenstate transition at \(h\approx 10\), for the chosen parameters. The peak location at \(\tau=\alpha\Lambda_{max}^{-1}\) with \(\alpha\approx 1.3\) coincides with the relationship observed in Fig. 5, and does not change with increasing disorder strength. To show this more clearly, in the inset of Fig. 6 we plot the evolution of \(\tau\Lambda_{max}\) at the overall peak of the distribution shown in the main panel. The peak location shifts to lower values with increasing \(h\), becoming independent of \(h\) in the localized phase. This is consistent with a direct relationship between \(\tau\) and \(\Lambda_{max}^{-1}\) in the localized phase. This is in marked contrast with the thermal phase which shows a broader distribution without a sharp peak. _Conclusion._-- In this work we have investigated the phase diagram of the interacting Hatano-Nelson model as a function of the interaction strength and the non-Hermiticity. We have mapped out two regimes for MBL _via_ the mid-spectrum eigenstates. In particular, we have shown that the delocalization instability of the real eigenstates occurs at a weaker disorder strength than the transition to a predominantly real spectrum. 
We have also explored the non-equilibrium dynamics of this model and shown the appearance of a dynamical signature in the Figure 5: (a) Time-evolution of \(I(t)\) for a fixed realization of the disorder with \(h=3\) (blue) and \(h=18\) (burgundy) selected to have at least one complex eigenvalue pair in the spectrum. We set \(g=0.5\) and \(L=14\). \(I(t)\) is well approximated by a steady state after a time \(\tau\) as indicated by the red squares. We define \(\tau\) as the time where \(|I(t)-I(\infty)|/|I(\infty)|<\epsilon\) where \(\epsilon=10^{-8}\) and \(I(\infty)=I(t\rightarrow\infty)\). The latter is inferred from ED by setting \(|\Psi(t)\rangle\) in Eq. (3) as the right eigenstate with the largest imaginary eigenvalue. (b) Distribution of \(\tau\) for different realizations of the disorder with \(h=3\) (blue) and \(h=18\) (burgundy). In the localized phase the distribution of timescales clusters around a peak maximum at \(\tau=\alpha\Lambda_{max}^{-1}\), where \(\alpha\approx 1.3\) depends on the definition of \(\tau\). This clustering leads to the sharp peak in the distribution of timescales given in Fig. 6. Figure 6: Distribution of \(\tau\Lambda_{max}\) as a function of the disorder strength with \(g=0.5\) and \(L=14\) as used in Fig. 5. The distribution exhibits a broad maximum which shifts towards lower values with increasing disorder strength \(h\), as indicated by the black arrow. In the vicinity of the transition region (region II) shown in Figs 1 and 4 the distribution develops a sharp peak, as indicated by the red arrow. The location of this peak occurs at \(\tau=\alpha\Lambda_{max}^{-1}\), with \(\alpha\approx 1.3\) in agreement with Fig. 5. Inset: The location of the overall broad peak for \(L=14\) moves to lower values of \(\alpha=\tau\Lambda_{max}\) with increasing disorder strength. The value of \(\alpha\) becomes independent of \(h\) in the localized phase with \(\alpha\approx 1.3\). vicinity of the eigenstate transition. It would be interesting to explore the phase diagram of this problem in the framework of Lindbladian dynamics. It would also be interesting to examine the connection between the non-Hermitian spectrum and avalanches in the MBL phase. _Acknowledgements._-- We acknowledge helpful discussions with Jonas Richter. JM acknowledges support from the EPSRC CDT in Cross-Disciplinary Approaches to Non-Equilibrium Systems (CANES) _via_ grant number EP/L015854/1. MJB acknowledges support of the London Mathematical Laboratory. AP was funded by the European research Council (ERC) under the European Union's Horizon 2020 research and innovation programme _via_ Grant Agreement No. 853368. We are grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/P020194/1 and EP/T022213/1). We acknowledge the use of the QuSpin package [63]. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
2310.06995
Accelerated Modelling of Interfaces for Electronic Devices using Graph Neural Networks
Modern microelectronic devices are composed of interfaces between a large number of materials, many of which are in amorphous or polycrystalline phases. Modeling such non-crystalline materials using first-principles methods such as density functional theory is often numerically intractable. Recently, graph neural networks (GNNs) have shown potential to achieve linear complexity with accuracies comparable to ab-initio methods. Here, we demonstrate the applicability of GNNs to accelerate the atomistic computational pipeline for predicting macroscopic transistor transport characteristics via learning microscopic physical properties. We generate amorphous heterostructures, specifically the HfO$_{2}$-SiO$_{2}$-Si semiconductor-dielectric transistor gate stack, via GNN predicted atomic forces, and show excellent accuracy in predicting transport characteristics including injection velocity for nanoslab silicon channels. This work paves the way for faster and more scalable methods to model modern advanced electronic devices via GNNs.
Pratik Brahma, Krishnakumar Bhattaram, Sayeef Salahuddin
2023-10-10T20:26:46Z
http://arxiv.org/abs/2310.06995v1
# Accelerated Modelling of Interfaces for Electronic Devices using Graph Neural Networks ###### Abstract Modern microelectronic devices are composed of interfaces between a large number of materials, many of which are in amorphous or polycrystalline phases. Modeling such non-crystalline materials using first-principles methods such as density functional theory is often numerically intractable. Recently, graph neural networks (GNNs) have shown potential to achieve linear complexity with accuracies comparable to ab-initio methods. Here, we demonstrate the applicability of GNNs to accelerate the atomistic computational pipeline for predicting macroscopic transistor transport characteristics via learning microscopic physical properties. We generate amorphous heterostructures, specifically the HfO\({}_{2}\)-SiO\({}_{2}\)-Si semiconductor-dielectric transistor gate stack, via GNN predicted atomic forces, and show excellent accuracy in predicting transport characteristics including injection velocity for nanoslab silicon channels. This work paves the way for faster and more scalable methods to model modern advanced electronic devices via GNNs. + Footnote †: * These authors contributed equally to this work ## 1 Introduction The modern Si transistor gate stack comprises multiple material interfaces whose electronic interactions significantly affect electron transport and ultimately transistor performance. In particular, the heterogeneous semiconductor-dielectric gate stack introduces many atomic-scale modeling challenges, which, if addressed, can help design higher-performance gate stacks for next-generation transistors. Fundamentally, the starting point for modeling any transport process of an atomistic electronic device stems from the continuity equation \(\frac{\partial Q}{\partial t}=-\vec{\nabla}\cdot\vec{J}\), where \(Q\) is the non quasi-static charge and \(J\) is the current, which in turn is a function of \(Q\) and the injection velocity (\(v_{inj}\)) [10; 8]. In practical devices, a significant contribution to \(Q\) comes from parasitic sources, which traditional Poisson solvers[15] capture well. The main challenge in transport calculations is calculating the intrinsic \(Q\) and \(v_{inj}\), which depend on the specific combination of material interfaces and confinement effects. When amorphous/polycrystalline phases are involved, one cannot directly leverage the E-k diagram. The DOS is therefore used to calculate all relevant parameters, including \(Q\) and \(v_{inj}\), which are later used in electrostatic solvers and transport models to directly estimate the fast behavior of nanoscale devices. Fig.1 summarises this atomistic computational pipeline for calculating transistor characteristics from the macroscopic transistor dimensions. This pipeline has two bottlenecks for scalability: (i) Molecular Dynamics (MD), which generates atomistic transistor gate stacks with different structural phases, and (ii) Electronic Structure Calculators, which generate atomistic properties like the DOS from the quantum Hamiltonian. These bottlenecks arise as the current state-of-the-art atomistic simulation models[6] diagonalize the quantum Hamiltonian, an operation that scales cubically with system size. This poses a challenge for fast and accurate simulations of practically large material systems containing thousands of atoms and varied crystalline states. 
Following the success of graph neural networks (GNNs)[11; 18] in learning ab initio molecular forces[9] and predicting atomistic properties [4; 3], we propose to overcome the scaling challenge by learning the functional relationship between atomic local chemical environments and macroscopic transistor properties. As GNN inference (summarized in Fig.2) scales linearly with system size, orders of magnitude speedup can be realized. Other existing neural network algorithms for transistor characterization prioritize speed but sacrifice generalizability to unseen geometries and complex material interfaces [17] by inferring on macroscopic scales and ignoring microscopic properties. Our work focuses on learning atomistic contributions to macroscopic transistor characteristics, which we demonstrate yields accurate and generalizable predictions even on unseen transistor geometries. ## 2 Methods Neural Network Architecture:Our GNN architecture is composed of the Schnet[11] and SpookyNet[18] architectures. The forward pass of the GNN (Fig.2) is divided into three phases: Atom Embedding: The atomistic transistor structure is modeled as a graph where each atom is a node, and each neighboring atom interaction within a given cutoff \(r_{c}\) is a weighted edge. Every node is initialized with a random vector (\(x_{v}^{t}\)) according to its atomic number, and every distance vector (\(\vec{r}\)) to its neighboring atoms is projected in radial (\(R\)) and spherical harmonic basis (\(Y_{l}^{m}\))[18]. This phase emulates a system of isolated atoms unaware of its local chemical environment. Message Passing: This phase starts with an isolated system of atoms and allows interactions with atomic neighbors to influence the atomic chemical state. Continuous convolutional filter-generated messages (\(m_{v}^{t}\)) are sent over the edges as given in Fig.2b. The state vector (\(x_{v}^{t}\)) of each node is updated by summing up the incoming messages (\(m_{vj}^{t}\)) along the edges \(j\) as \(x_{v}^{t+1}=x_{v}^{t}+\sum_{j\in\mathcal{N}(v)}m_{vj}^{t}\). Readout: This phase calculates the local atomic contributions of the desired global property from the state vector of all nodes. We focus on two sets of properties: (i) Energy and Atomic Forces: These properties generate the atomistic transistor gate stack using MD. A dense linear layer predicts the local atomic energy using the final atomic state of each node. Summing up the local atomistic energy predictions give total energy, and its derivative gives the atomic forces. (ii) Injection Velocity: This property characterizes the drain current through small channel transistors[10; 8]. It relates to the average velocity of all electrons over the source-channel potential barrier (\(U\)). In the ballistic limit, the drain current (\(I_{D}\)) through a transistor is related to the injection velocity as \(I_{D}=qN_{inv}v_{inj}\), where: \[v_{inj}=\frac{\int dEv_{x}(E)D(E)f(E+U-E_{f})}{N_{inv}},\quad N_{inv}\qquad= \int dED(E)f(E+U-E_{f})\] Figure 1: **Atomistic Computational Pipeline**: Procedure for ab initio accurate predictions of advanced transistor characteristics containing various material interfaces given the macroscopic transistor dimensions. The blue boxes represent the current bottlenecks for scalable simulations of large atomistic devices. We propose to substitute these blocks with GNNs for accelerated predictions. 
\(N_{inv}\) is the inversion electron density present in the silicon channel, \(v_{x}(E)\) is the bandstructure velocity of the electron at energy \(E\), and \(D(E)\) is the DOS in the silicon channel. A dense linear layer with multiple outputs predicts \(D(E)\) and \(J_{x}(E)=v_{x}(E)D(E)\) from the final node state vectors. We perform PCA on the dataset to reduce the number of output nodes[1]. \(v_{inj}\) and \(N_{inv}\) are subsequently calculated using the predicted properties at a given Fermi level (\(E_{f}\)). Datasets: Two datasets are generated, related to the transistor gate stack and the silicon channel. Gate Stack: The transistor gate stack, a heterostructure of crystalline silicon, amorphous silica, and hafnia, is generated via a high-temperature quench using the LAMMPS MD package[16]. Forces between atoms are estimated using the Charge-Optimized Many Body (COMB) potential[13, 12]. The crystalline forms \(\beta\)-cristobalite and orthorhombic HfO\({}_{2}\) are melted using constant number, pressure, temperature (NPT) dynamics at 2700K and 3500K, respectively. Subsequently, we reshape the melts to match the silicon substrate area using non-equilibrium constant number, volume, temperature (NVT) dynamics and quench them via a damped force minimizer. The generated amorphous silica, hafnia, and crystalline silicon are stacked on each other, and the material interfaces are allowed to relax using the COMB potential. This procedure generates a dataset of \(\sim\)200k molecular structures ranging from 25-96 atoms. Silicon channel: We consider the silicon channel of the transistor as a nanoslab passivated by hydrogen atoms. The empirical \(sp^{3}d^{5}s^{*}\) tight-binding model in Quantum ATK[14, 7] generates the electronic properties of the silicon channel, primarily \(D(E)\) and \(J_{x}(E)\). Around 1k structures are generated by varying the strain (0.900-1.035) and the nanoslab silicon channel thickness (0.6-2.4 nm). ## 3 Results Gate Stack Generation: We simultaneously train on atomic forces and energy using a 90-10 weighted sum of mean squared error (MSE) losses, which yields a final mean absolute error of 3.0e-2 eV/A for the MD dataset on force predictions. The trained model is then used to generate gate stacks of around 200 atoms, a factor of 2 larger than the structures provided during training. The validity of the GNN-generated heterostructure (Fig.3a) is confirmed by the excellent match of the pair distribution functions \(g(r)\) of amorphous silica and hafnia to the baseline simulation model (Fig.3b). Predicted Si-Si, Hf-Hf, Si-O, and Hf-O bond lengths are within \(3\%\) of their experimental values (Fig.3b)[2, 5]. This demonstrates that our approach of learning local chemical environments can be generalized to predicting atomic forces and energy in transistor geometries unseen in the initial training set. Figure 2: **Graph Neural Network Architecture**: The forward pass of a GNN is divided into three phases: a) Atom Embedding, b) Message Passing and c) Readout. Injection Velocity: We simultaneously train on \(D(E)\) and \(J_{x}(E)\) using an equally weighted sum of MSE losses, which yields a final mean absolute error of 9.0e-4 /eV (0.18% error) for \(D(E)\) and 4.9e4 cm/s-eV (0.82% error) for \(J_{x}(E)\). The trained task-specific GNN predicts PCA coefficients for \(D(E)\) and \(J_{x}(E)\) over 200 and 15 basis functions, respectively, for the crystalline silicon nanoslab. 
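To make the post-processing step explicit, the short NumPy sketch below evaluates \(N_{inv}\) and \(v_{inj}\) from sampled \(D(E)\) and \(J_{x}(E)=v_{x}(E)D(E)\) by integrating against a Fermi-Dirac occupation, following the expressions above. The stand-in DOS and velocity curves, the temperature, and the energy grid are assumptions for illustration only; in the actual pipeline the two curves would come from the GNN readout.

```python
# N_inv and v_inj from sampled D(E) and J_x(E) = v_x(E) D(E) (stand-in curves, assumed values).
import numpy as np

kT = 0.0259                                   # eV, room temperature (assumed)
E = np.linspace(0.0, 1.0, 2001)               # energy grid above the band edge (eV)
dE = E[1] - E[0]

# illustrative stand-ins for the GNN outputs: a step-like 2-D DOS and a sqrt-like velocity
D  = 1.0e14 * (E > 0.05).astype(float)                        # states / (eV cm^2), assumed
vx = 1.0e7 * np.sqrt(np.clip(E - 0.05, 0.0, None))            # cm/s, assumed
Jx = vx * D

def fermi(E, Ef, U=0.0):
    return 1.0 / (1.0 + np.exp((E + U - Ef) / kT))

def injection_velocity(Ef, U=0.0):
    occ = fermi(E, Ef, U)
    N_inv = np.sum(D * occ) * dE              # electrons / cm^2 (simple Riemann sum, uniform grid)
    v_inj = np.sum(Jx * occ) * dE / N_inv     # average velocity of occupied states, cm/s
    return N_inv, v_inj

for Ef in (0.0, 0.1, 0.2):                    # sweep the Fermi level from off-state towards on-state
    N_inv, v_inj = injection_velocity(Ef)
    print(f"Ef = {Ef:.2f} eV: N_inv = {N_inv:.3e} /cm^2, v_inj = {v_inj:.3e} cm/s")
```

Sweeping \(E_{f}\) (or the barrier \(U\)) in this way is what connects the GNN-predicted microscopic quantities to the macroscopic drain-current estimate \(I_{D}=qN_{inv}v_{inj}\).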
We found high model performance in reproducing \(D(E)\) and \(J_{x}(E)\) for a 2.6nm thick unstrained silicon nanoslab, a structure larger than in the training set, as shown in Fig.3c. From these predictions, we derived \(v_{inj}\) for a range of chemical potentials \(E_{f}\), finding errors within 5.4% for both the on and off states of the transistor (Fig. 3d) Furthermore, we evaluate our neural network on unstrained silicon nanoslabs with thicknesses ranging from 0.67 to 2.4 nm. We reproduce \(v_{inj}\) as a function of thickness with high fidelity (within 5.3%) (Fig.3e), demonstrating the ability of our model to successfully predict macroscopic dynamics of unseen geometries of silicon channel. The runtime for predicting injection velocity by the GNN is around 20ms, while the baseline simulation takes 430s, demonstrating four orders of speed improvement. ## 4 Conclusion In this study, we demonstrate the efficacy of GNNs to accelerate an end-to-end simulation framework for predicting the material properties of complex material interfaces. Starting from macroscopic transistor dimensions, we use our neural network to predict forces for generating the modern transistor gate stack (HfO\({}_{2}\)-SiO\({}_{2}\)-Si) with bond lengths within 3% of the experimental values. We further reproduce global electronic (\(D(E)\), \(J_{x}(E)\)) and transport properties (\(v_{inj}\)) of crystalline silicon channels. We show agreement within 5.4% for injection velocity on unseen geometries and demonstrate high performance on a structure size outside the training domain. The scalability and accuracy of our predictions over a wide range of material and transport properties demonstrate the viability of our approach for the modeling of advanced heterogeneous devices, paving the way for rapid design and modeling of next-generation transistors. Figure 3: **Gate Stack Generation and Injection Velocity Prediction**: (a) The gate stack heterostructure generated via the GNN predicted atomic forces. (b) Excellent agreement between the \(g(r)\) of the heterostructure generated by GNN and the baseline material simulation. [Exp] refers to the experimental bond length values[2; 5]. (c), (d) Excellent agreement of the predicted \(D(E)\), \(J_{x}(E)\), and \(v_{inj}\) between the GNN and the baseline model for a 2.6nm thick unstrained nanoslab silicon. (e) GNN correctly predicts the transistor \(v_{inj}\) trend as a function of the unstrained nanoslab silicon channel thickness. ## Acknowledgements This work is supported by the Defense Advanced Research Projects Agency (DARPA) within the Nanosim Microsystems Exploration Program.
2303.00204
PCF: ECAPA-TDNN with Progressive Channel Fusion for Speaker Verification
ECAPA-TDNN is currently the most popular TDNN-series model for speaker verification, which refreshed the state-of-the-art(SOTA) performance of TDNN models. However, one-dimensional convolution has a global receptive field over the feature channel. It destroys the time-frequency relevance of the spectrogram. Besides, as ECAPA-TDNN only has five layers, a much shallower structure compared to ResNet restricts the capability to generate deep representations. To further improve ECAPA-TDNN, we propose a progressive channel fusion strategy that splits the spectrogram across the feature channel and gradually expands the receptive field through the network. Secondly, we enlarge the model by extending the depth and adding branches. Our proposed model achieves EER with 0.718 and minDCF(0.01) with 0.0858 on vox1o, relatively improved 16.1\% and 19.5\% compared with ECAPA-TDNN-large.
Zhenduo Zhao, Zhuo Li, Wenchao Wang, Pengyuan Zhang
2023-03-01T03:12:28Z
http://arxiv.org/abs/2303.00204v1
# PCF: ECAPA-TDNN with Progressive Channel Fusion for Speaker Verification ###### Abstract ECAPA-TDNN is currently the most popular TDNN-series model for speaker verification, which refreshed the state-of-the-art(SOTA) performance of TDNN models. However, one-dimensional convolution has a global receptive field over the feature channel. It destroys the time-frequency relevance of the spectrogram. Besides, as ECAPA-TDNN only has five layers, a much shallower structure compared to ResNet restricts the capability to generate deep representations. To further improve ECAPA-TDNN, we propose a progressive channel fusion strategy that splits the spectrogram across the feature channel and gradually expands the receptive field through the network. Secondly, we enlarge the model by extending the depth and adding branches. Our proposed model achieves EER with 0.718 and minDCF(0.01) with 0.0858 on vox1o, relatively improved 16.1% and 19.5% compared with ECAPA-TDNN-large. Zhenduo Zhao\({}^{1,2}\), Zhuo Li\({}^{1,2}\), Wenchao Wang\({}^{1}\), Pengyuan Zhang\({}^{1,2}\)+\({}^{1}\)Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China \({}^{2}\)University of Chinese Academy of Sciences, Beijing, China speaker verification, TDNN, progressive channel fusion Footnote †: This work is partially supported by the National Key Research and Development Program of China(No. 2021YFC3320103) ## 1 Introduction Speaker verification aims to determine whether a piece of speech belongs to the claimed speaker. As an important method of biometric authentication, it has a broad wide range of application scenarios. Automatic speaker verification(ASV) systems consist of three modules, an embedding extractor, a scoring backend, and a calibration module. In recent years, ASV systems based on deep neural networks have continuously refreshed the SOTA performance. To achieve better performance, researchers have innovated on each module to push up the upper limit of performance. The embedding extractor is the key component of a system and contributes the most among all modules. Starting from x-vector[1], plenty of works have been proposed on building more powerful networks. These architectures can be roughly divided into one-dimensional convolution networks, two-dimensional convolution networks, and attention-based transformer[2]. While transformer gives a less competitive performance on ASV tasks without large-scale pre-trained models, convolution-based structures take the mainstream position. There are diverse aspects to boost system performance, adding more layers [3, 4, 5, 6] helps model extracting deep representations, adding residual connections [7] make it faster for convergence while avoiding gradient vanishing, introducing attention module [8, 9, 10, 11] improves the ability to capture long-range dependencies and so on. ECAPA-TDNN[10] improves performance by introducing several useful methods. However, compared with ResNet models, we find out that it could be further improved. Firstly, we argue that the limitation of one-dimensional convolution restricts its performance. Compared with two-dimensional convolution, TDNN has a global sensor space over the feature channel, and thus lost time-frequency correlation at the first block. Secondly, ECAPA-TDNN only has five blocks, which restricts the generation of deep representations. 
Finally, the use of different sizes of convolution kernels in the same layer [12] could improve the ability to capture multi-scale features. This paper proposes a simple but effective strategy called progressive channel fusion (PCF). This strategy splits the input spectrogram into several frequency bands and progressively fuses the bands through the network. It thereby obtains a local receptive field over both the time and frequency channels, similar to ResNet, and reduces the parameter number at the same time. Besides, we introduce two useful methods that further improve the performance alongside PCF: res2block branching and block deepening. This paper is organized as follows: Section 2 describes related works, Section 3 describes the proposed method, Section 4 presents experiments and results, and Section 5 summarizes and looks to future work. ## 2 Related Work We adopt two architectures as the baseline: ECAPA-TDNN and ResNet. ECAPA-TDNN introduces multiple module construction strategies. We summarize the key points as follows: * Channel- and context-dependent statistics pooling. It extends the attention mechanism to the channel dimension, to make the model focus more on speaker characteristics instead of non-active frames. Besides, it introduces a global context representation by concatenating the local input with the global non-weighted mean and standard deviation of the feature map across the time domain, thereby taking global properties into account, such as background noises or recording conditions. * Res2Net [13] blocks. By splitting input channels into several pieces, and hierarchically executing convolution, addition, and concatenation, the Res2Block module enhances the capability to capture multi-scale features while significantly reducing the number of parameters. * Squeeze-Excitation (SE) [8] module. The SE module first generates a channel-wise descriptor, called the squeeze operation. Then, it computes channel weights based on the descriptor and applies them to the original channels, called the excitation operation. * Multi-layer feature aggregation (MFA) module. Shallow-layer outputs in deep neural networks also contain speaker-relevant information. The MFA module concatenates outputs from 3 SE-Res2Net blocks for robustness improvement. ResNet models are dominant in recent speaker recognition challenges. Benefiting from residual connections and a modular design, ResNet can easily be scaled up while maintaining its ability of fast convergence. After its backbone, we use the same attentive statistics pooling as ECAPA-TDNN for comparison. ## 3 Proposed Method We propose the progressive channel fusion strategy, res2block branching, and layer deepening methods to further enhance the performance of ECAPA-TDNN. The topology of the model is shown in Fig. 2 and the relevant configuration is given in Table 1. ### Progressive Channel Fusion The local receptive field is one of the foundations of deep convolution networks: in collaboration with stacked blocks, it expands the sensor space from local regions to the whole feature map in deep blocks while avoiding over-fitting. One of the critical differences between ResNet and TDNN is that two-dimensional convolution preserves a local receptive field on both frequency and temporal channels. Because conventional TDNN has full access to all feature channels, the risk of over-fitting rises when the number of parameters increases. Figure 1: The receptive field of ResNet34 with 64 base channels, ECAPA-TDNN (C=1024), and our PCF-ECAPA (C=1024).
For the first two models, we only give the results from blocks 1 & 3, and for PCF-ECAPA, we give all four blocks’ sensor space. Figure 2: Proposed structure. The TDNN block is a sequence of a Conv1d, a ReLU, and a BatchNorm1d. The Res2Block in ECAPA-TDNN is replaced with the Res2BlockB shown in (b). To alleviate this phenomenon, we propose a strategy called progressive channel fusion, which allows the model to gradually fuse channel relationships across the forward propagation. Assume an input feature map \(X\in\mathbb{R}^{F\times T}\), where \(F\) and \(T\) are the feature and temporal dimensions, respectively. We split \(F\) into \(N\) sub-bands, where \(N=8\) for the first block, and then we halve \(N\) after each block. It should be noted that in Res2Block, benefiting from its hierarchical design, frequency bands still have the opportunity to communicate, while the difference is that our strategy assigns major frequency bands to channel splits. Besides, we introduce links between the spectrogram and each block through a TDNN block that shares an identical split configuration with the target block. The receptive fields of ResNet34, ECAPA-TDNN, and the proposed PCF-ECAPA are visualized in Fig. 1. The first row shows the receptive field of blocks 1 and 3 in ResNet34. It has a gradually increasing sensor space. The second row gives the result of the original ECAPA-TDNN, which always has a global sensor space across the channel dimension, while the last two rows present all blocks' receptive fields of our proposed progressive channel fusion strategy. It shares a similar behavior with the ResNet model, where the visible area of the spectrogram spreads out as the blocks go deeper. ### Res2Block Branching RepVGG [12, 14] has proven to be effective in recent challenges. Due to its re-parameterized structure design, the model can learn multi-scale features during training, and the convolution branches are merged into a single branch for inference. The key operation to improve the performance of RepVGG is the multi-branch structure: it introduces convolution branches with multiple kernel sizes so that the model can learn multi-scale representations. Therefore, we also introduce this structure by adding a branch with a convolution kernel size of 1 in the form of res2block. Although the two branches cannot be merged into one during inference, this design still boosts the ability to capture input features at multiple levels. ### Layer Deepening It is usually more effective to make the model deeper rather than wider when scaling convolution networks because of the growing receptive field. In Section 3.1, we introduce the progressive channel fusion strategy, which improves the model performance while reducing the model parameters. To make up for the weakening of the modeling ability caused by the reduction of parameters, after the first TDNN block, we extend the backbone to 4 blocks, each block containing 2 Res2Blocks with the same dilation. For the MFA module, we concatenate outputs from 4 blocks as the input. ## 4 Experiments ### Dataset We use voxceleb2-dev [15] as the training set, containing 1,092,009 utterances from 5,994 speakers. For data augmentation, we use MUSAN [16] and RIRS-NOISES [17]. An 80-dimensional log-Mel filter bank is used as the input feature, with cepstral mean normalization applied; voice activity detection is not used.
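To make the band-restricted TDNN blocks concrete, the following is a minimal PyTorch sketch of how the sub-band split could be realized with grouped one-dimensional convolutions. The class name `BandSplitTDNNBlock`, the use of `groups` to emulate the sub-band splitting, and the plain TDNN blocks (without the SE-Res2BlockB structure of the actual model) are our own illustration rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class BandSplitTDNNBlock(nn.Module):
    """Conv1d -> ReLU -> BatchNorm1d, with the convolution restricted to
    `num_bands` groups of feature channels (hypothetical re-implementation)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation, num_bands):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2            # keep T unchanged
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              dilation=dilation, padding=pad,
                              groups=num_bands)             # groups = sub-bands
        self.act = nn.ReLU()
        self.bn = nn.BatchNorm1d(out_ch)

    def forward(self, x):                                   # x: (B, F, T)
        return self.bn(self.act(self.conv(x)))

# Progressive fusion: 8 sub-bands in the first block, halved after each block.
feats = torch.randn(4, 80, 200)                             # 80-dim FBank, 200 frames
x, channels = feats, 80
for bands in (8, 4, 2, 1):
    block = BandSplitTDNNBlock(channels, 512, kernel_size=5,
                               dilation=1, num_bands=bands)
    x = block(x)
    channels = 512
print(x.shape)                                               # torch.Size([4, 512, 200])
```

With `groups=N`, each group of output channels only sees one contiguous slice of the input channels, which mirrors the local frequency receptive field sketched in Fig. 1; setting `groups=1` in the last block recovers the global channel access of the original ECAPA-TDNN.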
For evaluation, we use official evaluation sets including Vox10,e,h, and validation sets from the last three years of VoxCeleb speaker recognition challenges[18, 19]. ### Model We use the ECAPA-TDNN implemented by speechbrain [20] as the baseline model for the experiments, where two settings are used, a base model with 512 channels, and a large model with 1024 channels. Our model also adopts the same configuration. We fix the output channels of the MFA module to 1536 to be consistent with the original configuration. For ResNet, we use ResNet18 and ResNet34 with 32 channels and the same pooling as ECAPA-TDNN. ### Training We use Adam [21] for optimization, and the learning rate curve adopt the cycle strategy [22]. We train for 3 cycles with one cycle lasting 100k steps, the learning rate in each cycle varies from 1e-8 to 1e-3, and the weight decay is 5e-5. The batch size is set to 256. Circle loss [23], which has stronger constraints on speaker embedding compared to \begin{table} \begin{tabular}{c c} \hline Layer & Structure \\ \hline Input & \(B\times F\times T\) \\ \hline Link1 & TDNN-Block(80,C,5,1,8) \\ Link2 & TDNN-Block(80,C,5,1,4) \\ Link3 & TDNN-Block(80,C,5,1,2) \\ Link4 & TDNN-Block(80,C,5,1,1) \\ \hline Block1 & SE-Res2BlockB(C,C,3,1,8)\(\times\)2 \\ Block2 & SE-Res2BlockB(C,C,3,2,4)\(\times\)2 \\ Block3 & SE-Res2BlockB(C,C,3,3,2)\(\times\)2 \\ Block4 & SE-Res2BlockB(C,C,3,4,1)\(\times\)2 \\ MFA & TDNN-Block(4\(\times\)C,C,1,1,1) \\ Pooling & ASP(C,1536) \\ \hline FC & Conv1D(1536,192,1,1,1) \\ FC & Linear(192,N) \\ \hline \end{tabular} \end{table} Table 1: Proposed model structure. The symbols in brackets represent input channels, output channels, kernel size, dilation, number of sub-bands for Convolution blocks, and input dim and output dim for linear blocks.N means speaker number. AAM-Softmax loss [24], is used with \(m=0.35,s=60\). All models are trained with the same setting. ### Evaluation To bridge the gap between the duration of segments for training and evaluation, we sample short clips from testing utterances and use the mean of the cosine similarity between a pair of embedding matrices as the final score. We test the model at the end of the final cycle and report all systems performance in terms of equal error rate(EER) and minimum Detection Cost Function(minDCF) with \(p_{target}=0.01,C_{FA}=C_{Miss}=1\). ### Results Table 2 summarizes the performance of PCF-ECAPA and the original ECAPA-TDNN together with the number of model parameters except for the classification layer. Our proposed architecture outperforms the baseline systems and gives an average relative improvement of 15.6% on EER and 15.2% on minDCF over the best baseline system. We conducted ablation experiments on the base model, and the results are shown in Table 3. From the ECAPA-TDNN base, we stack three methods one by one and finally get the proposed PCF-ECAPA. We first evaluate the impact of the model depth. Simply increasing the number of blocks from 3 to 8 dramatically improves the performance, where EER improves from 1.058 to 0.792 and minDCF reduces to 0.0912, giving 26.6% and 10.7% relative improvement respectively. The almost doubled number of parameters greatly enhances the model's ability to capture deep representations. Meanwhile, the deepened base model exceeds ECAPA-TDNN-large for 7.5% with less amount of parameters. It proves our assumption that it is usually more efficient for convolution networks to deepen the model rather than broaden it. 
Secondly, the branched block pushes the EER to 0.744 but pulls the minDCF back to 0.1024. Parallel branch structure helps the model capture multi-scale features at the cost of 1.8% extra parameters. The performance loss may come from the circle loss based on our experience. Finally, we apply the proposed PCF strategy. It further improves EER to 0.718 and minDCF to 0.0858. Restricting the sensor space of TDNN models over channel dimension is proved to be effective because of the fine-grained attention on each frequency sub-bands. Besides, it brings an 18.3% cut on the number of parameters as a result of the local receptive field. Moreover, experiments show that simply scaling the model from 512 channels to 1024 channels brings little improvement, resulting in EER=0.718 and minDCF=0.0892 on vox1o. On other evaluation sets, the large model gets the biggest boost on Vox22-dev. Nevertheless, it has a less competitive parameter efficiency. ## 5 Conclusion In summary, we propose a strategy to enhance TDNN models: progressive channel fusion. This method enables the model to pay attention to the narrow frequency band in shallow layers, gradually expand the receptive filed through the network, and have the global frequency band receptive field in deep layers, thereby improving the overall utilization efficiency of the feature map by the model. In addition, we introduce the branch structure and deepen the number of model layers to further improve the model performance, and refreshed the SOTA of TDNN models with all three methods stacked. Experiments show that our optimization direction of the model structure is correct and still has the potential for better performance. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline Stage & Params & \multicolumn{3}{c}{Vox1O} & \multicolumn{3}{c}{Vox1E} & \multicolumn{3}{c}{Vox1H} & \multicolumn{3}{c}{Vox20-dev} & \multicolumn{3}{c}{Vox21-dev} & \multicolumn{3}{c}{Vox22-dev} \\ & & EER & DCF\({}_{0.01}\) & EER & DCF\({}_{0.01}\) & EER & DCF\({}_{0.01}\) & EER & DCF\({}_{0.01}\) & EER & DCF\({}_{0.01}\) & EER & DCF\({}_{0.01}\) \\ \hline ResNet18 & 6.7M & 1.510 & 0.1789 & 1.559 & 0.1760 & 2.679 & 0.2642 & 4.235 & 0.3540 & 4.835 & 0.3798 & 3.624 & 0.4142 \\ ResNet34 & 9.3M & 1.164 & 0.1141 & 1.167 & 0.1285 & 2.099 & 0.2127 & 3.365 & 0.2794 & 3.806 & 0.3050 & 2.825 & 0.3062 \\ \hline ECAPA(C=512) & 6.2M & 1.058 & 0.1021 & 1.205 & 0.1371 & 2.182 & 0.2155 & 3.537 & 0.2905 & 4.545 & 0.3643 & 2.979 & 0.3541 \\ ECAPA(C=1024) & 14.7M & 0.856 & 0.1066 & 1.074 & 0.1285 & 2.009 & 0.2021 & 3.265 & 0.2725 & 4.142 & 0.3353 & 2.830 & 0.3110 \\ \hline PCF-ECAPA(C=512) & 8.9M & **0.718** & **0.0858** & **0.792** & 0.1138 & 1.802 & **0.1750** & 2.959 & **0.2250** & 3.684 & 0.3073 & 2.630 & 0.2836 \\ PCF-ECAPA(C=1024) & 22.2M & **0.718** & 0.0892 & 0.891 & **0.1024** & **1.707** & 0.1754 & **2.831** & 0.2339 & **3.527** & **0.2880** & **2.333** & **0.2666** \\ \hline \hline \end{tabular} \end{table} Table 2: PCF-ECAPA Performance on VoxCeleb Official Evaluation Sets \begin{table} \begin{tabular}{l c c c} \hline \hline Systems & Params & EER & minDCF(0.01) \\ \hline Base & 6.2M & 1.058 & 0.1021 \\ \hline +A & 10.7M & 0.792 & 0.0912 \\ ++B & 10.9M & 0.744 & 0.1024 \\ ++C & 8.9M & 0.718 & 0.0858 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation Study of the PCF-ECAPA architecture. A represents deepen model, B represents adding branches, and C represents using PCF strategy. With ABC applied, we get PCF-ECAPA
2302.08571
A Comprehensive Review and a Taxonomy of Edge Machine Learning: Requirements, Paradigms, and Techniques
The union of Edge Computing (EC) and Artificial Intelligence (AI) has brought forward the Edge AI concept to provide intelligent solutions close to the end-user environment, for privacy preservation, low latency to real-time performance, and resource optimization. Machine Learning (ML), as the most advanced branch of AI in the past few years, has shown encouraging results and applications in the edge environment. Nevertheless, edge-powered ML solutions are more complex to realize due to the joint constraints from both edge computing and AI domains, and the corresponding solutions are expected to be efficient and adapted in technologies such as data processing, model compression, distributed inference, and advanced learning paradigms for Edge ML requirements. Despite the fact that a great deal of the attention garnered by Edge ML is gained in both the academic and industrial communities, we noticed the lack of a complete survey on existing Edge ML technologies to provide a common understanding of this concept. To tackle this, this paper aims at providing a comprehensive taxonomy and a systematic review of Edge ML techniques, focusing on the soft computing aspects of existing paradigms and techniques. We start by identifying the Edge ML requirements driven by the joint constraints. We then extensively survey more than twenty paradigms and techniques along with their representative work, covering two main parts: edge inference, and edge learning. In particular, we analyze how each technique fits into Edge ML by meeting a subset of the identified requirements. We also summarize Edge ML frameworks and open issues to shed light on future directions for Edge ML.
Wenbin Li, Hakim Hacid, Ebtesam Almazrouei, Merouane Debbah
2023-02-16T20:33:33Z
http://arxiv.org/abs/2302.08571v2
# A Review and a Taxonomy of Edge Machine Learning: Requirements, Paradigms, and Techniques ###### Abstract The union of Edge Computing (EC) and Artificial Intelligence (AI) has brought forward the Edge AI concept to provide intelligent solutions close to end-user environment, for privacy preservation, low latency to real-time performance, as well as resource optimization. Machine Learning (ML), as the most advanced branch of AI in the past few years, has shown encouraging results and applications in the edge environment. Nevertheless, edge powered ML solutions are more complex to realize due to the joint constraints from both edge computing and AI domains, and the corresponding solutions are expected to be efficient and adapted in technologies such as data processing, model compression, distributed inference, and advanced learning paradigms for Edge ML requirements. Despite that a great attention of Edge ML is gained in both academic and industrial communities, we noticed the lack of a complete survey on existing Edge ML technologies to provide a common understanding of this concept. To tackle this, this paper aims at providing a comprehensive taxonomy and a systematic review of Edge ML techniques: we start by identifying the Edge ML requirements driven by the joint constraints. We then survey more than twenty paradigms and techniques along with their representative work, covering two main parts: edge inference, and edge learning. In particular, we analyze how each technique fits into Edge ML by meeting a subset of the identified requirements. We also summarize Edge ML open issues to shed light on future directions for Edge ML. Edge Artificial Intelligence, Edge Machine Learning, Distributed Learning, Distributed Inference, Federated Learning, Split Learning, Transfer Learning, Model Compression, Dimensionality Reduction. ## I Introduction The tremendous success of Artificial Intelligence (AI) technologies [1] in the past few years has been driving both industrial and societal transformation through domains such as Computer Vision (CV), Natural Language Processing (NLP), Robotics, Industry 4.0, Smart Cities, etc. This success is mainly brought by deep learning, providing the conventional Machine Learning (ML) techniques with capabilities of processing raw data and discovering intricate structures [2]. Daily human activities are now immersed with AI-enabled applications from content search, service recommendation to automatic identification and knowledge discovery. The existing ML models, especially deep learning models, such as DALL-E 2 [3], Switch transformer [4], and Go-pher [5], tend to rely on complex model structures and large model size to provide competitive performances. For instance, the largest WuDao 2.0 model [6] trained on 4.9TB of data has surpassed state-of-the-art levels on nine benchmark tasks with a striking 1.75 trillion parameters. As a matter of fact, large models have clear advantages on multi-modality, multi-task, and benchmark performance. However, such models require a relatively very large training data-sets to be built as well as a large amount of computing resources during the training and inference phases. This dependency makes them usually closed to public access, and unsuitable to be directly deployed for end devices or even small/medium enterprise level to provide real-time, offline, or privacy-oriented services. In parallel with ML development, Edge Computing (EC) was firstly proposed in 1990 [7]. 
The main principle behind EC is to bring the computational resources at locations closer to end-users. This was intended to deliver cached content, such as images and videos, that are usually communication expensive, and prevent heavy interactions with the main servers. This idea has later evolved to host applications on edge computing resources [8]. The recent and rapid proliferation of connected devices and intelligent systems has been further pushing EC from the traditional base station level or the gateway level to the end device level. This offers numerous technical advantages such as low latency, mobility, and location awareness support to delay-sensitive applications [9]. This serves as a critical enabler for emerging technologies like 6G, extended reality, and vehicle-to-vehicle communications, to mention only a few. Edge ML [10], as the ML instantiation powered by EC and a union of ML and EC, has brought the processing in ML to the network edge and adapted ML technologies to the edge environment. In this work, edge environment refers to the end-user side pervasive environment composed of devices from both base station level and the end device level. In classical ML scenarios, users run ML applications on their resource-constrained devices (e.g., mobile phones, and Internet of Things (IoT) sensors and actuators), while the core service is performed on the cloud server. In Edge ML, either optimized models and services are deployed and executed in the end-user's device, or the ML models are directly built on the edge side. This computing paradigm provides ML applications with advantages such as real-time immediacy, low latency, offline capability, enhanced security and privacy, etc. However, the Edge ML's core research challenge remains how to adapt ML technologies to edge environmental constraints such as limited computation and communication resources, unreliable network connection, data sensitivity, etc. while keeping similar or acceptable performance. Research work was done in the past few years tackling different aspects of this meta-challenge such as: model compression [11], transfer learning [12], and federated learning [13]. In parallel with the above-mentioned promising results in diverse areas, we noticed that very few work has been realized to deliver a systematic view of relevant Edge ML techniques. Worth reporting, Wang et al., [14, 15] present a comprehensive survey on the convergence of edge computing and deep learning, which covers aspects of hardware, communication, model, as well as edge applications and edge optimization. The work is a good reference as Edge ML technology stack. On the other hand, the analysis of edge ML paradigms are rather brief without a comprehensive analysis of diverse related problematics and the matching solutions. With the rapid evolution in ML paradigms and techniques, our paper focuses on the soft computing aspect of edge ML and aims at providing a thorough and up-to-date technique review for ML model training and inference on the edge by answering the three following questions: * What is the technique perimeter of Edge ML to build an intelligent model? * What are the computational and environmental constraints and requirements for ML on the edge? * How existing ML techniques can fit into an edge environment regarding these requirements? 
To answer the three above questions, this review is realized by firstly identifying the Edge ML requirements, and then individually review existing ML techniques and analyzing if and how each technique can fit into edge by fulfilling a subset of the requirements. Following this methodology, our goal is to be as exhaustive as possible in the work coverage and provide a panoramic view of all relevant Edge ML techniques with a special focus on machine learning for model training and inference at the edge. Other topics, such as Edge ML hardware [16] and edge communication [17], are beyond our scope of this paper. As such, we do not discuss them in this review. The remainder of the paper is organized as follows: Section II introduces the Edge ML motivation driven by the requirements. Section III provides an overview of all the surveyed edge ML techniques. From Section IV, we respectively describe each technique and analyze them in relation to Edge ML requirements. Section VI summarizes the technique review part. Section VII identifies the challenges and open issues in Edge ML. Section VIII concludes our work and shed light on future perspectives. ## II Edge ML: Requirements In the context of machine learning, be it supervised learning, unsupervised learning, or a reinforcement learning, an ML task could be either a training or an inference. As in every technology, it is critical to understand the underlying requirements that ensure proper expectations. By definition, the edge infrastructure is generally resource-constrained in terms of computation power, i.e., processor and memory, storage capacity, i.e., auxiliary storage, and communication capability, i.e., network bandwidth. ML models on the other hand are commonly known to be hardware demanding with computationally expensive and memory intensive features. Consequently, the union of EC and ML exhibits both constraints from edge environment and ML models. When designing edge powered ML solutions, requirements from both the hosting environment and the ML solution itself need to be considered and fulfilled for suitable, effective, and efficient results. We introduce in this section the Edge ML requirements structured in three categories: (i) ML requirements, (ii) EC requirements, and (iii) overall requirements, which are composite indicators from ML and EC for Edge ML performance. It is worth mentioning that the general quality of service attributes, e.g., availability and reliability, are always relevant but not listed here. This is because they are applicable to all services but not directly related to Edge ML. The three categories of requirements are summarized in figure 1. ### _ML Requirements_ We foresee five main requirements an ML system should consider: (i) Low Task Latency, (ii) High Performance, (iii) Generalization and Adaptation, (iv) Labelled Data Independence, and (v) Enhanced Privacy and Security. We detail these in the following. * **Low Task Latency:** task latency refers to the end-to-end processing time for one ML task, in seconds (s), and Fig. 1: Edge ML Requirements. is determined by both ML models and the supporting computation infrastructure. Low task latency is important to achieve fast or real-time ML capabilities, especially for time-critical use-cases such as autonomous driving. We use the term task latency instead latency to differentiate this concept with communication latency that describes the time for sending the request and receiving the answer. 
* **High Performance:** the performance of an ML task is represented by its results and measured by general performance metrics such as top-n accuracy, and f1-score in percentage points (pp), as well as use case dependent benchmarks such as General Language Understanding Evaluation (GLUE) benchmark for NLP [18] or Behavior Suite for reinforcement learning [19]. * **Generalization and Adaptation:** the models are expected to learn the generalized representation of data instead of the task labels, so as to be easily generalized to a domain instead of specific tasks. This brings the models capability to solve new and unseen tasks and realize a general ML directly or with a brief adaptation process. Furthermore, facing the disparity between learning and prediction environments, ML models can be quickly adapted to specific environments to solve the environmental specific problems. * **Labelled Data Independence:** the widely applied supervised learning in modern machine learning paradigms requires large amounts of data to train models and generalize knowledge for later inference. However, in practical scenarios, we cannot assume that all data in the edge are correctly labeled. The independence of labelled data indicates the capability of an Edge ML solution to solve one ML task without labelled data or with few labelled data. * **Enhanced Privacy and Security:** the data acquired from edge carry much private information, such as personal identity, health status, and messages, preventing these data to be shared in a large extent. In the meantime, frequent data transmission over network threatens data security as well. The enhanced privacy and security requires the corresponding solution to process data locally and minimize the shared information. ### _EC Requirements_ Three main edge environmental requirements from EC impact the overall Edge ML technology: (i) Computational Efficiency, (ii) Optimized Bandwidth, and (iii) Offline Capability, summarized below. * **Computational Efficiency:** refers to the efficient usage of computational resources to complete an ML task. This includes both processing resources measured by the number of arithmetic operations (OPs), and the required memory measured in MB. * **Optimized Bandwidth:** refers to the optimization of the amount of data transferred over network per task, measured by MB/Task. Frequent and large data exchanges over a network can raise communication and task latency. An optimized bandwidth usage expects Edge ML solutions to balance the data transfer over the network and local data processing. * **Offline Capability:** since the connectivity of edge devices is often weak and/or unstable, requiring operations to be performed on the edge directly. The offline capability refers to the ability to solve an ML task when network connections are lost or without network connection. ### _Overall Requirements_ The global requirements are composite indicators from ML and environmental requirements for Edge ML performance. We specify two overall requirements in this category: (i) Energy Efficiency, and (ii) Cost Optimization. * **Energy Efficiency:** energy efficient refers to the number of ML tasks obtained per power unit, in Task/J. The energy efficiency is determined by both the computation and communication design of Edge ML solutions and its supporting hardware. * **Cost optimization:** Similar to energy consumption, edge devices are generally low cost comparing to cloud servers. 
The cost here refers to the total cost realizing one ML task in an edge environment. This is again determined by both the Edge ML software implementation and its supporting infrastructure usage. It should be noted that, depending on the nature of Edge ML applications, one Edge ML solution does not necessarily fulfill all the requirements above. The exact requirements for each specific Edge ML application varies according to each requirement's critical level to an application. For example, for autonomous driving, the task latency is much more critical than power consumption and cost optimization requirements. ## III Techniques Overview Figure 2 shows a global view of edge Machine Learning techniques reviewed in this paper. We structure the related techniques into: (i) edge inference, and (ii) edge learning. The edge inference category introduces the technologies to accelerate the task latency of ML model inference. This is performed through, e.g., compressing existing models to consume less hardware resources or by dividing existing models into several parts for parallel inference collaboration. The edge learning category introduces solutions to directly build ML models on the edge side by learning locally from edge data. We detail the categories in the next sections. Before introducing the details of each reviewed technique, we go through three basic machine learning paradigms, i.e., supervised learning, unsupervised learning, and reinforcement learning, to lay the theoretical foundation of ML. Briefly, supervised learning involves using an ML model to learn a mapping function between input data and the target variable from labeled data-set. Unsupervised learning directly describes or extracts relationships in unlabeled data without any guidance from labelled data. Reinforcement learning is the process that an ML agent continuously interacts with its environment, performs actions to get awards, and learns to achieve a goal by the trial-and-error method. Extending the work from [20], we give below the formal definition of the three basic learning paradigms. Breakthroughs have been made in all the three ML learning paradigms to derive meaningful data insights and bring intelligent capabilities, while the reviewed techniques in this paper all fit into the three general machine learning paradigms. 
### _Supervised Learning_ Supervised learning learns a function \(f_{\theta}:X\rightarrow Y\) mapping inputs \(x_{i}\in X\) to the corresponding outputs \(y_{i}\in Y\) with the help of a labeled data-set \(D\) of \(m\) samples \(D=\{(x_{i},y_{i})\}_{i=1}^{m}\), determining the model parameter values \(\theta\) specific to \(D\) that minimize an empirical loss function \(L_{D}\) through a training process as: \[\theta_{SL}:=\underset{\theta}{arg\ min}\ L_{D}(\theta), \tag{1}\] where \(SL\) stands for "supervised learning". Fig. 2: Edge ML Technique Overview. In practice, the labelled dataset \(D\) is often divided into training, validation and testing datasets \(D^{tr}\), \(D^{val}\), \(D^{test}\) to respectively train
the model, guide the training process and evaluate model performance after training [21]. Finding globally optimal values of \(\theta_{SL}\) is computationally expensive, while in practice the training process is commonly an approximation to find sub-optimal \(\theta_{SL}\) values guided by a predefined meta-knowledge \(\omega\) including the initial model parameters \(\theta\), the training optimizer and learning rate in the case of neural networks, as: \[\theta_{SL}\approx g_{\omega}(D,L_{D}), \tag{2}\] where \(g_{\omega}\) is an optimization procedure that uses the predefined meta-knowledge \(\omega\), dataset \(D\) and loss function \(L_{D}\) to continuously update the model parameters \(\theta\) and output the final \(\theta_{SL}\). ### _Unsupervised Learning_ Training an ML model in the unsupervised manner is very similar to supervised learning, except that the learned function \(f_{\theta}:X\rightarrow X\) maps an input \(x_{i}\in X\) to the same input \(x_{i}\) or to other inputs. Unsupervised learning only uses an unlabeled dataset \(\bar{D}\) of \(n\) samples \(\bar{D}=\{x_{i}\}_{i=1}^{n}\) to determine the \(\theta\) values specific to the dataset \(\bar{D}\) that minimize an empirical loss function \(L_{\bar{D}}\) through a training process as: \[\theta_{UL}:=\underset{\theta}{arg\ min}\ L_{\bar{D}}(\theta), \tag{3}\] where \(UL\) stands for "unsupervised learning". Furthermore, the same approximation is applied to unsupervised learning to efficiently fit the \(\theta_{UL}\) to \(\bar{D}\): \[\theta_{UL}\approx g_{\omega}(\overline{D},L_{\bar{D}}) \tag{4}\] In addition to the above unsupervised learning paradigm which is used to train ML models, other unsupervised learning techniques such as clustering [22] apply predefined algorithms and computing steps to _directly_ generate expected outputs (e.g., data clusters) from \(\bar{D}\). In such a context, unsupervised learning approximates the values of the specific algorithms' hyperparameters \(\overline{\theta}_{UL}\) as: \[\overline{\theta}_{UL}\approx g_{\omega}(\overline{D},L_{\bar{D}}) \tag{5}\] ### _Reinforcement Learning_ In the classic scenario of reinforcement learning where agents know the state at any given time step, the reinforcement learning paradigm can be formalized into a Markov Decision Process (MDP) as \(M=\{S,\ A,\ P,\ r,\ p_{0},\ \gamma,\ T\}\), where \(S\) is the set of states, \(A\) the set of actions, \(P\) the transition probability distribution, with \(P(s_{t+1}|s_{t},\ a_{t})\) the transition probability from \(s_{t}\) to \(s_{t+1}\) via \(a_{t}\), \(r:S\times A\rightarrow\mathbb{R}\) the reward function, \(p_{0}\) the probability distribution over initial states, \(\gamma\in[0,1]\) the discount factor prioritizing short- or long-term rewards by respectively decreasing or increasing it, and \(T\) the maximum number of time steps. At a time step \(t\leq T\), a policy function \(\pi_{\theta}\), usually represented by a model in the case of deep reinforcement learning, is used to determine the action \(a_{t}\) that an agent performs at state \(s_{t}\): \(a_{t}=\pi_{\theta}(s_{t})\), where \(\theta\) are the parameters of the policy function; after the action \(a_{t}\), the agent receives a reward \(r_{t}=r(s_{t},\ \pi_{\theta}(s_{t}))\), \(r_{t}\in\mathbb{R}\), and enters a new state \(s_{t+1}\). The interaction between agent and environment continues until a criterion is met, such as the rewards being maximized.
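As an illustration of the agent–environment loop formalized above, here is a small self-contained Python sketch that accumulates the discounted return over one episode. The toy one-dimensional chain environment, the `rollout` helper, and the random policy are hypothetical examples, not part of the surveyed work.

```python
import random

def rollout(policy, step_fn, s0, gamma=0.99, T=100):
    """Run one episode of the agent-environment loop and return the
    discounted return sum_t gamma^t * r_t (illustrative sketch)."""
    s, ret, discount = s0, 0.0, 1.0
    for _ in range(T):
        a = policy(s)                    # a_t = pi_theta(s_t)
        s, r, done = step_fn(s, a)       # environment transition and reward
        ret += discount * r
        discount *= gamma
        if done:
            break
    return ret

# Toy 1-D chain environment: move left/right, reward for reaching state 10.
def step_fn(s, a):
    s_next = s + (1 if a == 1 else -1)
    return s_next, float(s_next == 10), s_next in (-10, 10)

random_policy = lambda s: random.choice([0, 1])
print(rollout(random_policy, step_fn, s0=0))
```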
The objective of the reinforcement learning is to make agents learn to act and maximize the received rewards as: \[\theta_{RL}:=\underset{\theta}{arg\ min}\ \mathbb{E}_{traj}\Big[-\sum_{t=1}^{T}\gamma^{t-1}\,r_{t}\Big], \tag{6}\] where the expectation is taken over the trajectories generated by the policy \(\pi_{\theta}\). #### Iii-B1 Quantization * **Fixed-point Quantization**: Comparing to a high-precision floating-point representation, the fixed-point parameter representation can offer faster, cheaper, and more power-efficient arithmetic operations. * **Binarization and Ternarization**: binarization [27] is the quantization of parameters into just two values, typically -1, 1 with a scaling factor. Ternarization [28], on the other hand, adds the value 0 to the binary value set to express 0 in models. * **Logarithmic Quantization**: In a logarithmic quantization [29], parameters are quantized into powers of two with a scaling factor. Work in [30] shows that a weight's representation range is more important than its precision in preserving network accuracy. Thus, logarithmic representations can cover wide ranges using fewer bits, compared to the other above-mentioned linear quantization formats. To produce the corresponding quantized model, post-training quantization and quantization-aware training can be applied. Given an existing trained model, post-training quantization directly converts the trained model parameters and/or activations according to the conversion needs, to reduce model size and improve task latency during the inference phase. On the other hand, instead of quantizing existing models, quantization-aware training is a method that trains an ML model by emulating inference-time quantization, which has proved to be better for model accuracy [31]. During the training of a neural network, quantization-aware training simulates low-precision behavior in the forward pass, while the backward pass based on back-propagation remains the same. The training process takes into account both the error from training data labels and the quantization error, which is accumulated in the total loss of the model; hence the optimizer tries to reduce it by adjusting the parameters accordingly. Several contributions are worth noting in the literature.
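To make the post-training quantization idea concrete, the following is a minimal NumPy sketch of affine 8-bit quantization of a weight tensor, together with the dequantization error it introduces. The function names and the uniform (linear) scheme are illustrative assumptions and do not correspond to any specific framework API.

```python
import numpy as np

def quantize_uint8(w):
    """Post-training affine quantization of a float tensor to 8-bit integers
    (a minimal sketch of the scheme described above)."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = np.round(-w_min / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(256, 256).astype(np.float32)      # a dense layer's weights
q, scale, zp = quantize_uint8(weights)
w_hat = dequantize(q, scale, zp)
print("storage: %d -> %d bytes" % (weights.nbytes, q.nbytes))  # 4x smaller
print("mean abs quantization error:", np.abs(weights - w_hat).mean())
```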
The 8-bit quantization schema proposed in [32] reported a four-fold reduction in model size and an up to 50% reduction in inference task latency for MobileNet [33] on the ARM NEON-based implementation. The sacrifice is a 1.8 percentage-point accuracy drop on the Common Objects in Context (COCO) dataset [34]. Logarithmic quantization places more quantization boundaries for low-magnitude values and fewer boundaries for high-magnitude values. This can lead to a worse performance than linear quantization at the same bit-width because of errors that happen at high-magnitude values. To alleviate this limitation, a successive logarithmic quantization (SLQ) scheme is proposed in [35] to quantize the quantization error again when it is higher than a certain threshold. This has achieved less than a 1.5 percentage-point accuracy drop for AlexNet [36], SqueezeNet [37], and VGG-S [38] with 4- to 5-bit weight representations. Moreover, a training method specifically designed for SLQ was proposed in [39], further improving the result with a performance degradation of around 1% at 3-bit weight quantization. Zhou et al. [40] analyzed various data precision combinations, concluding that accuracy deteriorates rapidly when weights are quantized to fewer than four bits. However, significant achievements have been made in binary neural networks recently, since they consume far fewer computing resources and less energy when performing ML tasks and can be easily deployed on tiny, constrained devices [27]. More recent work in [41] presents an accurate and efficient binary neural network for keyword spotting applications along with a binarization-aware training method emphasizing high-frequency information for training optimization. Implementation on ARMv8 edge devices achieved an impressive 22.3-times speedup in task latency and a 15.5-times storage saving with less than a 3% accuracy drop on the Google Speech Commands V1-12 task [42]. Overall, moving from high floating-point to lower-precision data representations is especially useful for ML models on edge devices with only low-precision operation support, such as Application-Specific Integrated Circuits (ASIC) and Field-Programmable Gate Arrays (FPGA), to facilitate the trade-off between task accuracy and task latency. Quantization reduces the precision of parameters and/or activations, and thereby decreases the inference task latency by reducing the consumption of computing resources, while the workload reduction brought by cheaper arithmetic operations leads to energy and cost optimization as well. #### Iii-B2 Weight Reduction Weight reduction is a class of methods that removes redundant parameters from \(\theta\) through pruning and parameter approximation. We reviewed the three following categories of methods in this paper: * **Pruning**. The process of removing redundant or non-critical weights and/or nodes from models [11]: weight-based pruning removes connections between nodes (e.g., neurons in a neural network) by setting the relevant weights to zero to make the ML models sparse, while node-based pruning removes all target nodes from the ML model to make the model smaller. * **Weight Sharing**. The process of grouping similar model parameters into buckets and reusing shared weights in different parts of the model to reduce model size, or among models [43] to facilitate the model structure design. * **Low-rank Factorization**. The process of decomposing the weight matrix into several low-rank matrices by uncovering explicit latent structures [44].
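A minimal sketch of the first and third ideas above: weight-based magnitude pruning and a rank-\(k\) factorization of a weight matrix. The sparsity level, the matrix size, and the chosen rank are arbitrary illustrations rather than values taken from the cited work.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Weight-based pruning: zero out the smallest-magnitude connections so the
    model becomes sparse (illustrative sketch of the pruning bullet above)."""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    return w * mask, mask

w = np.random.randn(512, 512).astype(np.float32)
w_pruned, mask = magnitude_prune(w, sparsity=0.7)            # remove ~70% of weights
print("kept fraction:", mask.mean())                          # ~0.30

# Low-rank factorization: approximate w with rank-k factors (k << 512).
u, s, vt = np.linalg.svd(w, full_matrices=False)
k = 64
w_lowrank = (u[:, :k] * s[:k]) @ vt[:k]
print("rank-64 relative error:", np.linalg.norm(w - w_lowrank) / np.linalg.norm(w))
```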
A node-based pruning method is introduced in [45] to remove redundant neurons in trained CNNs. In this work, similar neurons are grouped together following a similarity evaluation based on squared Euclidean distances and then pruned away. Experiments showed that the pruning method can remove up to 35% nodes in AlexNet with a 2.2% accuracy loss on the dataset of ImageNet [46]. A grow-and-prune paradigm is proposed in [47] to complement network pruning to learn both weights and compact DNN architectures during training. The method iteratively tunes the architecture with gradient-based growth and pruning of neurons and weight. Experimental results showed the compression ratio of 15.7x and 30.2x for AlexNet and VGG-16 network, respectively. This delivers significant additional parameter and arithmetic operation reduction relative to pruning only methods. In practice, pruning is often combined with a post tuning or a retraining process to improve the model accuracy after pruning [48]. A Dense-Sparse-Dense training method is presented in [49] which introduces a post training step to re-dense and recover the original model symmetric structure to increase the model capacity. This showed to be efficient as it improves the classification accuracy by 1.1% to 4.3% on ResNet-50 [50], ResNet-18 [50], and VGG-16 [51]. The aforementioned pruning methods are static, as they permanently change the original network structure which may lead to a decrease in model capability. On the other hand, dynamic pruning [52] determines at run-time which layers, image channels (for CNN), or neurons would not participate in further model computing during a task. A dynamic channel pruning is proposed in [53]. This method dynamically selects which channel to skip or to process using feature boosting and suppression, which is achieved by use of a side network trained together along the CNN to guide channel amplification and omission. This work achieved a 2x acceleration on ResNet-18 with 2.54% top-1, 1.46% top-5 accuracy loss. A multi-scale weight sharing method is introduced in [54] to share weights among the convolution kernels of the same layer. To share kernel weights for multiple scales, the shared tuple of kernels is designed to have the same shape, and different kernels in the shared tuple are applied to different scales. With approximately 25% fewer parameters, the shared weight ResNet model provides similar performance compared to the baseline ResNets [50]. Instead of looking up tables to locate the shared weight for each connection, HashedNets is proposed in [55] to randomly group connection weights into hash buckets via a low-cost hash function. These weights are tuned to adjust to the HashedNets weight sharing architecture with standard back-propagation during the training. Evaluations showed that HashedNets achieved a compression ratio of 64% with an around-0.7% accuracy improvement against a five-layer CNN baseline with the MNIST dataset [56]. Structured matrices use repeated patterns within matrices to represent model weights to reduce the number of parameters. The _circulant_ matrix, in which all row vectors are composed of the same elements and each row vector is shifted one element to the right relative to the preceding row vector, are often used as the structured matrix to provide a good compression and accuracy for RNN type models [57, 58]. 
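The following sketch illustrates hashing-based weight sharing in the spirit of HashedNets: every position of a virtual weight matrix is mapped by a fixed random hash to one of a small number of shared buckets, so the layer stores far fewer real parameters than the dense matrix it emulates. The class name, dimensions, and bucket count are illustrative assumptions, not the original implementation.

```python
import numpy as np

class HashedLinear:
    """Linear layer whose (in_dim x out_dim) virtual weight matrix is backed by
    only `n_buckets` real parameters chosen by a cheap random hash."""
    def __init__(self, in_dim, out_dim, n_buckets, seed=0):
        rng = np.random.default_rng(seed)
        self.buckets = rng.standard_normal(n_buckets) * 0.1     # shared weights
        # Fixed random "hash": map every (i, j) position to a bucket index.
        self.index = rng.integers(0, n_buckets, size=(in_dim, out_dim))

    def forward(self, x):                                        # x: (batch, in_dim)
        w = self.buckets[self.index]                             # virtual weight matrix
        return x @ w

layer = HashedLinear(in_dim=256, out_dim=128, n_buckets=4096)    # 8x fewer parameters
y = layer.forward(np.random.randn(8, 256))
print(y.shape)                                                   # (8, 128)
```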
The Efficient Neural Architecture Search (Efficient NAS) via parameter sharing is proposed in [59], in which only one shared set of model parameters is trained for several model architectures, a.k.a., child models. The shared weights are used to compute the validation losses of different architectures. Sharing parameters among child models allows efficient NAS to deliver strong empirical performances for neural network design and use fewer GPU FLOP than automatic model design approaches. The NAS approach has been successfully applied to design model architectures for different domains [60] including CV and NLP. As to low-rank factorization, to find the optimal decomposed matrices to substitute the original weight matrix, Denton et al. [61] analyzed three decomposition methods on pre-trained weight matrices:(i) singular-value decomposition, (ii) canonical polyadic decomposition, and (iii) blustering approximation. Experimental results on a 15-layer CNN demonstrate that singular-value decomposition achieved the best performance by a compression ratio of 2.4x to 13.4x on different layers along with a 0.84% point of top-one accuracy loss in the ImageNet dataset. A more recent work [62] proposes a data-aware low-rank compression method (DRONE) for weight matrices of fully-connected and self-attention layers in large-scale NLP models. As weight matrices in NLP models, such as BERT [63], do not show obvious low-rank structures, a low-rank computation could still exist when the input data distribution lies in a lower intrinsic dimension. The proposed method considers both the data distribution term and the weight matrices to provide a closed-form solution for the optimal rank-k decomposition. Experimental results show that DRONE can achieve 1.92x speedup on the Microsoft Research Paraphrase Corpus (MRPC) [64] task with only 1.5% loss in accuracy, and when DRONE is combined with distillation, it reaches 12.3x speedup on natural language inference tasks of MRPC, Recognizing Textual Entailment (RTE) [65], Corpus of Linguistic Acceptability (CoLA) [66] and Semantic Textual Similarity (STS) [67]. Overall, weight reduction directly reduces the ML model size by removing uncritical parameters. When performing tasks after weight reduction, ML models use less memory and require fewer arithmetic operations, which directly reduce the task latency with less workload and improve the computational resource efficiency. In addition, such improvement contributes to optimized energy consumption and cost. #### Iii-B3 Knowledge Distillation Knowledge Distillation is a procedure where a neural network is trained on the output of another network along with the original targets in order to transfer knowledge between the ML model architectures [68]. In this process, a large and complex network, or an ensemble model, is trained we with a labelled data-set for a better task performance. afterwards, a smaller network is trained with the help of the cumbersome model via a loss function \(L\), measuring the output difference of the two models. This small network should be able to produce comparable results, and in the case of over-fitting, it can even be made capable of replicating the results of the cumbersome network. A knowledge distillation framework for fast objects detection task is proposed in [69]. 
To address the specific challenges of object detection in the form of regression, region proposals, and less voluminous labels, two aspects are considered: (i) a weighted cross-entropy loss to address the class imbalance, and (ii) a teacher bounded loss to handle the regression component, together with adaptation layers to better learn from intermediate teacher distributions. Evaluations with the datasets of Pattern Analysis, Statistical Modelling and Computational Learning (PASCAL) [70], Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) [71], and COCO showed accuracy improvements of 3 to 5 percentage points. Wen et al. [72] argued that overly uncertain supervision from teachers can negatively influence the model results. This is due to the fact that the knowledge from a teacher is useful but still not exactly right compared with a ground truth. Knowledge adjustment and dynamic temperature distillation are introduced in this work to penalize incorrect supervision and overly uncertain predictions from the teacher, making student models more discriminative. Experiments on CIFAR-100 [73], CINIC-10 [74], and Tiny ImageNet [75] showed nearly state-of-the-art accuracy. MiniViT [76] proposes to compress vision transformers with weight sharing across layers and weight distillation. A linear transformation is added on each layer's shared weights to increase weight diversity. Three types of distillation for transformer blocks are considered in this work: (i) prediction-logit distillation, (ii) self-attention distillation, and (iii) hidden-state distillation. Experiments showed MiniViT can reduce the size of the pre-trained Swin-B transformer by 48% while achieving an increase of 1.0% in Top-1 accuracy on ImageNet. Overall, knowledge distillation directly reduces the ML model size by simplifying model structures. Compared to the source model, the target model has a more compact and distilled structure with fewer parameters. Hence the workload of a task is reduced, leading to better computational efficiency, lower task latency, and optimized energy consumption and cost.

#### Iii-A4 Activation Approximation

Besides a neural network's size complexity, i.e., in terms of the number of parameters, and architecture complexity, i.e., in terms of layers, activation functions also impact the task latency of a neural network. Activation function approximation replaces non-linear activation functions (e.g., _sigmoid_ and _tanh_) in ML models with less computationally expensive functions (e.g., _ReLU_) to simplify the calculation, or converts the computationally expensive calculation into a series of lookup tables. In an early work [77], the Piece-wise Linear Approximation of Non-linear Functions (PLAN) was studied. The _sigmoid_ function was approximated by a combination of straight lines, and the gradients of the lines were chosen such that all the multiplications were replaced by simple shift operations. Compared to _sigmoid_ and _tanh_, Hu et al. [78] show that _ReLU_, among other linear functions, is not only less computationally expensive but also more robust against the vanishing gradient problem, in which the error dramatically decreases along the back-propagation process in deep neural networks. Activation approximation improves the computing resource usage by reducing the required number of arithmetic operations in ML models, and thus decreases the task latency with an acceptable increase in task error.
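To make the piece-wise linear idea concrete, the sketch below approximates the sigmoid with a few straight-line segments whose slopes are powers of two, so that the multiplications reduce to shift operations in hardware. The breakpoints follow the commonly cited PLAN segments, but they should be checked against [77] before reuse.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plan_sigmoid(x):
    """Piece-wise linear sigmoid approximation (segment slopes are powers of two: 1/4, 1/8, 1/32)."""
    x = np.asarray(x, dtype=np.float64)
    ax = np.abs(x)
    y = np.where(ax >= 5.0, 1.0,
        np.where(ax >= 2.375, 0.03125 * ax + 0.84375,
        np.where(ax >= 1.0,   0.125   * ax + 0.625,
                              0.25    * ax + 0.5)))
    # Symmetry sigmoid(-x) = 1 - sigmoid(x) covers negative inputs.
    return np.where(x >= 0.0, y, 1.0 - y)

xs = np.linspace(-8, 8, 1000)
max_err = np.max(np.abs(sigmoid(xs) - plan_sigmoid(xs)))
print(f"max absolute error of the piece-wise approximation: {max_err:.4f}")
```

The worst-case error stays around two percentage points of the activation range, which is the kind of "acceptable increase in task error" traded for cheaper arithmetic.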
### _Distributed Inference_ Distributed Inference divides ML models into different partitions and carries out a collaborative inference by allocating partitions to be distributed over edge resources and computing in a distributed manner [79]. The target edge resources to distribute the inference task can be broadly divided into three levels: (i) local processors in the same edge device [80], (ii) interconnected edge devices [79], and (iii) edge devices and cloud servers [81]. Among the three levels, an important research challenge is to identify the partition points of ML models by measuring data exchanges between layers to balance the usage of local computational resources and bandwidth among distributed resources. To tackle the tightly coupled structure of CNN, a model parallelism optimization is proposed in [82], where the objective is to distribute the inference on edge devices via a decoupled CNN structure. The partitions are optimized based on channel group to partition the convolutional layers and then an input-based method to partition the fully connected layers, further exposing high degree of parallelism. Experiments show that the decoupled structure can accelerate the inference of large-scale ResNet-50 by 3.21x and reduce 65.3% memory use with 1.29% accuracy improvement. Another distributed inference framework is also proposed in [83] to decompose a complex neural network into small neural networks and apply class-aware pruning on each small neural network on the edge device. The inference is performed in parallel while considering available resources on each device. The evaluation shows that the framework achieves up to 17x speed up when distributing a variant of VGG-16 over 20 edge devices, with around 0.5% loss in accuracy. Distributed inference can improve the end-to-end task latency by increasing the computing parallelism over a distributed architecture. At a price of bandwidth usage and network dependency, the overall energy efficiency and cost are optimized. ### _Other Inference Acceleration techniques_ There exist other ways for accelerating inference in the literature. These have been categorized in a separate category as they are not as popular as the previously discussed techniques. These include: (i) Early Exit of Inference (EEoI), (ii) Inference Cache, and (iii) Model-Specific Inference Acceleration. We briefly review them in the following. #### Iii-C1 Early Exit of Inference (EEoI) The Early Exit of Inference (EEoI) is powered by a deep network architecture augmented with additional side branch classifiers [84]. This allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet, proposed in [84], is based on the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. By adding branch structures and exit criteria to neural networks, _BranchyNet_ is trained by solving a joint optimization problem on the weighted sum of the loss functions associated with the exit points. During the inference, _BranchyNet_ uses the entropy of a classification result as a measure of confidence in the prediction at each exit point and allows the input sample to exit early if the model is confident in the prediction. 
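A minimal sketch of the entropy-based exit criterion just described: each side branch produces class probabilities, and the sample leaves the network at the first branch whose prediction entropy falls below a threshold. The two toy branch classifiers and the threshold values are our own placeholders, not _BranchyNet_'s trained architecture.

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def early_exit_predict(x, branches, thresholds):
    """Run branch classifiers in order; return at the first confident (low-entropy) exit."""
    for exit_id, (branch, t) in enumerate(zip(branches, thresholds)):
        probs = softmax(branch(x))
        if entropy(probs) < t:
            return int(np.argmax(probs)), exit_id     # exited early at this branch
    return int(np.argmax(probs)), len(branches) - 1   # fell through to the final exit

# Toy "branches": linear classifiers standing in for the side classifiers of a deep CNN.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(10, 64)), rng.normal(size=(10, 64))
branches = [lambda x: W1 @ x, lambda x: W2 @ x]
label, exit_id = early_exit_predict(rng.normal(size=64), branches, thresholds=[0.5, np.inf])
print(f"predicted class {label} via exit {exit_id}")
```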
Evaluations have been conducted with LeNet [56], AlexNet, ResNet on MNIST, CIFAR-10 datasets, showing _BranchyNet_ can improve accuracy and significantly reduce the inference time of the network by 2x-6x. To improve the modularity of the _EEoI_ methods, a plug-and-play technique as Patience-based Early Exit is proposed in [85] for single branch models (e.g., ResNet, Transformer). The work couples an internal classifier with each layer of a pre-trained language model and dynamically stops inference when the intermediate predictions of the internal classifiers remain unchanged for a pre-defined number of steps. Experimental results with the ALBERT model [86] show that the technique can reduce the task latency by up to 2.42x and slightly improve the model accuracy by preventing it from overthinking and exploiting multiple classifiers for prediction. EEoI can statistically improve the latency of inference tasks by reducing the inference workload at a price of a decrease in the accuracy. The side branch classifiers slightly increase the memory use during inference, while the task computational efficiency is higher as in most of cases where side branch classifiers can stop the inference earlier. Generally, a correctly designed and trained _EEoI_ technique is able to improve energy efficiency and optimize cost. #### Iv-B2 Inference Cache Inference Cache saves models or models' inference results to facilitate future inferences of similar interest. This is motivated by the fact that ML tasks requested by nearby users within the coverage of an edge node may exhibit spatio-temporal locality [87]. For example, users within the same area might request recognition tasks for the same object of interest, which introduces redundant computation of deep learning inference. Besides the _Cachier_[87], which caches ML models with edge server for recognition applications and shows 3x speedup in task latency, _DeepCache_[88] targets the cache challenge for a continuous vision task. Given input video streams, _DeepCache_ firstly discovers the similarity between consecutive frames and identifies reusable image regions. During inference, _DeepCache_ maps the matched reusable regions on feature maps and fills the reusable regions with cached feature map values instead of real Convolutional Neural Network (CNN) execution. Experiments show that _DeepCache_ saves up to 47% inference execution time and reduces system energy consumption by 20% on average. A hybrid approach, semantic memory design (SMTM), is proposed in [89], combining inference cache with EEoI. In this work, low-dimensional caches are compressed with an encoder from high-dimensional feature maps of hot-spot classes. During the inference, SMTM extracts the intermediate features per layer and matches them with the cached features in fast memory: once matched, SMTM skips the rest of the layers and directly outputs the results. Experiments with AlexNet, GoogLeNet [90], ResNet50, MobileNet V2 [91] shows that SMTM can speed up the model inference over standard approaches with up to 2x and prior cache designs with up to 1.5x with only 1% to 3% point accuracy loss. Inference cache methods show their advantages of reducing task latency on continuous inference tasks or task batch. Since the prediction is usually made together with current input and previous caches, the accuracy can drop slightly. 
On the computational efficiency front, the cache lookup increases computing workload and memory usage, while the global computational efficiency is improved across tasks, as the inference computation for each data sample does not start from scratch. Energy consumption and cost are reduced in the context of tasks sharing spatio-temporal similarity.

#### Iv-B3 Model-Specific Inference Acceleration

Besides the above-mentioned edge inference techniques that can, in theory, be applied to most ML model structures, other research efforts aim at accelerating the inference process for specific model structures. We briefly review representative methods of inference acceleration for three mainstream neural network structures: (i) CNN, (ii) Recurrent Neural Network (RNN), and (iii) Transformers. For CNN models, _MobileNets_ [33] constructs small and low-latency models based on depth-wise separable convolution. This factorizes a standard convolution into a depth-wise convolution and a 1\(\times\)1 convolution, as a trade-off between latency and accuracy during inference. The latest version, _MobileNets_ V3 [92], adds squeeze-and-excitation layers [93] to the expansion-filtering-compression block of _MobileNets_ V2 [91]. As a result, it gives unequal weights to different channels from the input when creating the output feature maps. Combined with neural architecture search and NetAdapt [94], _MobileNets_ V3-Large reaches 75.2% accuracy and 156ms inference latency on ImageNet classification with a single-threaded core on a Google Pixel 1 phone. _GhostNet_ [95] also uses a depth-wise convolution to reduce the large number of parameters and FLOPs induced by normal convolution: given an input image, instead of applying the filters on all the channels to generate one channel of the output, the input tensors are sliced into individual channels and the convolution is then applied only on one slice. During inference, a portion of the input is processed by standard convolution, and the output of this is then passed to the second, depth-wise convolution to generate the final output. Experiments demonstrate that _GhostNet_ can achieve higher recognition performance, i.e., 75.7% top-1 accuracy, than _MobileNets_ V3 with similar computational cost on the ImageNet dataset. However, follow-up evaluations show that depth-wise convolution is more suitable for ARM/CPU and not friendly for GPU, and thus does not provide a significant inference speedup in practice. A real-time RNN acceleration framework is introduced in [96] to accelerate RNN inference for automatic speech recognition. The framework consists of a block-based structured pruning and several specific compiler optimization techniques including matrix reorder, load redundant elimination, and a compact data format for pruned model storage. Experiments achieve real-time RNN inference with a Gated Recurrent Unit (GRU) model on an Adreno 640 embedded GPU and show no accuracy degradation when the compression rate is not higher than 10x. Motivated by the way we pay visual attention to different regions of an image or correlate words in one sentence, the Transformer architecture is proposed in [97], showing encouraging results in various machine learning domains [98, 99]. On the downside, transformer models are usually slower than competitive CNN models [100] in terms of task latency due to the massive number of parameters, computation complexity that grows quadratically with token length, non-foldable normalization layers, and lack of compiler-level optimizations.
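Before turning to the transformer-oriented efforts discussed next, the depth-wise separable factorization behind MobileNets can be made concrete. The PyTorch sketch below compares the parameter count of a standard 3x3 convolution with its depth-wise plus 1x1 point-wise factorization; the channel sizes are arbitrary, and the block omits the normalization and activation layers of the real MobileNets blocks.

```python
import torch
import torch.nn as nn

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

c_in, c_out, k = 64, 128, 3

standard = nn.Conv2d(c_in, c_out, kernel_size=k, padding=1, bias=False)

depthwise_separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, kernel_size=k, padding=1, groups=c_in, bias=False),  # depth-wise
    nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),                         # 1x1 point-wise
)

x = torch.randn(1, c_in, 32, 32)
assert standard(x).shape == depthwise_separable(x).shape  # same output shape

print("standard conv params:        ", count_params(standard))             # c_in * c_out * k * k
print("depth-wise separable params: ", count_params(depthwise_separable))  # c_in * k * k + c_in * c_out
```

With these channel sizes the factorized block stores roughly 8x fewer weights, which is where the latency/accuracy trade-off of the approach comes from.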
Current research efforts, such as [101, 102], mainly focus on simplifying the transformer architecture to fundamentally improve inference latency, among which the recent EfficientFormer [103] achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6ms inference latency on an iPhone 12. In this work, a latency analysis is conducted to identify the inference bottlenecks in different layers of vision transformers, and EfficientFormer relies on a dimension-consistent design paradigm that leverages hardware-friendly 4D MetaBlocks and powerful 3D multi-scale hierarchical framework blocks, along with a latency-driven slimming method, to deliver real-time inference at MobileNet speed. Generally, model-specific inference acceleration techniques lower the workload of an inference task and thus reduce the task latency within the same edge environment. Though computational resource usage can vary among techniques, most works report an acceptable accuracy loss in exchange for a considerable decrease in resource usage. In the case of model over-fitting, inference acceleration can even improve the accelerated model's accuracy. The total energy consumption and cost are therefore reduced.

## V Edge Learning

Edge learning techniques directly build _ML_ models on native edge devices with local data. Distributed learning, transfer learning, meta-learning, self-supervised learning and other learning paradigms fitting into Edge ML are reviewed in this section to tackle different aspects of the Edge ML requirements.

### _Distributed Learning_

Compared to cloud-based learning, in which raw or pre-processed data are transmitted to the cloud for model training, distributed learning (DL) in the edge divides the model training workload onto the edge nodes, i.e., edge servers and/or edge clients, to jointly train models with a cloud server by taking advantage of individual edge computational resources. Modern distributed learning approaches tend to only transmit locally updated model parameters or locally calculated outputs to the aggregation servers, i.e., cloud or edge, or to the next edge node: in the server-client configuration, the aggregation server constructs the global model with all shared local updates [104]. On the other hand, in the peer-to-peer distributed learning setup, the model construction is achieved incrementally across the participating edge nodes [105]. Distributed learning can be applied to all three basic ML paradigms, namely: supervised learning, unsupervised learning, and reinforcement learning. Instead of learning from one optimization procedure \(g_{\omega}\), distributed learning constructs the global model by aggregating the optimization results of all participant nodes, as formalized by Equation 8: \[\theta\approx\mathcal{A}_{i=1}^{n}\,g_{\omega^{i}}(\text{D}^{i},L_{\text{D}}^{i}) \tag{8}\] where \(g_{\omega^{i}}\) is the optimization procedure driven by the meta-knowledge \(\omega^{i}\) of the participant node \(i\), \(i\in\{1,\dots,n\}\), and \(n\) is the number of distributed learning nodes. D stands for the data used for learning, which can be for example the labelled data-set \(D\) for supervised learning, the unlabelled data-set \(D\) for unsupervised learning, or the MDP \(M\) for reinforcement learning.
\(L_{\text{D}}^{i}\) is the corresponding loss on the given data \(\text{D}^{i}\), and \(\mathcal{A}\) is the aggregation algorithm (e.g., FedAvg [106] in the case of Federated Learning) that updates the model by use of all participants' optimization results (e.g., model parameters, gradients, outputs, etc.). Edge distributed learning results in two major advantages:

* **Enhanced privacy and security:** edge data often contains sensitive information related to personal or organizational matters that the data owners are reluctant to share. By transmitting only updated model parameters instead of the data, distributed learning on the edge trains ML models in a privacy-preserving manner. Moreover, the reduced frequency of data transmission enhances data security by restraining sensitive data to the edge environment.
* **Communication and bandwidth optimization:** Uploading data to the cloud leads to a large transmission overhead and is the bottleneck of the current learning paradigm [107]. A significant amount of communication is avoided by processing data in the edge nodes, and bandwidth usage is optimized via edge distributed learning.

From the architectural perspective, mainly three organizational architectures [13, 14] exist to achieve distributed learning in the server-client configuration, as illustrated in Figure 3 and introduced as follows:

* **Cloud-enabled DL.** Given a number of distributed and interconnected edge nodes, cloud-enabled DL (see Figure 3(a)) constructs the global model by aggregating in the cloud the local models' parameters. These parameters are computed directly in each edge device. Periodically, the cloud server shares the global model parameters with all edge nodes so that the upcoming local model updates are made on the latest global model.
* **Edge-enabled DL.** In contrast to cloud-enabled DL, edge-enabled DL (see Figure 3(b)) uses a local edge server to aggregate model updates from its managed edge devices. Edge devices within the management range of an edge server contribute to the global model training on the edge aggregation server. Since the edge aggregation server is located near the edge devices, edge-enabled DL does not necessitate communications between the edge and the cloud, which reduces the communication latency and brings offline task capability. On the other hand, edge-enabled DL is often resource-constrained and can only support a limited number of clients, which usually results in a degradation of the task's performance over time.
* **Hierarchical DL.** Hierarchical DL employs both cloud and edge aggregation servers to build the global model. Generally, edge devices within the range of the same edge server transmit local data to the corresponding edge aggregation server to individually train local models, and then the local models' parameters are shared with the cloud aggregation server to construct the global model. Periodically, the cloud server shares the global model parameters with all edge nodes (i.e., servers and devices), so that the upcoming local model updates are made on the latest global model. By this means, several challenges of distributed learning, such as Non-Identically Distributed Data (Non-IID) [108], class imbalance [109], and the heterogeneity of edge devices [110] with diverse computation capabilities and network environments, can be targeted in the learning design.
In fact, as each edge aggregation server is only responsible for training the local model with the collected data, the cloud aggregation server does not need to deal with data diversity and device heterogeneity across the edge nodes. In the following, we review two distributed learning paradigms in the context of Edge ML: (i) federated learning, and (ii) split learning.

#### Iii-A1 Federated Learning

Federated Learning (FL) [104] enables edge nodes to collaboratively learn a shared model while keeping all the training data on the edge nodes, decoupling the ability to do machine learning from the need to store the data in the cloud. In each communication round, the aggregation server distributes the global model's parameters to the edge training nodes, and each node trains its local model instance with the newly received parameters and local data. The updated model parameters are then transmitted to the aggregation server to update the global model. The aggregation is commonly realized via federated averaging (FedAvg) [106] or Quantized Stochastic Gradient Descent (QSGD) [111] for neural networks, involving multiple local Stochastic Gradient Descent (SGD) updates and one aggregation by the server in each communication round. FL is being widely studied in the literature. In particular, the survey in [13] summarizes and compares more than forty existing surveys on FL and edge computing regarding the covered topics. According to the distribution of training data and features among edge nodes, federated learning can be divided into three categories [112]: (i) Horizontal Federated Learning (HFL), (ii) Vertical Federated Learning (VFL), and (iii) Federated Transfer Learning (FTL). HFL refers to the federated learning paradigm where training data across edge nodes share the feature space but differ in samples. VFL federates models trained from data sharing the sample IDs but differing in feature space across edge nodes. Finally, FTL refers to the paradigm where data across edge nodes are correlated but differ in both samples and feature space.

Fig. 3: The distributed learning architectures available in the literature.

HFL is widely used to handle homogeneous feature spaces across distributed data. In addition to the initial work of FL [104], which demonstrated considerable latency and throughput benefits when performing query suggestion tasks in mobile environments, HFL is highly popular in the healthcare domain [113] where it is, for instance, used to learn from different electronic health records across medical organizations without violating patients' privacy and to improve the effectiveness of data-hungry analytical approaches. To tackle the limitation that HFL does not handle heterogeneous feature spaces, the continual horizontal federated learning (CHFL) approach [114] splits models into two columns corresponding to common features and unique features, respectively, and jointly trains the first column by using common features through HFL while locally training the second column by using unique features. Evaluations demonstrate that CHFL can handle uncommon features across edge nodes and outperform HFL models which are only based on common features. As a more challenging subject than HFL, VFL is studied in [115] to answer the entity resolution question, which aims at finding the correspondence between samples of the datasets and learning from the union of all features.
Since loss functions are normally not separable over features, a token-based greedy entity-resolution algorithm is proposed in [115] to integrate the constraint of carrying out entity resolution within classes on a logistic regression model. Furthermore, most studies of VFL only support two participants and focus on binary class logistic regression problems. A Multi-participant Multi-class Vertical Federated Learning (MMVFL) framework is proposed in [116]. MMVFL enables label sharing from its owner to other VFL participants in a privacy preserving manner. Experiment results on two benchmark multi-view learning datasets, i.e., Handwritten and Caltech7 [117], show that MMVFL can effectively share label information among multiple VFL participants and match multi-class classification performance of existing approaches. As an extension of the federated learning paradigm, FTL deals with the learning problem of correlated data from different sample space and feature space. FedHealth [118] is a framework for wearable healthcare targeting the FTL as a union of FL and transfer learning. The framework performs data aggregation through federated learning to preserve data privacy and builds relatively personalized models by transfer learning to provide adapted experiences in edge devices. To address the data scarcity in FL, a FTL framework for cross-domain prediction is presented in [119]. The idea of the framework is to share existing applications' knowledge via a central server as a base model, and new models can be constructed by converting a base model to their target- domain models with limited application-specific data using a transfer learning technique. Meanwhile, the federated learning is implemented within a group to further enhance the accuracy of the application-specific model. The simulation results on COCO and PETS2009 [120] datasets show that the proposed method outperforms two state-of-the-art machine learning approaches by achieving better training efficiency and prediction accuracy. Besides the privacy preserving nature of FL [121], and in addition to the research efforts on HFL, VFL, and FTL, challenges have been raised in federated learning oriented to security [122], communication [123], and limited computing resources [124]. This is important as edge devices usually have higher task and communication latency and are in vulnerable environments. In fact, low-cost IoT and Cyber-Physical System (CPS) devices are generally vulnerable to attacks due to the lack of fortified system security mechanisms. Recent advances on cyber-security for federated learning [125] reviewed several security attacks targeting FL systems and the distributed security models to protect locally residual data and shared model parameters. With respect to the parameter aggregation algorithm, the commonly used FedAvg employs the aggregation server to centralize model parameters, and thus attacking the central server breaks the FL's security and privacy. Decentralized FedAvg with momentum (DFedAvgM) [126] is presented on edge nodes that are connected by an undirected graph. In DFedAvgM, all clients perform stochastic gradient descent with momentum and communicate with their neighbors only. The convergence is proved under trivial assumptions, and evaluations with ResNet-20 on CIFAR-10 dataset demonstrate no significant accuracy loss when local epoch is set to 1. 
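To make the FedAvg aggregation that these variants build on concrete, the sketch below runs synchronous rounds on a toy problem: each client takes a few local SGD steps on its own private data, and the server averages the resulting weights, weighted by client dataset size. This is a minimal illustration with a linear model, not the exact protocol of [106] or [126].

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few local steps of least-squares gradient descent on one client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: clients train locally, the server averages by dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_sgd(global_w, X, y) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)

# Toy federation: 4 clients, each holding private samples of the same linear task.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = fedavg_round(w, clients)
print("estimated weights after 20 rounds:", np.round(w, 2))
```

Only the weight vectors cross the network in each round; the raw samples never leave their clients, which is the privacy property FL is built around.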
From a communication perspective, although FL evades transmitting training data over the network, the communication latency and bandwidth usage for sharing weights or gradients among edge nodes are inevitably introduced. The trade-off between communication optimization and the aggregation convergence rate is studied in [127], where a communication-efficient federated learning method with Periodic Averaging and Quantization (FedPAQ) is introduced. In FedPAQ, models are updated locally at edge devices and only periodically averaged at the aggregation server. In each communication round between the edge training devices and the aggregation server, only a fraction of the devices participate in the parameter aggregation. Finally, a quantization method is applied to quantize local model parameters before sharing them with the server. Experiments demonstrate a communication-computation trade-off that improves the communication bottleneck and FL scalability. Furthermore, knowledge distillation is used in the communication-efficient federated learning technique FedKD [128]. In FedKD, a small mentee model and a large mentor model learn and distill knowledge from each other. It should be noted that only the mentee model is shared by different edge nodes and learned collaboratively to reduce the communication cost. In such a configuration, different training nodes have different local mentor models, which can better adapt to the characteristics of local data-sets to achieve personalized model learning. Experiments with datasets on personalized news recommendation, text detection, and medical named entity recognition show that FedKD can reduce the communication cost by up to 94.89% and achieve results competitive with centralized model learning. Federated learning on resource-constrained devices limits both communication and learning efficiency. The balance between convergence rate and allocated resources in FL is studied in [129], where an FL algorithm, FEDL, is introduced to treat the resource allocation as an optimization problem. In FEDL, each node solves its local training approximately until a local accuracy level is achieved. The optimization is based on the Pareto efficiency model [130] to capture the trade-off between the wall-clock training time and the edge nodes' energy consumption. Experimental results show that FEDL outperforms the vanilla FedAvg algorithm in terms of convergence rate and test accuracy. Moreover, computing resources can be not only limited but also heterogeneous at edge devices. A heterogeneity-aware federated learning method, Helios, is proposed in [131] to tackle the computational straggler issue, whereby edge devices with weak computational capacities among heterogeneous devices may significantly delay the synchronous parameter aggregation. Helios identifies each device's training capability and defines the corresponding neural network model training volumes. For straggling devices, a soft-training method is proposed to dynamically compress the original identical training model into the expected volume through a rotating neuron training approach. Thus, the stragglers can be accelerated while retaining the convergence for local training as well as federated collaboration. Experiments show that Helios can provide up to \(2.5\times\) training acceleration and a maximum of 4.64% convergence accuracy improvement in various collaboration settings. Table I summarizes the reviewed works related to FL topics and challenges. Overall, FL is designed primarily to protect data privacy during model training.
Sharing models and performing distributed training increases the computation parallelism and reduces the communication cost, and thus reduces both the end-to-end training task latency and the communication latency. Moreover, specific FL design can provide enhanced security, optimized bandwidth usage and efficient computing resource usage. The edge-enabled FL as an instance of the edge-enabled DL can further bring offline capability to ML models. #### Iii-B2 Split Learning As another distributed collaborative training paradigm of ML models for data privacy, Split Learning (SpL) [132] divides neural networks into multiple sections. Each section is trained on a different node, either a server or a client. During the training phase, the forward process firstly computes the input data within each section and transmits the outputs of the last layer of each section to the next section. Once the forward process reaches the last layer of the last section, a loss is computed on the given input. The backward propagation shares the gradients reversely within each section and from the first layer of the last section to the previous sections. During the backward propagation, the model parameters are updated in the meantime. The data used during the training process is stored across servers or clients which take part in the collaborative training. However, none of the involved edge nodes can review data from other sections. The neural network split into sections and trained via SpL is called Split Neural Network (SNN). The SpL method proposed in [132] splits the training between high performance servers and edge clients, and orchestrates the training over sections into three steps: (i) training request, (ii) tensor transmission, and (iii) weights update. Evaluations with VGG and Resnet-50 models on MNIST, CIFAR-10 and ImageNet datasets show a significant reduction in the required computation operations and communication bandwidth by edge clients. This is because only the first few layers of SNN are computed on the client side, and only the gradients of few layers are transmitted during backward propagation. When a large number of clients are involved, the validation accuracy and convergence rate of SpL are higher than FL, as general non-convex optimization averaging models in parameter space could produce an arbitrarily bad model [133]. The configuration choice to split a neural network across servers and clients are subject to design requirements and available computational resources. The work in [134] presents several configurations of SNN catering to different data modalities, of which Figure 4 illustrates three representative configurations: (i) in vanilla SpL, each client trains a partial deep network up to a specific layer known as the cut layer, and the outputs at the cut layer are sent to a server which completes the rest of the training. During parameters update, the gradients are back propagated at the server from its last layer until the cut layer. The rest of back propagation is completed by the clients. (ii) In the configuration of SpL without label sharing, the SNN is wrapped around at the end layers of the servers. The outputs of the server layers are sent back to clients to obtain the gradients. During backward propagation, the gradients are sent from the clients to servers and then back again to clients to update the corresponding sections of the SNN. (iii) SpL for vertically partitioned data allows multiple clients holding different modalities of training data. 
In this configuration, each client holding one data modality trains a partial model up to the cut layer, and the cut layer from all the clients are then concatenated and sent to the server to train the rest of the model. This process is continued back and forth to complete the forward and backward propagation. Although the configurations show some versatile applications for SNN, other configurations remain to be explored. Comparing to FL, the SNN makes SpL a better option for resource-constrained environments. On the other hand, SpL performs slower than FL due to the relay-based training across multiple clients. To complement both learning paradigms, Split Federated Learning (SFL) [135] aims at bringing FL and SpL together for model privacy and robustness. SFL offers model privacy by network splitting and client-side model updates based on SpL, as well as shorter training latency by performing parallel processing across clients. Experiments demonstrate that SFL provides similar test accuracy and communication efficiency as SL, while significantly decreasing its computation time per global epoch than in SpL for multiple clients. Overall, SpL largely improves training task latency by taking advantage of both server-side and edge-side computational resources. Comparing to FL where all model gradients or weights are transmitted over network, SpL only shares gradients of few layers of SNN and thus further optimizes the bandwidth usage. The SNN model performance is better comparing to FL by avoiding FedAvg or QSGD during training. In addition to data privacy that is enhanced by all distributed learning paradigms, SpL is excellent at preserving model privacy as both data and model structure are opaque across sections. Energy consumption and cost are thus reduced as a result of these SpL advantages. ### _Transfer Learning_ Transfer Learning (TL) is inspired by humans' ability to transfer knowledge across domains. Instead of training models from scratch, TL aims at creating high-performance models on a target domain by transferring the knowledge from models of a different but correlated source domain [136]. The knowledge transfer in the context of transfer learning can be in the following three levels according to the discrepancy between domains: * **Data Distribution.** The training data obtained in specific spatial or temporal point can have different distribution as the testing data in edge environment. The different data distribution, due to different facts such as co-variate shift [137], selection bias [138], and context feature bias [139], could lead to the degradation of model performance in a testing environment. The knowledge transfer between two different data distributions is a subtopic of transfer learning as Domain Adaptation (DA) [140]. * **Feature Space.** Contrary to the homogeneous transfer learning [12] which assumes that the source domain and the target domain consist of the same feature spaces, heterogeneous transfer learning tackles the (TL) case where the source and target domains have different feature spaces [141]. The heterogeneous transfer learning applies a feature space adaptation process to ease the difficulty to collect data within a target domain and expands the transfer learning to broader applications. * **Learning Task Space.** Transfer learning also transfers knowledge between two specific learning tasks by use of the inductive biases of the source task to help perform the target task [142]. 
In this level, the data of the source and target tasks can have the same or a different distribution and feature space. However, the specific source and target tasks are supposed to be similarly correlated, either in a parallel manner, e.g., in the tasks of object identification and person identification, or in a downstream manner, e.g., from a pretext learning task of image representation to a downstream task of object detection. It is worth mentioning that the knowledge generalization in an upstream manner, from downstream tasks to out-of-distribution data, is Domain Generalization (DG) [143]. As a learning paradigm focusing on the techniques to transfer knowledge between domains, transfer learning can be applied to all three basic learning categories, i.e., supervised learning, unsupervised learning, and reinforcement learning, for knowledge transfer between domains [142]. Based on the knowledge transfer process, two transfer learning techniques exist to build neural networks for the target domain: (i) layer freezing, and (ii) model tuning. Layer Freezing is generally applied to transfer knowledge between domains that are correlated in a parallel manner and/or in situations where a target domain requires low training latency and has few training data.

Fig. 4: Split Learning Configurations [134].

The process is summarized as follows.

1. _Model Collection:_ an existing trained model on the source domain is acquired.
2. _Layer Freezing:_ the first several layers from the source model are frozen to keep the previously learned representation, and the exact layers to freeze are determined by the source model layers which have learned the source data representation [144], i.e., usually the data encoding part of a model.
3. _Model Adjustment:_ the last few layers of the source model are deleted, and again the exact layers to delete are determined by the source model structure [145]. New trainable layers are added after the last layer of the modified source model to learn to turn the previously learned representation into outputs on the target domain.
4. _Model Training:_ the updated model is trained with new data from the target domain.
5. _Model Tuning:_ lastly, an optional tuning step, usually based on model fine-tuning [146], can be applied. During this step, the entire newly trained model from the previous step is unfrozen and re-trained on the new data from the target domain with a low learning rate. The tuning process potentially further improves the model performance by adapting the newly trained representation to the new data.

On the other hand, Model Tuning is generally applied to transfer knowledge among domains that are correlated in a downstream manner and/or in situations where a target domain has sufficient training data. The process of tuning-based transfer learning can be summarized as follows.

1. _Model Pre-training:_ a model is pre-trained on the source domain to learn representations from the source domain data.
2. _Model Adjustment:_ as an optional step in the tuning process, the last few layers of the source model are deleted, and new trainable layers are added after the last layer of the modified source model.
3. _Model Tuning:_ the entire pre-trained model is trained on the new data from the target domain to map the learned representation to the target output.
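A minimal PyTorch sketch of the layer-freezing recipe above: the encoder layers of a toy source model keep their weights fixed via `requires_grad=False`, a new head is attached for the target domain, and only the new layers receive gradient updates. The tiny model and the random batch stand in for a real pre-trained network and real target-domain data.

```python
import torch
import torch.nn as nn

# Stand-in for a model pre-trained on the source domain.
source_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),     # "encoder" layers that learned the source representation
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),                # source-domain output head (to be discarded)
)

# Layer freezing: keep the learned representation fixed.
encoder = source_model[:4]
for p in encoder.parameters():
    p.requires_grad = False

# Model adjustment: drop the old head and attach a new trainable head for the target task.
target_model = nn.Sequential(encoder, nn.Linear(64, 3))

# Model training: only the new head's parameters are passed to the optimizer.
trainable = [p for p in target_model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 32)               # a small batch of target-domain data
y = torch.randint(0, 3, (16,))
optimizer.zero_grad()
loss = loss_fn(target_model(x), y)
loss.backward()
optimizer.step()

# Optional tuning step: unfreeze the encoder and re-train everything with a low learning rate.
```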
During the two transfer learning processes, the parameters of the original model \(\theta\) are updated to the new model parameters \(\theta^{\prime}\) with the dataset \(\mathsf{D}^{\prime}\) from the target domain through an optimization procedure \(g_{\omega^{\prime}}\): \[\theta^{\prime}:=g_{\omega^{\prime}}(\mathsf{D}^{\prime},L_{\mathsf{D}^{\prime}}) \tag{9}\] On the target domain, the meta-knowledge \(\omega^{\prime}\) and the optimization procedure \(g_{\omega^{\prime}}\) can be derived from the source domain during the transfer process; however, the focus of transfer learning is the knowledge transfer of model parameters from \(\theta\) to \(\theta^{\prime}\). Building models on previously learned knowledge from a correlated domain brings transfer learning the following benefits.

1. _Training Efficiency._ The training of new models is largely accelerated and uses far fewer computational resources compared to training from scratch.
2. _Less Training Data._ The model training or tuning process on the target model requires less training data, which is especially useful in the case where a lot of data is available from the source domain and relatively little data for the target domain.
3. _Model Personalization._ Transfer learning can quickly specialize pre-trained models to a specific environment and improve accuracy when the original pre-trained model cannot generalize well.

Transfer learning techniques are studied and compared in several surveys: an early study [141] associates the definition of transfer learning with reasoning-based categories, and divides transfer learning into (i) inductive transfer learning, (ii) transductive learning, and (iii) unsupervised learning, w.r.t. the source and target task spaces. To handle source and target feature spaces, homogeneous transfer learning is reviewed in [142, 12], and heterogeneous transfer learning is analyzed in [141, 142]. Regarding domain adaptation for different data distributions, the state-of-the-art methods are summarized based on training loss in [147] for computer vision applications. In particular, recent research efforts tend to extend the scope of vanilla Domain Adaptation (DA) from different data distributions to different feature spaces or task spaces. The term "deep domain adaptation" is used in [147] to designate the methods that leverage deep neural networks and DA to solve both distribution shift and feature space differences. A Universal Domain Adaptation (UDA) method is described in [148] as a more general approach of transfer learning across task spaces. UDA targets the supervised model transfer between domains where source and target have overlapping but different label spaces. Without prior knowledge of the label sets from both domains, UDA is capable of classifying a sample correctly if it belongs to any class in the source label set, or marking it as "unknown" otherwise. To address the unknown label classification, a Universal Adaptation Network (UAN) is introduced to quantify the transferability of each sample into a sample-level weighting mechanism based on both the domain similarity and the prediction uncertainty of each sample. Empirical results show that UAN works stably across different UDA settings and outperforms the state-of-the-art closed set, partial and open set domain adaptation methods.
Regarding layer freezing, one of the most popular application domains is healthcare, as the training data related to a specific disease can be difficult to obtain due to rarity and privacy concerns. Transfer learning is applied in [149] to detect Parkinson's disease from speech symptoms with layer freezing. In this work, the classification of patients with Parkinson's disease is realized with a CNN that analyzes Mel-scale spectrograms in three different languages, i.e., Spanish, German, and Czech, via a transfer learning process. During the knowledge transfer, several consecutive layers are frozen to identify which layers characterize the disease and which characterize the language. Results indicate that fine-tuning of the neural network does not provide good performance in all languages, while fine-tuning of individual layers improves the accuracy by up to 7%. Moreover, transfer learning among languages improves the accuracy by up to 18% compared to a model trained from scratch. Concerning model tuning, fine-tuning large pre-trained models is an effective transfer mechanism in both the CV [150] and NLP [151] domains. As general fine-tuning creates an entirely new model for each downstream task, the method is not efficient when facing multiple downstream tasks. In fact, it results in the reproduction of the same-sized model multiple times. An adapter-module-based tuning method is introduced in [152], where adapter modules extend the pre-trained models by only adding a few trainable parameters per task. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. The experiments transferring the BERT transformer to 26 diverse text classification tasks attain near state-of-the-art performance: on the GLUE benchmark, the proposed method shows only 0.4% degradation compared to fine-tuned results, while adding only 3.6% parameters per task compared to the 100% parameter retraining of fine-tuning. Moreover, prompt tuning [153] is a simple yet effective method to learn prompts to perform specific downstream tasks without modifying models, which is especially useful when handling large language models and vision-language models. The study in [153] shows that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, the proposed method matches the strong performance of model fine-tuning, and largely outperforms the few-shot learning of Generative Pre-trained Transformer 3 (GPT-3) [154]. As the prompt plays an important role in the model output, an interesting discovery is made in [155]: reasoning tasks can be performed with pre-trained Large Language Models (LLMs) by simply adding "Let's think step by step" before each output. The zero-shot accuracy is increased from 17.7% to 78.7% on the MultiArith benchmark [156] and from 10.4% to 40.7% on the GSM8K benchmark [157] with an off-the-shelf 175B-parameter model. As explored by the work, the versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs. This suggests that high-level, multi-task, broad cognitive capabilities may be extracted through simple prompting. Lastly, the tuning process is also applied to find optimal values for model hyper-parameters [158], which is however out of the scope of transfer learning. Although transfer learning depends on the correlation between source and target domains to be effective, the similarities between domains are not always beneficial but can be misleading to the learning.
Negative transfer [159] is the transfer process in which the target model is negatively affected by the transferred knowledge. It can be caused by several factors such as the domain relevance and the learner's capacity to find the transferable and beneficial part of the knowledge across domains. The work in [159] proposes a method relying on an adversarial network to circumvent negative transfer by filtering out unrelated source data. The harmful source data are filtered by a discriminator estimating both marginal and joint distributions to reduce the bias between source and target risks. The experiments involving four benchmarks demonstrate the effectiveness of filtering negative transfer and the improvement of model performance under negative transfer conditions. Transfer Learning avoids building models from scratch and largely reduces the workload of training new models, which leads to the low training task latency and efficient computation. In parallel, the required training data in the case of supervised learning is much less than training models from scratch. Thus transfer learning can save expensive data-labeling efforts and drives conventional supervised learning more independent of labelled data. Regarding the edge requirements of model performance, transfer learning facilitates the construction of personalized models specific to individual edge environments and are expected to maintain a high model accuracy comparing to generalized model. However, in practice, the model performance is determined by the quality of the source model, the training data in a target domain, and the correlation between the source and the target domains. Thus, the performance varies according to the specific configurations. ### _Meta-Learning_ Taking the philosophy one step higher, and focusing on learning the learning process rather than specific tasks, meta-learning [160] is an advanced learning paradigm that observes and "remembers" previous learning experiences on multiple learning tasks, and then quickly learns new tasks from previous meta-data by analyzing the relation between tasks and solutions. The meta-learning solution for ML tasks is is realized in two levels [161]: (i) a base learner for each task, and (ii) a global meta-learner. The base learner solves task-specific problems and focuses on a single task, while the meta-learner integrates using previous learned concepts to quickly learn the associated tasks. For a new task, meta-learning directly applies or updates the solution of the most similar task. In the case where no similar task is registered, meta-learning exploits the relation between tasks and solutions to propose an initial reference solution. Meta-learning can also be applied to all three basic machine learning paradigms: supervised learning, unsupervised, and reinforcement learning. Regular supervised learning and unsupervised learning do not assume any given or predefined meta-knowledge. In contrary, in supervised and unsupervised meta-learning, the goal is not only to realize a specific task but also to find the best meta-knowledge set, enabling the base learner to learn new tasks as quickly as possible. Regular reinforcement learning maximizes the expected reward on a single MDP, while meta reinforcement learning intention is to maximize the expected reward over various MDPs by learning meta-knowledge. 
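To ground the idea of learning meta-knowledge \(\omega\) across tasks, the sketch below implements a first-order, optimization-based meta-learning loop in the spirit of MAML (revisited later in this section): \(\omega\) is a shared initialization, each base learner adapts a copy of it with a few inner gradient steps on its task's support set, and \(\omega\) is then updated from the adapted learners' query losses. This is a toy sine-regression setup with our own hyper-parameters, not the reference implementation of [162].

```python
import copy
import torch
import torch.nn as nn

def sample_task():
    """One regression task: y = a*sin(x + b), with task-specific amplitude a and phase b."""
    a, b = 0.5 + 1.5 * torch.rand(1), 3.14 * torch.rand(1)
    def draw(n=10):
        x = 10 * torch.rand(n, 1) - 5
        return x, a * torch.sin(x + b)
    return draw

meta_model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))  # omega: shared initialization
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr, inner_steps, tasks_per_batch = 1e-2, 3, 8

for _ in range(200):
    meta_opt.zero_grad()
    for _ in range(tasks_per_batch):
        draw = sample_task()
        learner = copy.deepcopy(meta_model)          # base learner starts from omega
        x_s, y_s = draw()                            # support set: inner-loop adaptation
        for _ in range(inner_steps):
            grads = torch.autograd.grad(loss_fn(learner(x_s), y_s), learner.parameters())
            with torch.no_grad():
                for p, g in zip(learner.parameters(), grads):
                    p -= inner_lr * g
        x_q, y_q = draw()                            # query set drawn from the same task
        q_grads = torch.autograd.grad(loss_fn(learner(x_q), y_q), learner.parameters())
        # First-order outer update: the adapted learner's gradients are applied to omega directly.
        for p, g in zip(meta_model.parameters(), q_grads):
            p.grad = g / tasks_per_batch if p.grad is None else p.grad + g / tasks_per_batch
    meta_opt.step()

print("meta-trained; a new sine task can now be fitted from only a few support points")
```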
To summarize, instead of separately learning model parameters \(\theta\) for all base learners, meta-learning actually focuses on learning the optimal or sub-optimal meta-knowledge \(\omega^{*}\) for the global meta-learner, as formalized in Equation 10. \[\omega^{*}:=\underset{\omega}{arg\,min}\,\sum_{i=1}^{n}L_{\mathsf{D}^{i}}\big(g_{\omega}(\mathsf{D}^{i},L_{\mathsf{D}^{i}})\big) \tag{10}\] where the sum runs over the \(n\) base-learner tasks with data \(\mathsf{D}^{i}\), and \(g_{\omega}(\mathsf{D}^{i},L_{\mathsf{D}^{i}})\) are the base-learner parameters obtained on task \(i\) under the shared meta-knowledge \(\omega\). After meta-training, the meta-testing phase classifies labels unseen during the meta-training phase (i.e., in Figure 5, images of unseen dog breeds are given during meta-testing) by use of the new support set. The work [173] proposes a Long Short-Term Memory (LSTM) based meta-learner model in the few-shot regime. This is done to learn the exact optimization algorithm used to train another neural network classifier as the base learner: the meta-learner is trained to capture both short-term knowledge within a task and long-term common knowledge among all the tasks. This way, the meta-learner is able to rapidly converge a base learner to a locally optimal solution on each task and in the meantime learn a task-common initialization for the base learner. As a step further, zero-shot learning [174] does not require any example data as a support set to perform new tasks or classify new classes which the model has not observed during the training phase. A simple zero-shot learning approach is introduced in [175] to model the relationships among features, attributes, and classes as a network of two linear layers, where the weights of the second layer are not learned but are given by the environment. During the inference phase with new classes, the second layer is directly given so that the model can make predictions on the new labels. Despite being simple, the experimental results outperformed the state-of-the-art approaches on the Animals with Attributes (AwA) [176], SUN attributes (SUN) [177], and aPascal/aYahoo objects (aPY) [178] datasets by up to 17% at the publication time. Unlike [175], which represents classes as fixed embeddings in a feature space, Verma et al. [179] represent each class as a probability distribution. The parameters of the distribution of each seen and unseen class are defined as functions of the respective observed class attributes.
This allows to leverage additional unlabeled data from unseen classes and improve the estimates of their class-conditional distributions via transductive or semi-supervised learning. Evaluations demonstrate superior results in the same datasets comparing to [175]. In parallel to CV, the pre-trained large language models (LLMs) have proven to be excellent few-shot learner [154] and zero-shot learner [155]. Furthermore, Contrastive Language-Image Pre-training (CLIP) [180] learns computer vision models directly from raw text describing images, which leverages a much boarder source of supervision instead specific data labels. The pre-training of predicting "which caption goes with which image?" is realized on a dataset of 400 million image and text pairs from the Internet. After pre-training, natural language is used to reference learned visual concepts and describe new ones enabling zero-shot transfer of the model to downstream tasks. The work matches the accuracy of the ResNet-50 model on ImageNet zero-shot without dataset specific training, and benchmarks on over 30 CV datasets produce competitive results with fully supervised baselines. As to model-based meta-learning, Memory-Augmented Neural Network (MANN) [165] contains a model-based controller, either feed-forward network or LSTM, to interact with an external memory component for memory retrieval and update. During training, the model learns to bind data representations to their labels regardless of the actual content of the data representation or label, and then the model maps these bound representations to appropriate classes for prediction. The memory writing and reading are powered by the proposed Least Recently Used Access (LRUA) method, and the MANN displays a performance superior to an LSTM in two meta-learning tasks on Omniglot classification dataset [181] and sampled functions from a Gaussian process for regression. A more concrete use case is illustrated in [182] to adapt drones to flight with unknown payloads, in which drones are expected to autonomously determine the payload parameters and adjust the flight control accordingly. During the training, a dynamics model with shared dynamics parameters and adaptation parameters are trained over \(K\) different payloads. During the testing, the robot infers the optimal latent variable representing the unknown payload by use of the learned dynamics parameters and the new sensed data. A model-predictive controller (MPC) then uses the trained dynamic model to plan and execute drone actions that follow the specified flight path. Experiments demonstrate the performance improvement of the proposed method comparing to non-adaptive methods on several suspended payload transportation tasks. With respect to optimization-based meta-learning, MAML [162] is a general optimization algorithm, compatible with any model that learns through gradient descent. In MAML, model specific updates are made by one or more gradient descent steps. Instead of using second derivatives for meta-optimization of models, the meta-optimization proposes the First-Order MAML (FOMAML) to ignore the second derivative during MAML gradient computation to be less computation expensive. MAML has obtained much attention due to its simplicity and general applicability. In the meantime, ignoring higher-order derivatives potentially decreases the model performance, and thus the iMAML [183] approximates these derivatives in a way that is less memory-consuming. While the iMAML is more robust for larger Fig. 
Furthermore, online MAML [184] extends MAML to online learning scenarios where models continuously learn, over a potentially infinite time horizon, from newly generated data and adapt to environmental changes. Being strong in model specialization, its computation cost however keeps growing over time. Overall, meta-learning reduces supervised learning's dependency on labelled data by enabling models to learn new concepts quickly, which makes meta-learning particularly suitable for the edge side in the sense that it accelerates the training task. Another major advantage of meta-learning is the generalization capability that it brings to models to solve diverse tasks and the potential to realize general ML. Computational resource efficiency is higher for multiple model training, which leads to optimized energy consumption and cost optimization. Nevertheless, the global optimization procedure of optimization-based meta-learning may yet lead to an expensive computation workload according to the number of base learners. Additional computation on the support dataset for metric-based meta-learning introduces extra workload during inference according to the dataset size (in such cases, the use of metric-based meta-learning is usually avoided). ### _Self-Supervised Learning_ In contrast to supervised learning or reinforcement learning, human beings' learning paradigm is barely supervised and rarely reinforced. Self-Supervised Learning (SSL) is an unsupervised learning paradigm that uses self-supervision from original data and extracts higher-level generalizable features through unsupervised pre-training or optimization of contrastive loss objectives [161]. These learned feature representations are generalized and transferable, and thus can be tuned later to realize downstream tasks, and the pre-trained models are used as initial models to avoid training from scratch. During self-supervised learning, data augmentation techniques [185, 186] are widely applied for contrast or generation purposes, and data labels are not required since pseudo labels can be estimated from trained models on similar tasks. According to the loss objectives driving the training process, self-supervised learning can be summarized into three categories [187]: (i) generative learning, (ii) contrastive learning, and (iii) adversarial learning, as a combination of generative and contrastive learning. The architectures of the three categories are illustrated in Figure 6. * **Generative Learning:** generative learning trains an encoder to encode the input into an explicit vector and a decoder to reconstruct the input from the explicit vector. The training simulates pseudo labels for unlabeled data and is guided by the reconstruction loss between the real input and the reconstructed input. * **Contrastive Learning:** contrastive learning trains an encoder to respectively encode inputs into explicit vectors and measure similarity among inputs. The contrastive similarity metric is employed as the contrastive loss for model training. During the training, contrastive learning calibrates label-free data against themselves to learn high-level generalizable representations. * **Adversarial Learning:** adversarial learning trains an encoder-decoder to generate fake samples and a discriminator to distinguish them from real samples in an adversarial manner. 
In other words, it learns to reconstruct the original data distribution rather than the samples themselves, and the distributional divergence between the original and reconstructed distributions is the loss function to minimize during the training phase. The point-wise (e.g., word in texts) objective of generative SSL is sensitive to rare examples and contrary to the high-level objective (e.g., whole texts) in classification tasks, which may lead to poor results on out-of-distribution data. Adversarial SSL abandons the point-wise objective and uses distributional matching objectives for high-level abstraction learning. In the meantime, adversarial SSL preserves the decoder component abandoned by contrastive SSL to stabilize the convergence with more expressiveness. As an emerging field, self-supervised learning has received significant research attention. A comprehensive survey of the three above-mentioned SSL categories is presented in [187], including existing methods and representative works. Research works across several modalities of image, text, speech, and graphs are reviewed and compared in [188]. Digging into specific application domains, SSL works for visual feature learning and NLP representation learning are respectively analyzed in [189] and [190]; since graph-structured data are widely used and available over networks, efforts on SSL of graph representation are compared in [191] to facilitate downstream tasks based on graph neural networks. Generative SSL often applies the masked prediction method [192] to train the model to fill in intentionally removed and missing data. For instance, in the work [154], generative learning generates words in sentences in NLP by masking the words to generate in each step and updates the model parameters by minimizing the distance between the generated word and the masked word in the text. The same masking methods have proven to be effective to build pre-trained models by hiding speech time slices in speech recognition [193], image regions [194], and graph edges [195].
Fig. 6: Self-Supervised Learning Architecture [187].
In a multi-modal setting, a more general framework is introduced in [192] as data2vec for speech, NLP and CV data. The idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens, or units of human speech, data2vec predicts contextualized and multi-modal latent representations. Experiments on the major benchmarks of speech recognition [196], image classification (ImageNet-1K), and natural language understanding (GLUE) demonstrate a competitive performance to predominant approaches. Generative SSL is the mainstream method in NLP to train LLMs with texts from the Internet, while generative SSL reveals less competitive results than contrastive SSL in CV domains, of which classification is the main objective. Contrastive SSL creates multiple views of inputs [197] and compares them in the representation space to solve discrimination problems. During the learning, the distance between multi-views of the same data sample is minimized and the distance between different data samples is maximized. Negative sampling is a common practice for contrastive learning, but this process is often biased and time-consuming. 
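The contrastive principle just described can be written down in a few lines; the sketch below is a minimal NumPy version of an InfoNCE-style loss [199] over two augmented views of a batch. The toy encoder, the Gaussian-noise "augmentations", and the temperature value are stand-ins rather than the design of any particular method such as MoCo or SimCLR.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy encoder followed by L2 normalisation of the embeddings."""
    z = np.tanh(x @ W)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE: view i of sample n is the positive for view j of the same n."""
    logits = z1 @ z2.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives sit on the diagonal

x = rng.normal(size=(8, 32))                         # a batch of raw inputs
W = rng.normal(size=(32, 16))
view1 = encode(x + 0.1 * rng.normal(size=x.shape), W)   # stand-in augmentation 1
view2 = encode(x + 0.1 * rng.normal(size=x.shape), W)   # stand-in augmentation 2
print("contrastive loss:", float(info_nce(view1, view2)))
```

Minimizing this loss pulls the two views of the same sample together while pushing apart all other samples in the batch, which serve as the implicit negatives.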
Momentum Contrast (MoCo) [198] uses two encoders, an encoder and a momentum encoder, to encode two augmented versions of the same input images into queries and keys, respectively. During the training, positive pairs are constructed from queries of keys of current mini-batch, while negative pairs are constructed from queries of current mini-batch and keys from previous mini-batches to minimize the contrastive loss function InfoNCE [199]. In the experiments, MoCo outperforms its supervised pre-training counterpart in seven CV tasks on datasets including PASCAL and COCO. To avoid explicitly using negative examples and prevent feature collapse, several data augmentation operations for images (e.g., original, crop, resize, color distort, gaussian noise and blur, etc.) are introduced in [200] as a simple framework for contrastive learning (SimCLR) of visual representations. The learning with regularization and contrastive cross entropy loss benefits from a larger batch size and a longer training compared to the supervised counterpart: SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. Contrastive learning is found to be useful for almost all visual classification tasks due to the class-invariance modeling between different image instances but does not present a convincing result in the NLP benchmarks. The theory and applications of contrastive SSL to the domains such as NLP and graph learning where data are discrete and abstract is still challenging. Inspired by the Generative Adversarial Networks (GAN) [201], adversarial SSL either focuses on generating with the learned complete representation of data or reconstructing the whole inputs with partial ones. Instead of learning from latent distribution of task related data distributions, Bidirectional Generative Adversarial Networks (BiGANs) [202] projects data back into the latent space to boost auxiliary supervised discrimination tasks. The learned distribution does not make any assumption about the data and thus captures the difference in the semantic level. BigBiGAN [203] discovers that a GAN with deeper and larger structures produces better results on downstream task and extends the BigGAN model on representation learning by adding an encoder and correspondingly updating the discriminator. Evaluations of the representation learning and generation capabilities of the BigBiGAN models achieve the state-of-the-art in both unsupervised representation learning on ImageNet, and unconditional image generation. Adversarial SSL proves to be successful in image generation and processing, while still limited in NLP [204] and graph learning [205]. Alternatively, in-painting is a common use case for Adversarial SSL to reconstruct the entire inputs by filling in target regions with a relevant content, which allows the model to learn representations of different regions as well in order to process specific objects in images, detect anomalies in regions or reconstruct 3D images from 2D. A method of image completion is presented in [206] to complete images of arbitrary resolutions by filling in missing regions of any shape. A global discriminator and a local context discriminator are trained to distinguish real images from completed ones. The global discriminator assesses if the image is coherent as a whole, while the local discriminator ensures the local consistency of the generated patches at the completed region. 
The image completion network is then trained to fool both context discriminator networks. A similar work is reported in [207] to generate regions in masked chest X-ray images to facilitate the abnormality detection in the healthcare domain. As the key method to alleviate the data labelling and annotation dependency, SSL demonstrates the boosting capability to power other learning paradigms, and the resulting solutions absorb merits from SLL and its incorporating learning paradigms. Federated SSL is empirically studied in [208] for both privacy preserving and representation learning with unlabeled data. A framework is also introduced to tackle the non-IID data challenge of FL. The intersection between SSL and meta-learning is reviewed in [161] showing models can best contribute to the improvement of model generalization capability. The models trained by SSL for pretext tasks with unlabeled data can be used by transfer learning to build state-of-the-art results. The self-supervised learning methods and their applications within the transfer learning framework is reviewed and summarized in [209]. Overall, the essential advantage of SSL is the capability to leverage the tremendous amount of unlabeled data to learn latent representations, and thus, the labelled data dependency is largely alleviated during the learning process. The learned data representation via pretext task is in high-level generalization and can be easily used by downstream tasks to provide higher performance in various benchmarks. Although the arithmetic operations required by the training and task latency rises in certain learning setups with larger batch and more epochs, the testing performance is boosted as well. The final cost of training task with SSL is much less comparing to the same task requiring manual labelling of data. ### _Other Learning Paradigms_ Besides the four major learning techniques fitting to Edge ML, introduced in previously, in this section we briefly review relevant ML paradigms that potentially improve Edge ML solutions by satisfying a subset of its requirements. #### Iii-E1 Multi-Task Learning Instead of building \(n\) models for \(n\) tasks, Multi-Task Learning (MTL) aims at using one ML model to realize multiple correlated tasks at the same time [210]. This is commonly achieved by training an entire model for all tasks, consisting of a commonly shared part among all tasks and a task independent part. The commonly shared part of the model learns the common representation and task relations from all tasks' inputs, while the task independent part computes and generates the final output for each task individually. During the multi-task learning, the model is trained in a way that data are mutualized among tasks to discover implicit task correlations. The learning process helps the model better find relevant features for each task and reduces the risk of over-fitting, so that all tasks' performance is improved via relevant features and tasks correlation [211]. Among the multiple tasks, each task can be a general learning task such as supervised tasks (e.g., classification or regression problems), unsupervised tasks (e.g., clustering problems), or reinforcement learning. From the modelling perspective, MTL can be divided into: (i) hard parameter sharing and (ii) soft parameter sharing [212]. The hard parameter sharing generally shares the hidden layers among all tasks, while keeping several task-specific output layers. 
On the other hand, soft parameter sharing creates a set of parameters for each task of the similar structure, and the distance among the task parameters is then regularized during training [213] in order to encourage the parameters to be similar. The modelling structure is illustrated in Figure 7. The choice of the two modelling depends on the similarity among input data and task relation. A number of works of MTL are surveyed and compared in [210, 212, 214], illustrating the overview of the literature and recent advances. One important research challenge of MTL lies in the multi-task modelling to take into account task and data relations for parameter structure sharing. A MTL model directly at the edge of the network is introduced in [215] for traffic classification and prediction. Based on autoencoders as the key building blocks for learning common features, the model anticipates information on the type of traffic to be served and the resource allocation pattern requested by each service during its execution. Simulation results produce higher accuracy and lower prediction loss comparing to a single-task schema. The on-edge multi-task transfer learning is studied in [216], tackling data scarcity and resource constraints for task allocation. Instead of treating individual tasks equally, the work proposes to measure the impact of tasks on the overall decision performance improvement and quantify task importance with a Data-driven Cooperative Task Allocation (DCTA) approach. Experiments show that DCTA reduces 3.2% of task latency, and saves 48.4% energy consumption compared with the state-of-the-art when solving the task allocation with task importance for MTL. Via common layers sharing among tasks, model parameters in MTL are largely decreased comparing to multiple individual task models, and thus the computational workload is lower for the multiple task model. This leads to an improvement in task latency and computation efficiency. Via the learning of more relevant features and task correlations, the performance for correlated tasks is boosted. Overall, in the context where multiple correlated tasks need to be performed, the MTL brings an efficient way for energy and cost optimization, making it suitable for the edge. #### Iii-E2 Instance-based Learning Instance-based Learning (IBL) [217], also called memory-based learning or lazy learning, compares new instances with already seen instances to perform supervised learning tasks. Instead of learning an explicit representation mapping between features and instance labels, and predicting based on the learned representation, the key idea of IBL is to uniquely rely on seen instances to predict new instances. A commonly applied techniques of IBL are kNN, Radial Basis Function (RBF) [218], and Case Based Reasoning (CBR) [219]. Among these techniques, kNN is widely used as a non-parametric model which simply retains all of the training instances and uses all of them to predict new instances based on a similarity distance between instances. In contrast to the metric based meta-learning which generalizes the learned representation to unseen classes or tasks, IBL is suitable to rapidly realize supervised learning tasks without generalization when the number of labels and retrained instances are small. Moreover, the technique can be easily extended to predict previously unseen instances by simply adding unseen instances in the prediction process. 
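A minimal sketch of such an instance-based learner is given below: a plain kNN classifier that simply keeps every seen instance and votes among the k nearest ones, so adding a new instance, or even a previously unseen label, only means appending it to the memory. The tiny 2-D data and the value of k are illustrative.

```python
import numpy as np
from collections import Counter

class KNNClassifier:
    """Instance-based learner: no training step, just a growing memory of instances."""
    def __init__(self, k=3):
        self.k, self.X, self.y = k, [], []

    def add(self, x, label):                 # new instances (or labels) are simply appended
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(label)

    def predict(self, x):
        d = [np.linalg.norm(np.asarray(x, dtype=float) - xi) for xi in self.X]
        nearest = np.argsort(d)[: self.k]    # indices of the k closest stored instances
        return Counter(self.y[i] for i in nearest).most_common(1)[0][0]

knn = KNNClassifier(k=3)
for x, label in [([0, 0], "a"), ([0, 1], "a"), ([5, 5], "b"), ([5, 6], "b")]:
    knn.add(x, label)
print(knn.predict([0.2, 0.5]))               # -> "a"
knn.add([9, 9], "c")                         # a previously unseen label, no retraining
knn.add([9, 8], "c")
print(knn.predict([8.5, 9.1]))               # -> "c"
```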
On the other hand, the computational complexity of IBL grows with the number of retained instances and the number of available labels, making the learning paradigm unsuitable for performing large supervised tasks. A Distributed storage and computation kNN algorithm (D-kNN) is introduced in [220]. It is based on cloud-edge computing for cyber-physical-social systems. The main contribution of the work lies in the optimization of distributed computation and storage of kNN and the efficient searching at distributed nodes to reduce the complexity of the algorithm.
Fig. 7: MTL modelling structure.
A CBR approach is described in [221] to optimize energy consumption in smart buildings. The approach is based on a multi-agent architecture deployed in a cloud environment with a wireless sensor network, where the agents learn human behaviors through CBR-enabled neural networks and manage device usage. Experiments in office buildings achieve an average energy savings of 41%. IBL alleviates the labelled data dependency by reducing the number of labelled data required to perform supervised learning tasks. Since the computational complexity of IBL scales with the problem complexity, the task latency, computation efficiency, cost and energy consumption vary according to the specific task setup. The final performance of a model depends on the representativeness and the distribution of the instances as well. #### Iii-B3 Weakly Supervised Learning Weakly Supervised Learning (WSL) comprises a family of learning techniques that train models to perform supervised tasks with noisy, limited, or imprecise labelled data resulting from limited data labelling capacity [222]. Although thorough labelling of edge data is not realistic to achieve by edge users on a continuous basis, the assumption can be made that users or edge applications can casually provide data labelling assistance under consensus. The casual data labelling in such a context may produce noisy, imprecise, or an insufficient number of labelled data for supervised learning, and correspondingly requires specific learning paradigms to tackle the weak supervision problem. According to the weakness of the labelled data quality, the problem of WSL can be divided into three categories [223]: (i) incomplete supervision, (ii) inexact supervision, and (iii) inaccurate supervision. * **Incomplete supervision** refers to the problem that a predictive model needs to be trained from the ensemble of labelled and unlabeled data, where only a small amount of data is labelled, while other available data remain unlabeled. * **Inexact supervision** refers to the problem that a predictive model needs to be trained from data with only coarse-grained label information. Multi-instance learning [224] is a typical learning problem of inexact supervision where training data are arranged in sets, and a label is provided for the entire set instead of the data themselves. * **Inaccurate supervision** concerns the problem that a predictive model needs to be trained from data that are not always labelled with ground-truth. A typical problem of inaccurate supervision is label noise [225], where mislabeled data are expected to be corrected or removed before model training. Aiming at these three problems of labelled data, weakly supervised learning brings techniques able to train models from data with low-quality labels and perform supervised tasks. 
Existing work on WSL is introduced and summarized in [223] and then further developed in [226] by leveraging the data quantity and adaptability. In what relates to incomplete supervision problems, active learning [227], inductive semi-supervised learning [228], and transductive learning [229] are three typical solutions for supplementary data labelling. The process of the three learning paradigms for incomplete supervision is illustrated in Figure 8. Active learning is a technique where the learner interactively collects training data, typically by querying an oracle to request labels for new data in order to resolve ambiguity during the learning process [227]. Instead of querying all collected data points, the active learning goal is to only query the most representative data and use them for model training. The number of data used to train a model this way is often much smaller than the number required in conventional supervised learning, while the key idea behind it is that a learning paradigm can achieve higher accuracy with fewer training labels if it is allowed to choose the data from which it learns [230]. Without queries, inductive semi-supervised learning labels the data with the help of the available labelled data and then trains the model [228]. The general process of semi-supervised learning is to first train a small model with the available labelled data to classify the unlabeled data, and then train the final model with all data. Such an idea is driven by the assumption that similar data produce similar outputs in supervision tasks, and unlabeled data can be helpful to disclose which data are similar. Instead of training a small model to predict the unlabeled data, transductive learning [229] derives the values of the unknown data with unsupervised learning algorithms and labels the unlabeled data according to the clusters to which they belong. Then a model is trained by use of both the previously available and the newly labeled data. Compared to inductive semi-supervised learning, transductive learning considers all data when performing the data labeling, which potentially improves the data labeling results. On the other hand, due to the fact that no model is built for labelling, an update in the dataset will result in the repetition of the whole learning process. Active learning, inductive semi-supervised learning, and transductive learning are efficient in situations where the acquisition of unlabeled data is relatively cheap while labeling is expensive.
Fig. 8: Incomplete Supervised Learning Process [223]
Regarding inexact supervision, multi-instance learning has been successfully applied to various tasks such as image classification [231], relation extraction [232], localization [233], and healthcare [234]. The main idea behind it is to adapt single-instance supervised learning algorithms for instance discrimination to the multi-instance representation for set discrimination. For the label noise problem, label smoothing [235] is a regularization technique that introduces noise for the labels and can improve both predictive performance and model calibration. The effect of label smoothing on model training with label noise is studied in [236, 237], showing that a label smoothing approach incorporating the labeled instance centroid and its covariance reduces the influence of noisy labels during training [236]. Label smoothing is also competitive with loss-correction under label noise [237]. 
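To state the regularization just mentioned concretely, the sketch below shows vanilla label smoothing for a K-class problem; the smoothing factor and the toy predicted probabilities are illustrative. Each one-hot target is mixed with a uniform distribution before the cross-entropy is computed, which softens the penalty on a possibly mislabeled example.

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Mix one-hot targets with a uniform distribution over classes."""
    one_hot = np.eye(num_classes)[y]
    return (1.0 - eps) * one_hot + eps / num_classes

def cross_entropy(probs, targets):
    return -np.mean(np.sum(targets * np.log(probs + 1e-12), axis=1))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])            # model predictions
y = np.array([0, 1])                           # possibly noisy hard labels
print(cross_entropy(probs, np.eye(3)[y]))          # loss with hard targets
print(cross_entropy(probs, smooth_labels(y, 3)))   # loss with smoothed targets
```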
Moreover, loss correction is studied in [238] using a two-component mixture model as an unsupervised generative model of sample loss values during training to allow an online estimation of the probability that a sample is mislabelled, and the loss is corrected relying on the network prediction. Overall, targeting the learning problems where labelled data are scarce or imperfect, WSL mitigates the labelled data dependency. Focusing on the data labelling part, the task latency, cost and energy consumption are optimized comparing to manual labelling process. #### V-B4 Incremental Learning Incremental learning [239], also called continual learning, is a machine learning paradigm that regularly processes periodically collected data and continuously integrates newly learned information to models in order to keep models up to date to the evolving data representation or task. Contrary to conventional offline learning, where all training data are available at the beginning of the learning process, and models are firstly built by learning all data batches or samples through epochs for prediction, incremental learning is suitable for learning problems where data are collected over time. In this case, the data distribution, the feature space, or even the task evolve over time. Thus, the trained model is expected to be periodically updated in order to capture and adapt to the new evaluations. Incremental learning takes advantage of higher quality of data, close to the testing environment, and continuously personalizes the pre-trained model with new classes. This learning paradigm can maintain and improve task accuracy when an original pre-trained model cannot generalize well. Moreover, the incremental learning updates model locally and thus preserves the privacy in the case of local deployment. With respect to the incremental learning setup, online learning, as an instantiation of incremental learning in an online scenario [240], continuously learns from data provided in sequence from a data stream and produces a series of versions of the same model for prediction. This is performed without having the complete training dataset available at the beginning. The model is deployed online to continuously realize intervened updates and predictions. In particular, as new data are usually generated very fast from the data stream such as in the case of Twitter data [241], online learning typically uses data samples for only one epoch training and then switches for newer samples. Furthermore, lifelong learning [242] is another incremental learning branch that is characterized by the time span of the learning process and refers to the incremental learning in an infinite time span, to accumulate the learned knowledge for future learning and problem solving. One major challenge of incremental learning is the continuous model adaptation and efficient paradigm design of learning from new data. One typical cause is the concept drift [243] which occurs over time leading to a change in the functional relationship between the model inputs and outputs. Furthermore, learning data of new classes, the model can forget previously learned knowledge. This refers to another cause as the catastrophic forgetting [244]. An early work [245] incorporates the incremental learning with partial instance memory of data samples from the boundaries of the induced concepts. The model updates are based on both previous and new samples. 
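To make the rehearsal idea behind such incremental methods concrete, the following is a minimal sketch of a streaming learner with a small exemplar memory; it is not the method of [240] or [245], and the logistic model, reservoir-sampling memory policy, and drifting toy stream are illustrative assumptions. The learner takes one gradient step per incoming sample and replays a few stored exemplars to limit forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlineLogistic:
    """Streaming binary classifier with a tiny rehearsal memory."""
    def __init__(self, dim, lr=0.1, memory_size=32):
        self.w = np.zeros(dim)
        self.lr, self.memory_size = lr, memory_size
        self.memory, self.seen = [], 0

    def _step(self, x, y):
        p = 1.0 / (1.0 + np.exp(-x @ self.w))
        self.w -= self.lr * (p - y) * x              # gradient of the log-loss

    def observe(self, x, y):
        self._step(x, y)                              # learn from the new sample once
        if self.memory:                               # replay a few stored exemplars
            for i in rng.choice(len(self.memory),
                                size=min(4, len(self.memory)), replace=False):
                self._step(*self.memory[i])
        self.seen += 1                                # reservoir-sample the stream
        if len(self.memory) < self.memory_size:
            self.memory.append((x, y))
        elif rng.random() < self.memory_size / self.seen:
            self.memory[rng.integers(self.memory_size)] = (x, y)

model = OnlineLogistic(dim=3)
for t in range(1000):                                 # the data distribution drifts half-way
    shift = 0.0 if t < 500 else 1.5
    raw = rng.normal(size=2) + shift
    x = np.append(raw, 1.0)                           # constant bias feature
    model.observe(x, float(raw.sum() > 2.0 * shift))
print("final weights:", model.w)
```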
The online learning in [240] employs a cross-distillation loss together with a two-step learning technique, respectively for the new class data learning and the exemplar data learning, to tackle catastrophic forgetting. Furthermore, it counts on a feature-based exemplary set update to mitigate the concept drift. This method outperforms the results of current state-of-the-art offline incremental learning methods on the CIFAR-100 and ImageNet-1000 datasets in online scenarios. To perform lifelong learning on edge devices with limited computation resources, a dynamically growing neural network architecture is introduced in [246] based on the self-organizing neural network (SONN) [247]. In the architecture, a CNN backbone is used as the encoder and the SONN is applied after it as the classifier, with the capability to grow the network when required, to perform lifelong object detection on FPGA. Incremental learning is excellent at autonomously adapting models to continuously changing environments of data, features, and task spaces. By learning from data closer to the prediction environment, the model performance on real environments is improved as well. In particular, incremental learning fits well to the edge environment with limited computing resources, as data can be fetched for learning in a piecemeal manner and then discarded right after the training. In an online setting, incremental learning consumes more network bandwidth and computation resources in exchange for higher model performance and adaptation capability. The cost and energy consumption are increased. ## VI Technique Review Summary In this section, we summarize in Table II all reviewed techniques with regard to the Edge ML requirements. The three left columns illustrate the individual techniques, or technique groups, while the top two rows list the Edge ML requirements. The following notations are used to facilitate the relationship descriptions between techniques and requirements. * "+": the reviewed technique improves the corresponding Edge ML solution regarding the specific Edge ML requirement. For instance, quantization techniques reduce the inference task latency by simplifying the computation complexity. * "-": the reviewed technique negatively impacts the corresponding Edge ML solution regarding the specific Edge ML requirement. For instance, quantization techniques lead to accuracy loss during inference due to the low precision representation of data. * "*": the impact of the reviewed technique on the corresponding Edge ML solution varies according to the specific configurations or setup. For instance, transfer learning techniques improve the target model performance under the conditions that the source task and the target task are correlated, and the data quantity and quality on the target domain are sufficient. Weakness in data quantity or quality on the target domain can result in unsatisfactory model performance. * "/": the reviewed technique does not directly impact the corresponding Edge ML solution regarding the specific Edge ML requirement. For instance, federated learning techniques do not directly improve or worsen the labelled data independence for a supervised learning process. Moreover, the two following assumptions have been made to ensure an objective evaluation of each Edge ML technique regarding the requirements: * Appropriate modelling and learning: all models for ML tasks are designed and trained following the state-of-the-art solution. 
No serious over-fitting or under-fitting has occurred, so that the models' performance can be compared before and after applying the Edge ML techniques. * Statistic scenario: When performing a task, statistic scenarios instead of the best or the worst scenario are considered for techniques evaluation, as certain technique, e.g., Early Exit of Inference, can produce worse results comparing to the corresponding conventional technique in extreme cases where all the side branch classifiers in a model do not produce high enough confidence and thus fails to stop the inference earlier. However, statistically EEoI technique is able to improve energy efficiency and optimize cost when performing a number of running tasks. From Table II, one can see that most of edge inference techniques focus on reducing inference workload to improve computational efficiency and task latency. Distributed inference makes the inference execution of large models possible on the edge side by introducing more computational and communication workload for coordination and synchronization among edge clients. Regarding the distributed learning, split learning is able to offer a more competitive performance and privacy compared to federated learning, when cloud server is available to cooperate on the training process. Transfer learning mainly focuses on accelerating the training task latency by facilitating knowledge sharing cross domains, whilst meta-learning and self-supervised learning respectively provide an efficient and a consolidate way to learn the data representation instead of specific tasks from labeled and unlabeled data to facilitate the learning of new tasks. Moreover, other learning paradigms, i.e., instance-based learning and weakly supervised learning, provide alternative solutions to directly learn from instances or partially labelled data. Multi-task learning is efficient to reduce model size and discover task correlations for better performance when multiple correlated tasks need to be realized simultaneously. At last, incremental learning improves the model performance by continuously adapting models to the real environment by learning from ever-evolving data. The overall requirements of energy efficiency and cost optimization are met by most of Edge ML techniques from different aspects of ML and EC. ## VII Open Issues Despite the divers methods and paradigms of Edge ML and the initial success of their powered edge solutions, challenges and open issues are not rare in the Edge ML field, slowing down the technological progress. In this section, we summarize some open issues of Edge ML to shed light on its future directions. **Learning Generalization and Adaptation.** Currently ML techniques are going through a transition from the learning of specific labels to the learning of data representations. Meta-learning and self-supervised learning provide intuitive manners to progress in this direction. Nevertheless, meta-learning usually relies on a support dataset to perform any task specific adaptation, and self-supervised learning requires tuning as well for specific tasks. The generalization from representation learning brings the general cognitive abilities to models, while automatic adaptation techniques to specific tasks such as zero-shot learning in NLP need to be further studied and explored so that specific tasks can be solved directly without performing any adaptation process. 
This is particularly important to Edge ML as human intervention or guidance are not guaranteed comparing to the cloud based solutions. **Theoretical Foundation.** With the rapid emergence of Edge ML techniques, the theoretical foundation related to the emerging techniques for optimal design and empirical validation are not up to date. For example, most model compression and approximations methods do not have mathematical proofs to the optimal compression radio. Federated learning also may not converge in the training process, if the data distribution varies largely from clients. Finally, self-supervised learning continuously seeks optimal contrastive objective functions to optimize learning efficiency. Theoretical foundations are crucial to validate empirical conclusions from emerging fields and provide guidelines for optimal design of Edge ML solutions. **Architectures for Heterogeneity and Scalability.** An Edge ML environment is known to be heterogeneous in distribution of entities such data, device resources, network infrastructures, and even ML tasks and models. And with a large number of participant edge devices, bottlenecks have been identified affecting Edge ML performance. Such bottlenecks include the communication bottleneck in federated learning for gradient communications and the computational bottleneck in meta-learning when the support set is large. Furthermore, all edge devices are not often activated at the same time, and the temporal disparity feature makes it more challenging for the organizational architecture to manage the Edge ML solution. Adding local edge servers can alleviate the problem of the local perimeter, and to reach the global heterogeneity management with a large number of edge devices. Advanced distributed architectures for ML tasks are expected to synchronize and coordinate entities among all heterogeneity levels and deliver robust and scalable solutions for dynamic and adaptive aggregation in distributed setup. **Fortified Privacy.** Privacy preservation is the primary objective in distributed learning and inference paradigms, as no data are shared outside of the local client. However, sensitive information can still be leaked via methods such as reverse deduction of models. Although security- and privacy-oriented methods can improve the situation, a significant computation complexity is introduced in the edge devices in the meantime increasing task latency and energy consumption. Novel and lightweight computing paradigms are expected to protect data and model leakage during information exchange and go from enhanced privacy to fortified privacy. **Hybrid Approach.** With the reviewed techniques tackling different aspects of Edge ML requirements, hybrid strategies with more than one technique is now commonly adopted when designing Edge ML solution. Hybrid ML benefits from several techniques and can achieve better performance than the use of any single method. The integration of two or three techniques are popular in the reviewed literature, while with a given set of design requirements, complete hybrid approaches covering all Edge ML phases, including data preprocessing, learning, and inference, are missing. The hybrid approach with a thorough technical design for each phase can best contribute to the improvement of model capability, and thus is a direction worth exploring. 
**Data Quality Assurance.** Nowadays, a huge amount of data is created on the edge devices at every second, but most of it cannot be directly used by ML without labeling and preprocessing process. As a step forward, self-supervised learning proves to be good at learning structured and unlabeled data. However, the data quality such as noisy data, non-IID data, imbalanced distribution, or data corruptions and errors, still impacts the learning results and tends to alter the model performance. Although a number of methods are introduced, the selection of suitable methods is determinant to the results and highly relies on expertise. Regular interaction with human for labelling and selection of quality data are not realistic especially for edge users, and thus embedded learning paradigms integrating native data selection for quality control and preprocessing of different input qualities is the future of Edge ML. **Framework Extension.** The number of frameworks keeps increasing for Edge ML. However, due to the resource-constrained nature of the edge environment, existing frameworks tend to be lightweight for resource efficiency and thus limited in their support of ML features and functions: most of native Edge ML frameworks are only designed for edge inference, and involve additional steps and computation for model conversion. Device-specific frameworks often support a subset of neural network layers and activation functions, which requires model re-design and re-training before deployment as well. With the rapid development of computing capability on edge devices, the trade-off between resource efficiency and functionality can be further studied to extend the supporting edge features and functions. **Standardization.** There is a widespread standardization organizations (SDOs) on ML (e.g., ISO/IEC JTC 1/SC 42 Artificial Intelligence [248], ITU-T Focus Groups [249, 250], IEEE SA Artificial Intelligence Systems, only to name a few) contributing to the community development and reference solutions. However, the is clearly very few ongoing activities within initiatives and SDOs (e.g., ETSI ISG EMC [251]) focus on defining native specifications for Edge ML solutions. Along with the uprising development of Edge ML technologies, Edge ML standards and specifications covering MLOps lifecycle in edge environment are expected to fill the gap in Edge ML ecosystem and optimize ML at the edge for reference and guidance. ## VIII Conclusion Due to the specific features of privacy preserving, low-latency experiences, and low energy consumption, edge powered machine learning solutions have been rapidly emerging in end-user devices for services and applications in the domains of CV, NLP, healthcare, UAV, etc. In this paper, we provide a comprehensive review of Edge ML techniques focusing on the two parts of ML solutions: (i) edge inference, and (ii) edge learning. The review offers a panoramic view of the techniques perimeter through a thorough taxonomy. Recent and representative works are presented for each technique with its targeting Edge ML requirements. Open issues are identified for future research directions. To the best of our knowledge, this is the first review covering the entire and detailed technique perimeter of Edge ML learning and inference. This paper can serve as a reference to select adaptive ML paradigms and build corresponding solutions in edge environments. 
Due to the large perimeter to cover, we adapted the review strategy to prioritize technique breadth over technique depth; further work will therefore focus on surveying more detailed research challenges and methods for targeted, specific technique branches. In the meantime, we are also investigating scalable architectures for Edge ML solutions over heterogeneous infrastructural resources, data and tasks.
2306.04864
Extraction of XUV+IR ionization amplitudes from the circular dichroic phase
A strong helicity dependence of reconstruction of attosecond bursts by beating of two-photon transitions (RABBITT) with circularly polarized XUV and IR pulses was reported by Han et al. [Nature Physics 19, 230 and arXiv 2302.04137 (2023)]. They attributed a circular dichroic phase in RABBITT to the helical structure of the photoelectron wave packets in the final state. We exploit this effect to determine the magnitude and phase of two-photon XUV+IR ionization amplitudes. In s-electron targets (H, He, Li), such a determination is fully ab initio and free from any instrumental limitations. In heavier noble gases like Ar, characterization of two-photon ionization amplitudes can be made from the circular dichroic phase with minimal and very realistic assumptions.
Anatoli Kheifets
2023-06-08T01:29:44Z
http://arxiv.org/abs/2306.04864v1
# Extraction of XUV+IR ionization amplitudes from the circular dichroic phase ###### Abstract A strong helicity dependence of reconstruction of attosecond bursts by beating of two-photon transitions (RABBITT) with circularly polarized XUV and IR pulses was reported by Han _et al._ [Nature Physics 19, 230 and arXiv 2302.04137 (2023)]. They attributed a circular dichroic phase in RABBITT to the helical structure of the photoelectron wave packets in the final state. We exploit this effect to determine the magnitude and phase of two-photon XUV+IR ionization amplitudes. In \(s\)-electron targets (H, He, Li), such a determination is fully _ab initio_ and free from any instrumental limitations. In heavier noble gases like Ar, characterization of two-photon ionization amplitudes can be made from the circular dichroic phase with minimal and very realistic assumptions. pacs: 32.80.Rm 32.80.Fb 42.50.Hz Circular dichroism (CD) in atomic and molecular photoionization has been studied intensely in recent years. In chiral molecules, it is attributed to the left and right handedness of molecular enantiomers [1]. In atoms, electron ring current in various magnetic sublevels can be co- or counter-rotating (CO or CR) with the circularly polarized ionizing radiation. This results in different ionization probabilities [2; 3; 4] and time delays [5]. The CD effect is inherent in atomic double photoionization where the momenta of the two receding photoelectrons and the propagation direction of the photon form a chiral triangle [6; 7; 8; 9]. Chirality can be also imprinted on atoms by a synthetic chiral light [10]. Very recently, the CD effect has been observed in the process of reconstruction of attosecond bursts by beating of two-photon transitions (RABBITT) driven by circularly polarized radiation [11; 12]. Depending on the co- and counter-rotating of the extreme ultraviolet (XUV) pump and the infrared (IR) probe pulses, the atoms exhibited different set of RABBITT magnitude and phase parameters. This effect was attributed to the helical structure of the photoelectron wave packets in the final state. While in the CR case, absorption of an IR photon leads to the final state composed of two partial waves, the CO configuration corresponds to the final state with just a single partial wave. This disparity is responsible for the RABBITT CD effect. The same single and dual wave disparity creates a circular dichroic time delay in single XUV photon ionization [5]. However, its observation requires a polarized target atom while in RABBITT such a polarization is not necessary. The RABBITT process with linear polarization is driven by interference of the two dual wave final states and their amplitudes and phases are entangled. Disparity of the circular polarization RABBITT creates an opportunity to disentangle the two interfering final states and to extract the corresponding ionization amplitudes and phases individually. In the present work, we demonstrate such an extraction for the helium atom and show that it is free from any instrumental limitations and inaccuracies. This way a complete experiment can be performed in two-photon XUV+IR ionization similarly to single XUV photon ionization [13; 14]. In heavier noble gases like argon, the full characterization of the two-photon ionization amplitudes can be made from the circular dichroic phase using minimal and very realistic assumptions. 
In a RABBITT measurement [15; 16], an ionizing XUV attosecond pulse train (APT) is superimposed on an attenuated and variably delayed replica of the driving IR pulse. The XUV photon \(\Omega^{\pm}=(2q\pm 1)\omega\) is absorbed from the initial bound state and then is augmented by an IR photon absorption \(+\omega\) or emission \(-\omega\), leading to formation of the even order sideband (SB) in the photoelectron spectrum. The center of the IR pulse is shifted relative to the APT by a variable delay \(\tau\) such that the magnitude of a SB peak oscillates as \[S_{2q}(\tau)=A+B\cos[2\omega\tau-C]. \tag{1}\] The RABBITT parameters \(A\), \(B\) and \(C\) entering Eq. (1) can be expressed as \[A = \sum_{m}\left[|{\cal M}^{(-)}_{m}({\mathbf{k}})|^{2}+|{\cal M}^{(+)}_{m}({\mathbf{k}})|^{2}\right]\] \[B = 2\,{\rm Re}\sum_{m}\left[{\cal M}^{(-)}_{m}({\mathbf{k}}){\cal M}^{(+)*}_{m}({\mathbf{k}})\right]\] \[C = \arg\sum_{m}\left[{\cal M}^{(-)}_{m}({\mathbf{k}}){\cal M}^{(+)*}_{m}({\mathbf{k}})\right]\equiv 2\omega\tau_{a}. \tag{2}\] Here \({\cal M}^{(\pm)}_{m}({\mathbf{k}})\) are complex and angle-dependent amplitudes of two-photon ionization produced by adding (\(+\)) or subtracting (\(-\)) an IR photon, respectively. An incoherent summation over the angular momentum projection of the initial state \(m\) is explicit in Eq. (2). The atomic time delay \(\tau_{a}\) quantifies the timing of the XUV ionization process. The angular dependence of the amplitudes \({\cal M}^{\pm}({\mathbf{k}})\) can be deduced from an analytic expression [17]: \[{\cal M}^{\pm}_{m}({\mathbf{k}})\propto\sum_{\lambda=l\pm 1}\ \sum_{L=\lambda\pm 1}\ \sum_{|M|\leq L,\,|\mu|\leq\lambda}(-i)^{L}e^{i\eta_{L}}\,Y_{LM}(\hat{k})\,\langle Y_{LM}|Y_{1m_{2}}|Y_{\lambda\mu}\rangle\langle Y_{\lambda\mu}|Y_{1m_{1}}|Y_{lm}\rangle\] \[\times{\sum\!\!\!\!\!\!\!\int}_{\ \kappa}\ \frac{\langle kL|r|\kappa\lambda\rangle\langle\kappa\lambda|r|nl\rangle}{E_{nl}+\Omega-E_{\kappa}}\ . \tag{3}\] Here \(\langle nl|,\langle\kappa\lambda|\) and \(\langle kL|\) are the initial, intermediate and final electron states defined by their linear and angular momenta. The linear polarization (the LIN case) corresponds to \(m_{1}=m_{2}=0\) whereas for the circular polarization \(m_{1}=1\) and \(m_{2}=\pm 1\) in the CO/CR cases, respectively. For an \(s\)-electron target, \(l=m=0\) and \(\lambda=1\). The LIN and CR cases correspond to \(M=0\) and \(L=\{0,2\}\) whereas the CO case has \(M=2\) which excludes \(L=0\). By carrying out the angular integration in Eq. 
(3) we can write \[{\cal M}^{\pm}(\mathbf{k})=\left\{\begin{array}{cc}T_{0}^{\pm}+2P_{2}(\cos\theta)T_{2}^{\pm}&\mbox{LIN}\\ -T_{0}^{\pm}+P_{2}(\cos\theta)T_{2}^{\pm}&\mbox{CR}\\ e^{2i\phi}\sin^{2}\theta\ T_{2}^{\pm}&\mbox{CO}\,,\end{array}\right. \tag{4}\] where \(T_{L}^{\pm}\) absorbs the radial parts of Eq. (3). In the LIN case, \(\theta=90^{\circ}\) defines the photon propagation direction whereas in the CO/CR cases this direction corresponds to \(\theta=0/180^{\circ}\). That is why, to compare the linear and circular cases, we shift the CO/CR angular scale by \(90^{\circ}\). The azimuthal angle \(\phi\) in Eq. (4) is immaterial because of the rotational symmetry of the ionization process. For the same reason, the directions \(\theta\) and \(180^{\circ}-\theta\) are equivalent, so only half of the polar angular range needs to be analyzed. Eq. (4) defines the RABBITT phase for \(\phi=0\): \[C^{\rm LIN}\ =\ \arg\left[T_{2}^{-}T_{2}^{+*}\right]+ \tag{5}\] \[\arg\left[2P_{2}(\cos\theta)+\frac{T_{0}^{-}}{T_{2}^{-}}\right]+ \arg\left[2P_{2}(\cos\theta)+\left(\frac{T_{0}^{+}}{T_{2}^{+}}\right)^{*}\right]\] \[C^{\rm CR/CO}\ =\ \arg\left[T_{2}^{-}T_{2}^{+*}\right]+\arg\left[P_{2}(\cos\theta)-\frac{T_{0}^{+/-}}{T_{2}^{+/-}}\right]\] We observe in Eq. (5) that the phases of the \(\pm\) transition amplitudes are entangled in the LIN case whereas they stand alone in the CR/CO cases. For further analysis, we rewrite the ratio \(T_{0}^{\pm}/T_{2}^{\pm}=R^{\pm}\exp(\pm i\Delta\Phi)\) and note that \(R^{+}<1<R^{-}\) by virtue of the Fano propensity rule for the continuous-continuous (CC) transitions [19]. We also use the emission/absorption phase identity \(\Delta\Phi\equiv\Phi_{L^{\prime}}^{-}-\Phi_{L}^{+}\approx\Phi_{L^{\prime}}^{\pm}-\Phi_{L}^{\pm}\) postulated in [20]. Based on this identity and Eq. (5), \(\Delta\Phi=C_{\rm CO}(\theta_{\rm m})-C_{\rm CR}(\theta_{\rm m})\) where the "magic angle" defines the node of the Legendre polynomial, \(P_{2}(\theta_{\rm m})=0\). With all these observations, we can conclude that the LIN phase is sandwiched between the CO and CR ones: \[C_{\rm CR}(\theta)<C_{\rm LIN}(\theta)<C_{\rm CO}(\theta)\ \ \forall\theta \tag{6}\] To prove that this relation is indeed satisfied, we conduct a set of simulations by solving numerically the time-dependent Schrodinger equation which describes the helium atom driven by the RABBITT configuration of pulses. Numerical details of these simulations can be found in our recent works [21; 22]. Most essentially, the IR carrier wavelength is set in the 800 nm range and the XUV and IR field intensities are kept within the \(1\times 10^{10}\) W/cm\({}^{2}\) range. The latter condition keeps the ionization process within the boundaries of the lowest order perturbation theory (LOPT) which is used to derive Eq. (3). The results of the TDSE calculations on He are exhibited in Fig. 1 for the lowest sidebands SB16-20. Our numerical results fully support Eq. (6) and are in good agreement with the experimental LIN phase for SB18 reported in [23]. Another important observation is that the boundaries of the CO/CR phases that encompass the LIN phase become narrower as the SB order grows. Simultaneously, the phase drop by \(\sim\pi\) becomes steeper. The latter observation for the LIN phase has already been made and attributed to the convergence \(R^{\pm}\to 1\) [20; 24; 25]. While the two ratios \(R^{\pm}\) enter the LIN phase in Eq. (5), the CO/CR phases contain them separately. 
Hence, we can fit our numerical CO/CR phases with the analytic expression (5) and obtain the corresponding ratios \(R^{\pm}\) and phase differences \(\Delta\Phi^{\pm}\). Results of this procedure are exhibited in Fig. 2. The top panel shows the ratios \(R^{\pm}=|T_{0}^{\pm}/T_{2}^{\pm}|\) whereas the bottom panel displays the phase differences \(\Delta\Phi^{\pm}=\arg[T_{0}^{\pm}/T_{2}^{\pm}]\). Thus extracted ratios \(R^{\pm}\) are compared with the values returned by a hydrogenic model deployed in [20]. As expected, \(R^{\pm}\to 1\) and \(\Delta\Phi^{\pm}\to 0\) as the photoelectron energy grows.
Figure 1: Angular dependent RABBITT phase (\(C(\theta)\) parameter) in SB16-20 of helium obtained with CO (red circles) and CR (green squares) circular polarization as well as with linear (LIN) polarization (blue diamonds). The CO and CR fit with Eq. (5) is shown with red and green solid lines, respectively. The experimental linear phase for SB18 is from [23] and CO/CR phases are from [12].
The emission/absorption phase identity as well as the phase determination at the magic angle are well supported by our numerical results. With our ratio determination, we can estimate an excess of the CO RABBITT magnitude over the CR one by the value of \(+10\%\) for SB32 and \(+7\%\) for SB34, which is very similar to [11] and contrary to [26]. The latter work claims the CR excess over the CO one. The hydrogenic model exploited in [17; 20] loses its validity close to threshold at very low photoelectron energies. In the present case, this model clearly fails for SB16. This sideband is formed by an IR photon absorption via an intermediate discrete bound state. Such an under-threshold (or uRABBITT) process has been studied extensively in He [27; 28; 29; 30] and in heavier noble gases - Ne [31; 32; 33] and Ar [21]. In helium, the discrete phase \(\Phi^{+}\) oscillates with the IR photon energy \(\omega\) when the submerged harmonic peak H15 passes through the discrete \(1s3p\) level. As an illustration of this oscillation, we compare in Fig. 2 two sets of TDSE calculations performed at the central IR photon energies of 1.55 eV and 1.57 eV. Such a minuscule photon energy variation causes a significant change of the ratio and the phase difference for SB16 while these parameters for other sidebands remain barely changed. In \(p\)-electron targets, the parameterization of the two-photon amplitudes depends on the orbital momentum projection \(m\). In the CO case, these \(m\)-specific amplitudes take the form: \[{\cal M}^{-}_{m=0} \propto \cos\theta\ [-T^{-}_{1}+\bar{P}_{3}(\cos\theta)\ T^{-}_{3}] \tag{7}\] \[{\cal M}^{+}_{m=0} \propto e^{2i\phi}\sin^{2}\theta\cos\theta\ T^{+}_{3}\] \[{\cal M}^{-}_{m=+1} \propto \sin\theta\ [-6T^{-}_{1}+\bar{P}^{1}_{3}(\cos\theta)\ T^{-}_{3}]\] \[{\cal M}^{+}_{m=+1} \propto e^{3i\phi}\sin^{3}\theta\ T^{+}_{3}\] \[{\cal M}^{\pm}_{m=-1} \propto \mp\sin\theta\ [-T^{\pm}_{1}+\bar{P}^{1}_{3}(\cos\theta)\ T^{\pm}_{3}]\] The CR amplitudes can be obtained by permutation of the emission/absorption \(+/-\) superscripts. In Eq. (7) we introduce \(\bar{P}_{3}(\cos\theta)=P_{3}/P_{1}=(5\cos^{2}\theta-3)/2\) and \(\bar{P}^{1}_{3}(\cos\theta)=P^{1}_{3}/P^{1}_{1}=3(5\cos^{2}\theta-1)/2\). Eq. 
(7) defines the RABBITT phases for \(\phi=0\): \[C^{\rm CR/CO}_{m=0} = \arg\left[T^{-}_{3}T^{+*}_{3}\right]+\arg\left[\bar{P}_{3}(\cos\theta)-\frac{T^{\pm}_{1}}{T^{\pm}_{3}}\right] \tag{8}\] \[C^{\rm CR/CO}_{m=+1} = \arg\left[T^{-}_{3}T^{+*}_{3}\right]+\arg\left[\frac{1}{6}\bar{P}^{1}_{3}(\cos\theta)-\frac{T^{\pm}_{1}}{T^{\pm}_{3}}\right]\] We do not present here the explicit expressions for the RABBITT phases in the \(m=-1\) case for brevity. We only note that it is entangled with the \(\pm\) amplitudes and that \(C^{\rm CR}_{m=-1}=C^{\rm CO}_{m=-1}\). Eqs. (8) offer a convenient parameterization of the RABBITT phases for \(m=0\) and \(m=+1\) in terms of the ratios \(R^{\pm}=|T^{\pm}_{1}/T^{\pm}_{3}|\) and the phase differences \(\Delta\Phi^{\pm}=\arg[T^{\pm}_{1}/T^{\pm}_{3}]\). We demonstrate the utility of this parameterization in Fig. 3 where we plot the RABBITT phases for the lowest SB12 in argon. The top and middle panels of this figure display the CO and CR phases, respectively, for \(m\)-resolved and \(m\)-summed cases. The ratios \(R^{\pm}\) and phase differences \(\Delta\Phi^{\pm}\) are used as fitting parameters to fit the corresponding CO/CR phases. These parameters are displayed in Fig. 4. Two separate fits of \(m=0\) and \(m=+1\) produce two sets of parameters which should, in principle, be identical. Their actual difference serves as an accuracy indication of the fitting procedure. We note that, similarly to helium, \(R^{+}<1<R^{-}\). The only exception is \(R^{+}_{m=0}\) which exceeds unity for the highest SB26-28. If this result is not accidental, it may signal a breakdown of the Fano propensity rule due to the proximity to the Cooper minimum where the \(3p\to Es\) transition gradually takes over the \(3p\to Ed\). While the individual \(m\) parameterization is very accurate and the acquired sets of the ratios and phase differences are sufficiently close between the \(m=0\) and \(m=+1\) projections, such a parameterization is not practical experimentally. Indeed, a RABBITT measurement on an unpolarized target atom corresponds to the incoherent \(m\) summation. To deduce the two-photon ionization amplitude from such a measurement we analyze the CD exhibited in the bottom panel of Fig. 3. Because of the CR/CO phase identity of the \(m=-1\) amplitude, its contribution vanishes from the CD. In addition, the \(m=0\) amplitude makes no contribution in the polarization plane. Thus, close to \(\theta=90^{\circ}\), it is the \(m=+1\) amplitude that brings the dominant contribution.
Figure 2: Top: ratios \(R^{\pm}=|T^{\pm}_{0}/T^{\pm}_{2}|\) as extracted from the fit of Eq. (5) to RABBITT phases of helium \(C(\theta)\) calculated in TDSE and shown in Fig. 1. Comparison is made with a hydrogenic model employed in [20]. Bottom: the phase differences \(\Delta\Phi^{\pm}=\arg[T^{\pm}_{0}/T^{\pm}_{2}]\) as extracted from the same fit as well as the magic angle difference \(\Delta\Phi(\theta_{\rm m})\). The experimental values are from [12].
This dominance is illustrated in the bottom panel of Fig. 3 where the CD calculated from the \(m=+1\) amplitudes and the incoherent \(m\) summation are exhibited. By using the proximity of the two sets of results near \(\theta=90^{\circ}\), we apply the \(m=+1\) parameterization to the \(m\)-summed CD results over a restricted angular range. Because the CD contains both the absorption and emission \(\pm\) amplitudes, the number of the fitting parameters should be doubled in comparison with the separate CO and CR fits. 
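As a purely illustrative numerical sketch of the kind of single-channel phase fit discussed above, the snippet below fits the CO/CR functional form of Eq. (5) for an \(s\)-electron target, \(C(\theta)=C_{0}+\arg[P_{2}(\cos\theta)-R\,e^{i\Delta\Phi}]\), to synthetic data using scipy.optimize.curve_fit; the parameter values, noise level and angular grid are assumptions and are not taken from the present work. The same template applies to the \(m=+1\) form of Eq. (8) with \(\bar{P}^{1}_{3}/6\) in place of \(P_{2}\).

```python
import numpy as np
from scipy.optimize import curve_fit

def p2(x):
    """Legendre polynomial P2."""
    return 0.5 * (3.0 * x ** 2 - 1.0)

def phase_model(theta, c0, r, dphi):
    """C(theta) = C0 + arg[P2(cos theta) - R exp(i dPhi)], cf. the CO/CR form of Eq. (5)."""
    return c0 + np.angle(p2(np.cos(theta)) - r * np.exp(1j * dphi))

# Synthetic "measured" phase with illustrative parameters and a little noise.
theta = np.deg2rad(np.linspace(5.0, 85.0, 30))
true = dict(c0=0.4, r=1.6, dphi=0.25)
rng = np.random.default_rng(1)
c_data = phase_model(theta, **true) + 0.02 * rng.normal(size=theta.size)

# Nonlinear least-squares fit; measured phases spanning a +/-pi jump would be
# unwrapped (np.unwrap) before fitting.
popt, pcov = curve_fit(phase_model, theta, c_data, p0=[0.0, 1.2, 0.1])
print("fitted   C0, R, dPhi:", popt)
print("expected            :", list(true.values()))
```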
Such an extended fit becomes unstable and we need to impose additional restrictions on the fitting parameters to improve its accuracy. To do so, we require that \(R^{+}=1/R^{-}\) and \(\Delta\Phi^{+}=\Delta\Phi^{-}\). These restrictions are well justified as is seen from the results of the separate CO/CR fits for \(m=0\) and \(m=+1\) projections. The results of the restricted \(R\) and \(\Delta\Phi\) fit of the CD are overplotted in Fig. 4 for the lowest SB12-18 and are found in fair agreement with the other four CR/CO and \(m=0,+1\) sets of parameters. Similar to the helium results exhibited in Fig. 2, the hydrogenic approximation to the amplitude ratios is more or less accurate for sufficiently large photoelectron energies. However, close to the threshold, the deviation between the absorption and emission \(\pm\) ratios becomes significantly larger than predicted by the hydrogenic model. In conclusion, we devised a procedure to extract the complex two-photon ionization amplitudes from the circular dichroic phase acquired in a RABBITT measurement with circular polarized XUV and IR pulses. Such measurements have been realized very recently by Han _et al_. [11; 12] and demonstrated distinct sets of RABBITT parameters with the co- and counter-rotating XUV/IR radiations. In the case of helium and other \(s\)-electron targets, the proposed method rests solely on the experimentally accessible dichroic phase and does not require any further approximations or simplifications. Moreover, as the amplitudes are extracted from the angular dependent dichroic phase, the absolute value of the latter is not needed. This is important experimentally as this absolute value can be affected by the XUV harmonic group delay. In the case of \(p\)-electron targets such as outer shells of heavier noble gases, the amplitude extraction can be made fully _ab initio_ from the \(m\)-resolved dichroic phase. In this procedure the proximity of the \(m=0\) and \(m=+1\) results serves as a useful check of the accuracy of the method. In experimental measurements on unpolarized targets, the two-photon amplitudes can be extracted from the angular dependent CD taken as the difference between RABBITT phases acquired with the CO and CR circular polarizations. The CD determi Figure 3: Angular dependent RABBITT phase (\(C(\theta)\) parameter) in SB12 of argon obtained with CO (top panel) and CR (middle panel) circular polarization. The TDSE calculations with the sum over all \(m\) projections are shown with red dots whereas \(m\) specific calculations are displayed with green circles (\(m=0\)), blue asterisks (\(m=1\)) and black squares (\(m=-1\)). The CO calculation with \(m=-1\) is overplotted in the middle panel with black triangles to demonstrate that \(C^{\rm CR}_{m=-1}\simeq C^{\rm CO}_{m=-1}\). The parameterization with Eq. (5) for \(m=0\) and \(m=+1\) is shown with the solid lines of the matching color. Bottom: the CD\(=C^{\rm CO}-C^{\rm CR}\) for SB12-18 The dotted symbols and the solid lines of matching color correspond to the all \(m\) and \(m=+1\) calculations, respectively. The experimental CD values for SB12 are from [11]. Figure 4: Top: the ratios \(R^{\pm}=|T^{\pm}_{1}/T^{\pm}_{3}|\) as extracted from the fit of Eq. (8) to RABBITT CO/CR phases with \(m=0\) and \(m=+1\) as well the fit of the \(m\)-summed CD. Comparison is made with a hydrogenic model employed in [20]. Bottom: the analogous results for the phase differences \(\Delta\Phi^{\pm}=\arg[T^{\pm}_{1}/T^{\pm}_{3}]\). 
nation of the two-photon ionization amplitudes rests on the assumption that the absorption/emission ratio and the phase difference are about the same in the CR and CO cases. This assumption is well justified by the \(m\)-specific tests of Ar. In a broader context, our results offer an opportunity to conduct a complete experiment on the two-photon XUV+IR ionization whereupon the moduli and phases of all the relevant ionization amplitudes are determined experimentally. So far such experiments could only be conducted in single-photon XUV ionization [13; 14]. An alternative method based on the global fitting of the time- and angle-resolved RABBIT traces [31] allows to extract the two-photon amplitudes in various \(m\)-projected ionization channels [11]. However, these amplitudes are not independent and can be further reduced to the most essential "building blocks" as demonstrated in the present study. These blocks visualize very distinctly the fundamental properties of two-photon ionization such as the Fano propensity rule both for the discrete [18] and the continuous transitions [19] as well as the emisson/absorption phase identity [20]. The proposed method also tests the validity of the hydrogenic model [17; 20] which fails near the threshold and in the vicinity of resonant excitations. _Acknowledgment:_ Resources of the National Computational Infrastructure facility have been used in the present work.
2310.17898
Approximate-At-Most-k Encoding of SAT for Soft Constraints
In the field of Boolean satisfiability problems (SAT), at-most-k constraints, which suppress the number of true target variables at most k, are often used to describe objective problems. At-most-k constraints are used not only for absolutely necessary constraints (hard constraints) but also for challenging constraints (soft constraints) to search for better solutions. To encode at-most-k constraints into Boolean expressions, there is a problem that the number of Boolean expressions basically increases exponentially with the number of target variables, so at-most-k often has difficulties for a large number of variables. To solve this problem, this paper proposes a new encoding method of at-most-k constraints, called approximate-at-most-k, which has totally fewer Boolean expressions than conventional methods on the one hand. On the other hand, it has lost completeness, i.e., some Boolean value assignments that satisfy the original at-most-k are not allowed with approximate-at-most-k; hence, it is called approximate. Without completeness, we still have potential benefits by using them only as soft constraints. For example, approximate-at-most-16 out of 32 variables requires only 15% of a conventional at-most-k on the literal number and covers 44% of the solution space. Thus, approximate-at-most-k can become an alternative encoding method for at-most-k, especially as soft constraints.
Shunji Nishimura
2023-10-27T05:12:00Z
http://arxiv.org/abs/2310.17898v1
# Approximate-At-Most-k Encoding of SAT ###### Abstract In the field of Boolean satisfiability problems (SAT), at-most-\(k\) constraints, which suppress the number of true target variables at most \(k\), are often used to describe objective problems. At-most-k constraints are used not only for absolutely necessary constraints (hard constraints) but also for challenging constraints (soft constraints) to search for better solutions. To encode at-most-k constraints into Boolean expressions, there is a problem that the number of Boolean expressions basically increases exponentially with the number of target variables, so at-most-k often has difficulties for a large number of variables. To solve this problem, this paper proposes a new encoding method of at-most-k constraints, called approximate-at-most-k, which has totally fewer Boolean expressions than conventional methods on the one hand. On the other hand, it has lost completeness, i.e., some Boolean value assignments that satisfy the original at-most-k are not allowed with approximate-at-most-k; hence, it is called approximate. Without completeness, we still have potential benefits by using them only as soft constraints. For example, approximate-at-most-16 out of 32 variables requires only 15% of a conventional at-most-k on the literal number and covers 44% of the solution space. Thus, approximate-at-most-k can become an alternative encoding method for at-most-k, especially as soft constraints. SAT, at-most-k, encodings, soft constraints + Footnote †: 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 40 International (CC BY 4.0). ## 1 Introduction SAT, or the Boolean satisfiability problem, demonstrates its availability for real-world problems in many areas. To tackle a real-world problem, we have to describe it as a combination of constraints compatible with SAT, and there is a commonly used constraint called _at-most-\(k\)_, along with some Boolean variables, which is satisfied if the variables have at most \(k\) number of trues in total. One of the problems with at-most-k constraints is the combinatorial explosion; the number of encoded Boolean expressions for an at-most-k constraint will explode along with increasing the target variables. To alleviate the restriction around the explosion problem, several encodings [1, 2] have been proposed such as binary encoding, sequential counter encoding, commander encoding, product encoding, etc. While all of these are genuinely at-most-k constraints, of course, this paper provides a different approach to the problem that attempts to drastically reduce the number of Boolean expressions, in exchange for losing an accurate count of trues. The encoding method we propose, called _approximate-at-most-k_, is no longer genuine at-most-k because some parts of solutions for the original at-most-k may not be included in the solution space of our approximate-at-most-k. For example, assignment \((X_{1},X_{2},X_{3},X_{4},X_{5})=(true,true,false,false,true)\) has three trues, so satisfies at-most-\(3\), but may not be satisfied with approximate-at-most-\(3\), depending on the model implemented at that time. In terms of proof theory, where the judgment of a SAT solver is regarded as the existence of proof, we can say that approximate-at-most loses _completeness_. 
Despite the lack of completeness, approximate-at-most can still be useful in some cases, such as using them as _soft constraints_[3, 4], or preferences, in other words, that can be used to describe optional desires around the objective problem. For an example of university timetabling [5, 2], on the one hand, it is necessary that the same teacher not teach two different classes at the same time (this is called a hard constraint). On the other hand, a university policy is suitable for soft constraints; it may be preferable that only 5 teachers have continuous classes, rather than 10 teachers. For soft constraints, we assume that it is not necessary to exactly evaluate satisfiability, and we can compare the benefit of the solution coverage with the cost of their Boolean expressions. The fundamental idea of approximate-at-most-k is common to fractional encoding [6]. While fractional encoding has completeness, approximate-at-most-k does not, as mentioned above, and focuses only on reducing the number of Boolean expressions. ## 2 Approximate At-Most-k Encoding ### Fundamental idea An example is shown in Fig. 1, which illustrates the idea of approximate-at-most-k encoding. First, set \(A\) of four variables (depicted as circles) must be constrained by at-most-\(2\). Next, the number of trues in \(A_{1}\), the left half of \(A\), constrains variables \(B_{1}\), the left half of the set of variables \(B\). Specifically, as follows: * when 0 trues in \(A_{1}\), \(B_{1}\) is constrained by at-most-0, * when 1 true in \(A_{1}\), \(B_{1}\) is constrained by at-most-2, * when 2 trues in \(A_{1}\), \(B_{1}\) is constrained by at-most-4 (makes no sense). In general, \(B_{1}\) is dynamically constrained by the number of trues in \(A_{1}\). Since at most two in \(A\) can be true, at most four in \(B\) can be true. Thus, these constraints in total behave as an at-most-4 constraint on \(B\). Note that this is not a proper at-most-4 constraint because some cases of possible solutions are missing. For example, if \(B_{1}\) has three trues and one true in the other variables of \(B\), the right half, then that case satisfies an at-most-4 constraint on \(B\) but does not satisfy our idea given above. Actually, in that case, \(A_{1}\) needs two trues and the right half of \(A\) needs (at least) one true, and that is not possible under at-most-\(2\) constraint on \(A\). Because of this incompleteness, we call the idea _approximate-at-most_. We have to be careful about approximate-at-most not to use for determination of satisfiability, but to use only for searching better solutions along with soft constraints. That approximate-at-most-4-of-8 constraint is composed of a few at-most-2-of-4 constraints. Since the number of Boolean expressions for at-most constraints grows exponentially with the size of target variables, roughly speaking, we may expect the number of Boolean expressions on approximate-at-most-k will be reduced in some cases, as it were "single large or several small." ### \(2\times 2\) models The idea is able to apply to tree structure recursively, as shown in Fig. 2. Two Boolean variables in the same column at a parent node constrain corresponding four Boolean variables at the child node; when there are number \(n\,(0\leq n\leq 2)\) of trues in the two variables of the parent, the four child variables are constrained by at-most-\(2n\). 
In Boolean expressions, \[\neg\,v_{1}\Rightarrow AtMost_{0}\{u_{11},u_{12},u_{21},u_{22}\}\quad\wedge \quad\neg\,v_{2}\Rightarrow AtMost_{2}\{u_{11},u_{12},u_{21},u_{22}\} \tag{1}\] where \(v_{i}\) and \(u_{ij}\) denote variables in a parent node and child node respectively, and we assume \(v_{i}\) are in order encoding [7], i.e., \(v_{2}\Rightarrow v_{1}\) holds. By giving at-most-\(k\)\((0\leq k\leq 4)\) at the four variables of the top, these models generate approximate-at-most-\((k/4\cdot 2^{m+1})\) of \(2^{m+1}\) at the bottom, where \(m=1,2,\cdots\) denotes the height of the tree. ### generalized \(h\times w\) models More generalized models are shown in Fig. 3, in which each node except the bottoms has a matrix of variables with height \(h\) and width \(w\). On the same hierarchy level, the height and width of nodes are identical. Between a parent column of height \(h_{i}\) and its children of \(h_{i+1}\times w_{i+1}\), we need \(h_{i+1}\cdot w_{i+1}\) to be multiple of \(h_{i}\), i.e. \(h_{i+1}\cdot w_{i+1}\ mod\,h_{i}=0\); when \(h_{i+1}\cdot w_{i+1}=a\cdot h_{i}\) for some \(a\) and \(n\) trues in the parent column, the child variables are constrained by at-most-\((a\cdot n)\). In Boolean expressions, \[\bigwedge_{j=1,\cdots,h}\neg\,v_{j}\Rightarrow AtMost_{h^{\prime}\cdot w^{ \prime}\cdot(j-1)/h}\{\text{child variables of }v\} \tag{2}\] where \(v_{j}\) denotes parent variables of height \(h\) and the child node has \(h^{\prime}\cdot w^{\prime}\) variables. We also assume \(v_{j}\) are in order encoding, i.e., \(v_{j}\Rightarrow v_{j-1}(j=2,\cdots,h)\) holds. For leaf nodes at the bottom, they are simply defined sets of variables of number \(h_{n}\times m\), where \(h_{n}\) is the parent's height and an arbitrary \(m\). By giving at-most-\(k\) at the top node, these models generate approximate-at-most-\((k/(h_{1}\cdot w_{1})\cdot\Pi w_{i}\cdot h_{n}\cdot m)\)-of-\((\Pi w_{i}\cdot h_{n}\cdot m)\). For the sake of ease, let us also use a fraction to denote the number of the constraints as approximate-at-most-\(a/b\)-of-\(n\), when the top node has \(b\) variables and at-most-\(a\) is given, to generate approximate-at-most-\((a/b\cdot n)\)-of-\(n\) in the integer expression. ## 3 Experimental Results All software materials for these experiments are on the GitHub repository [8]. ### \(2\times 2\) models Here are the results of the number of Boolean expressions and coverages of the solution space, about \(2\times 2\) models. The CNF (Conjunctive Normal Form) of approximate-at-most-1/2-of-16 shows the following: Figure 3: \(h\times w\) models have arbitrary (but the same at the same level) numbers of height and width. * auxiliary variables: 12 * clauses: 58 * literals: 168 where every two variables in columns are encoded by order encoding [7] and each at-most-k constraint for small numbers employs binomial (pairwise) encoding. There are 39,203 possible solutions and approximate-at-most covers 68.2% of them, overall, which means fewer numbers are included such as 7\(\sim\)0 true(s). For the solutions of just 8 true variables, there are 12,870 possible solutions, and approximate-at-most covers 38.1% of them. While it will be depended on the objectives of using SAT whether to be focused on overall possible solutions or possible solutions of the maximal number, this paper mainly deals with the former, overall possible solutions from this point. 
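A minimal clause generator for this \(2\times 2\) tree model can be written directly from Eq. (1). The sketch below is an illustrative reimplementation (not the code from the repository [8]), using order-encoded parent columns and binomial at-most constraints for the small bounds; with a tree of height \(m=3\) (16 target variables) it reproduces the counts quoted above (12 auxiliary variables, 58 clauses, 168 literals).

```python
# Illustrative clause generation for the 2x2 tree model of Eq. (1).
# Variables are positive integers; a clause is a list of literals
# (negative literal = negated variable), as in DIMACS CNF.
from itertools import combinations

class CnfBuilder:
    def __init__(self):
        self.n_vars = 0
        self.clauses = []

    def new_var(self):
        self.n_vars += 1
        return self.n_vars

    def at_most(self, k, xs):
        # Binomial at-most-k: forbid every (k+1)-subset of xs.
        for sub in combinations(xs, k + 1):
            self.clauses.append([-v for v in sub])

def approx_at_most_half(builder, height):
    """Approximate-at-most-1/2 over 2**(height+1) leaf (target) variables."""
    def make_node(level):
        vs = [builder.new_var() for _ in range(4)]        # one 2x2 node
        if level == height:                               # bottom level: target variables
            return vs, vs
        leaves = []
        for v1, v2 in (vs[0:2], vs[2:4]):                 # each column feeds one child node
            builder.clauses.append([-v2, v1])             # order encoding: v2 -> v1
            child, child_leaves = make_node(level + 1)
            for u in child:                               # not v1 -> at-most-0 (Eq. 1)
                builder.clauses.append([v1, -u])
            for p, q, r in combinations(child, 3):        # not v2 -> at-most-2 (Eq. 1)
                builder.clauses.append([v2, -p, -q, -r])
            leaves += child_leaves
        return vs, leaves

    top, leaves = make_node(1)
    builder.at_most(2, top)                               # at-most-2 of 4 at the top: fraction 1/2
    return leaves

b = CnfBuilder()
targets = approx_at_most_half(b, height=3)                # 2**(3+1) = 16 target variables
aux = b.n_vars - len(targets)
lits = sum(len(c) for c in b.clauses)
print(len(targets), "targets |", aux, "aux vars |", len(b.clauses), "clauses |", lits, "literals")
```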
Comparing approximate-at-most to conventional encoding methods, we focused on the literal number and choose counter encoding [9, 1], that demonstrates superior results at the literal number among the other encodings: binomial, binary, commander, and product [1]. Counter encoding of at-most-\(k\) is as follows. \[\bigwedge_{i=1}^{n-1}\neg x_{i}\lor r_{i,1} \tag{3}\] \[\bigwedge_{j=2}^{x}\neg r_{1,j}\] (4) \[\bigwedge_{i=2}^{n-1}\bigwedge_{j=1}^{k}\neg r_{i-1,j}\lor r_{i,j}\] (5) \[\bigwedge_{i=2}^{n-1}\bigwedge_{j=2}^{k}\neg x_{i}\vee\neg r_{i-1, j-1}\lor r_{i,j}\] (6) \[\bigwedge_{i=2}^{n}\neg x_{i}\vee\neg r_{i-1,k} \tag{7}\] where \(x_{1},\cdots,x_{n}\) are target variables and \(r_{i,j}\) denotes auxiliary variables. The black and gray lines in Fig. 4 indicate the literal number of approximate and counter at-most-\(1/2\) respectively. Literal rate is defined as below: \[\text{literal rate}=\text{(approximate literals)}\,/\,\text{(counter literals)}. \tag{8}\] As expected, approximate consumes a lower number than a conventional encoding. About the solution coverages, defined as below, \[\text{coverage}=\text{(solutions by approximate)}/\text{(all solutions)}, \tag{9}\] the black line in Fig. 5 indicates them. Predictably, it becomes lower as target variables increase. With the literal rate of approximate/counter, the dashed line (same as in Fig. 4), the coverage is larger than the literal rate at 8\(\sim\)64 variables but it turns around at 128 variables. Since we naturally hope to gain maximal coverage by fewer literals, the value of coverage per literal rate, the gray line, could be regarded as _efficiency_ from that point of view, i.e., \[\text{efficiency}=\text{coverage}\,/\,\text{(literal rate)}. \tag{10}\] In other words, the efficiency indicates a kind of advantage over counter encoding, with consideration for solution coverage. In the range of Fig. 5, approximate-at-most-1/2 of 32 variables exercises the most efficiently: 44.5% coverage on 15.4% literal rate (approximate: 376 / counter: 2,449). ### \(h\times w\) models For an arbitrary \(k\) and \(n\), where \(k<n\), there are many \(h\times w\) models to implement approximate-at-most-\(k\) of \(n\) in general. For an example of approximate-at-most-5 of 10, the followings are available: * \(h_{1}=2,w_{1}=2,m=3\), at-most-2 on the top, fix 1 false and 1 true on the bottom variables * \(h_{1}=2,w_{1}=2,h_{2}=2,w_{2}=2,m=2\), at-most-2 on the top, fix 3 falses and 3 trues on the bottom variables where \(h_{i}\),\(w_{i}\), and \(m\) are as shown in Fig. 3, and there are 8 other models. Among such models to implement approximate-at-most-\(k\), we are naturally interested in the most efficient model, and Fig. 6 shows the best efficiencies (= coverage / literal rate vs counter) for each approximate-at-most-\(k\), of 10, 20, and 30. For instance, approximate-at-most-5 of 10 has the best efficiency on the model: \[h_{1}=2,w_{1}=3,m=2,\text{at-most-3 on the top, fix 1 false and 1 true on the bottom variables and shows the following. Figure 4: Literal numbers of approximate (\(2\times 2\) models) and counter at-most-1/2. As the number of target variables increases, the difference between approximate and counter is enlarged, and thus the rate of them, the dashed line, decreases. * approximate literals: 140 / counter literals: 216 = 64.8% * solution coverage: 64.9% * efficiency: 1.0 From the graph in totality, we can expect high efficiency when the \(k\) value increases, but larger target variables depress it. 
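For reference, the counter encoding of Eqs. (3)-(7) can be transcribed directly; the sketch below is illustrative only (the upper limit in Eq. (4) is read as \(k\), which the printed formula appears to intend). For \(n=32\) and \(k=16\) it yields the 2,449 literals quoted above for the counter encoding.

```python
# Illustrative transcription of the counter encoding, Eqs. (3)-(7), for
# at-most-k over x_1..x_n.  Auxiliary variable r_{i,j} means
# "at least j of x_1..x_i are true".
def counter_at_most(n, k):
    x = list(range(1, n + 1))                              # x_1..x_n
    r = {(i, j): n + (i - 1) * k + j                       # r_{i,j}, i = 1..n-1, j = 1..k
         for i in range(1, n) for j in range(1, k + 1)}
    clauses = []
    for i in range(1, n):                                  # Eq. (3)
        clauses.append([-x[i - 1], r[(i, 1)]])
    for j in range(2, k + 1):                              # Eq. (4)
        clauses.append([-r[(1, j)]])
    for i in range(2, n):
        for j in range(1, k + 1):                          # Eq. (5)
            clauses.append([-r[(i - 1, j)], r[(i, j)]])
        for j in range(2, k + 1):                          # Eq. (6)
            clauses.append([-x[i - 1], -r[(i - 1, j - 1)], r[(i, j)]])
    for i in range(2, n + 1):                              # Eq. (7)
        clauses.append([-x[i - 1], -r[(i - 1, k)]])
    return clauses

cls = counter_at_most(32, 16)
print(len(cls), "clauses,", sum(len(c) for c in cls), "literals")
```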
Focusing on the region around approximate-at-most-25 of 30 (the gray line), the efficiencies are high at 24 and 26 but relatively low at 25. The models at 24 and 26 are based on \(h_{1}=2,w_{1}=4,h_{2}=2,w_{2}=2,m=2\), and 25 is based on \(h_{1}=2,w_{1}=3,h_{2}=2,w_{2}=3,m=2\). The former model seems efficient, but approximate-at-most-25 cannot be implemented with it. More specifically, that model converts at-most-6 on the top into approximate-at-most-24 of 32 on the bottom (the case of 24), and at-most-7 on the top into approximate-at-most-28 of 32 on the bottom (the case of 26). Since fixing bottom variables to false decreases the number of target variables, and fixing them to true decreases both the count and the number of target variables, approximate-at-most-25 of 30 cannot be generated from the former, more efficient model by any adjustment of fixed target variables. This makes approximate-at-most-25 relatively inefficient. ## 4 Discussion We studied the coverage, which indicates how far approximate-at-most-k covers the possible solutions. There are two types of coverage: overall coverage and maximum-count coverage; for at-most-8 of 16 variables, for example, these consider all assignments with 0\(\sim\)8 trues and only those with exactly 8 trues, respectively. In this paper, we have mainly focused on the former, because it aims at the entire solution space and seems to be the more comprehensive notion. However, when an at-most-k constraint is used as a soft constraint, we generally want to find the maximum-count case, such as exactly 8 trues for at-most-8. The maximum-count coverage will therefore be investigated in future work. If an approximate-at-most-k covers 50% of the possible solutions, then every possible solution is included in the approximate-at-most-k solutions with a probability of 50%. In a real-life problem there is usually a huge space of possible solutions; assuming 10 solutions exist, at least one of them is retained with a probability of 99.9% (\(1-0.5^{10}\)). From this point of view, we can expect more practical utility than the raw coverage percentage suggests, and a quantitative evaluation is required. ## 5 Conclusion This paper proposes a new method for efficiently encoding at-most-k constraints into SAT, called approximate-at-most-k, which consumes fewer Boolean expressions than conventional encodings. Approximate-at-most-k achieves this low consumption at the cost of completeness of the solution space, so it cannot be used to determine satisfiability. Meanwhile, it is useful for searching for better solutions together with soft constraints, using fewer Boolean expressions. The experimental results confirm that approximate-at-most-k consumes considerably fewer literals than conventional counter encoding. Considering the coverage of the solution space, we observed the relationship between the reduction rate of literals and the coverage; for example, approximate-at-most-16 of 32 consumes only 15% of counter encoding on the literal number and covers 44% of the solution space. In solving a real-world problem, approximate-at-most-k should be considered when there are massive soft constraints without sufficient computational resources.
2303.06675
LUKE-Graph: A Transformer-based Approach with Gated Relational Graph Attention for Cloze-style Reading Comprehension
Incorporating prior knowledge can improve existing pre-training models in cloze-style machine reading and has become a new trend in recent studies. Notably, most of the existing models have integrated external knowledge graphs (KG) and transformer-based models, such as BERT into a unified data structure. However, selecting the most relevant ambiguous entities in KG and extracting the best subgraph remains a challenge. In this paper, we propose the LUKE-Graph, a model that builds a heterogeneous graph based on the intuitive relationships between entities in a document without using any external KG. We then use a Relational Graph Attention (RGAT) network to fuse the graph's reasoning information and the contextual representation encoded by the pre-trained LUKE model. In this way, we can take advantage of LUKE, to derive an entity-aware representation; and a graph model - to exploit relation-aware representation. Moreover, we propose Gated-RGAT by augmenting RGAT with a gating mechanism that regulates the question information for the graph convolution operation. This is very similar to human reasoning processing because they always choose the best entity candidate based on the question information. Experimental results demonstrate that the LUKE-Graph achieves state-of-the-art performance on the ReCoRD dataset with commonsense reasoning.
Shima Foolad, Kourosh Kiani
2023-03-12T14:31:44Z
http://arxiv.org/abs/2303.06675v1
# LUKE-Graph: A Transformer-based Approach ###### Abstract Incorporating prior knowledge can improve existing pre-training models in cloze-style machine reading and has become a new trend in recent studies. Notably, most of the existing models have integrated external knowledge graphs (KG) and transformer-based models, such as BERT into a unified data structure. However, selecting the most relevant ambiguous entities in KG and extracting the best subgraph remains a challenge. In this paper, we propose the LUKE-Graph, a model that builds a heterogeneous graph based on the intuitive relationships between entities in a document without using any external KG. We then use a Relational Graph Attention (RGAT) network to fuse the graph's reasoning information and the contextual representation encoded by the pre-trained LUKE model. In this way, we can take advantage of LUKE, to derive an entity-aware representation; and a graph model - to exploit relation-aware representation. Moreover, we propose Gated-RGAT by augmenting RGAT with a gating mechanism that regulates the question information for the graph convolution operation. This is very similar to human reasoning processing because they always choose the best entity candidate based on the question information. Experimental results demonstrate that the LUKE-Graph achieves state-of-the-art performance on the ReCoRD dataset with commonsense reasoning. transformer-based model gated relational graph attention model cloze-style machine reading comprehension question answering ## 1 Introduction A common way to evaluate the capability of a computer's language understanding is the machine reading comprehension (MRC) task in natural language processing (NLP). To accomplish this task, the researchers have developed artificial intelligence models that can automatically answer the questions given in a document. Applying the transformer-based pre-trained models, such as BERT [1] and RoBERTa [2], has dramatically improved the performance of these models in public datasets. The intuition behind pre-trained models is that first understand the language and then be able to use it to do any downstream task in that language. Using the pre-trained models is very effective for supervised learning tasks such as MRC as there is not enough annotated data for training. Despite the tremendous success of the pre-trained models in MRC [1, 2, 3, 4, 5], there is still a real challenge to bridge the gap between the human performance and the models in datasets with deep commonsense reasoning, e.g., ReCoRD [6]. Since little supervised information is available during pre-training, such models do not have explicit knowledge of inference concepts. Therefore, the representations from these models fall short in their ability to support reasoning. Often, entities play an important role in such datasets, and, there is reasoning about the relationships between them. Hence, Yamada et al. [7] have developed a language understanding model with knowledge-based embeddings called LUKE. They have extended RoBERTa by appending entities to the input of the transformer model and considering them as independent tokens. It will enable LUKE to utilize an entity-aware self-attention mechanism to explicitly capture the connections between entities [8]. Although LUKE has archived impressive performance compared other transformer-based models, it is still not enough to distinguish between entities with the same strings and those that are unrelated. 
In other words, it does not take into account the intuitive relationships between entities in the long text. Most of the recent works [9; 10; 11; 12] have integrated external knowledge sources (e.g., WordNet and NELL) and BERT-large as a base model for handling commonsense reasoning in MRC. Therefore, they inject general knowledge graphs (KG) and pre-trained entity embeddings explicitly. This limits their ability to select the most relevant mentioned entities in KG, especially for the ambiguous words that, if not selected correctly, can create knowledge noise. Besides, choosing the best sub-graphs from external knowledge and using its information remains a challenge. To address the above challenge, in this work, we propose a transformer-based approach called LUKE-Graph, which uses pre-trained LUKE as a base model to output an entity-aware representation for each word and entity token in a document. Following BAG [13], we also transform all entities in the document into a graph where the nodes correspond to the entities and edges represent the relationships between them. We then import the entity representations from LUKE into a Relational Graph Attention Network named RGAT [14] to learn a relationship-aware representation of entities. In this way, we solve the aforementioned problem of intuitively relating entities by superimposing the LUKE and RGAT layers. Moreover, similar to [15], we extend RGAT with a gating mechanism that control the question information used in the graph convolution operation. The experimental results demonstrate that the LUKE-Graph achieves excellent performance on MRC with commonsense reasoning. We can summarize our contributions as: * We construct a heterogeneous graph based on the intuitive relationship between entities in a document as well as the reasoning on them without using any external KG. * We choose the LUKE model among transformer-based models such as BERT, RoBERTa, XLNET, etc. Because it can explicitly capture the relationships between entities by treating them as independent tokens. * We enrich the learned entity representation of the pre-trained LUKE model with RGAT, which proved valuable for ReCoRD dataset with commonsense reasoning. * We enhance RGAT to Gated-RGAT by incorporating the question information during reasoning through a question-aware gating mechanism that is very similar to human reasoning processing. ## 2 Related Work In this section, we briefly review recent works from three perspectives: **Transformer-based Models:** In recent years, the development of transformer architecture has created a revolution in NLP tasks by offering accuracy comparable to human performance or even higher. Hence, it becomes a basic architecture in the pre-trained language models such as BERT, RoBERTa, and XLNET [3]. Due to the promising results of the pre-trained models on downstream tasks, researchers [4; 16; 17; 18; 19] have extended them by defining some forms of sparse attention pattern to adopt the model for long documents. Longformer [4] has introduced a fixed-size window of attention surrounding each token to reduce computation. Since some tokens are critical for learning task-specific representations, e.g., query tokens in MRC, Longformer adds global attention to a few pre-selected input tokens. Similar to full attention [1; 2], the global tokens attend to all the input tokens, and all tokens in the input sequence attend to it. Ainslie et al. 
[18] named Extended Transformers Construction (ETC), which is closely related to Longformer, defining some new extra tokens as global that do not correspond to any input tokens. Zaheer et al. [19] built the BigBird model on the work of ETC, adding random connections between inputs to this structure. Also, Jia et al. [20] extracted keywords from the context to direct the model's focus to crucial details. However, the keywords don't make any sense together. Recently, researchers have proposed methods such as ERNIE [10], Know-BERT [21], and LUKE, which treat entities as independent tokens besides input words. ERNIE and Know-BERT enhanced contextualized word representations by learning static entity embeddings from an external knowledge source. Meanwhile, the LUKE model applies a new pretraining task to learn entity representations and it trains using a substantial corpus of entities annotated data. It also improves the transformer architecture using an entity-aware self-attention mechanism. As shown in recent work, the injection of prior knowledge information can significantly improve the existing pre-training models. Accordingly, we apply the LUKE pre-trained model to extract input features. **Graph-based Models:** It has proven graphs to be an effective way to represent complex relationships between objects and extract relational information [22]. Recently, researchers [13; 23; 24; 25] have studied graph-based models for MRC with commonsense reasoning. Entity-GCN [23] and BAG used Relational Graph Convolutional Networks (R-GCN) to comprehend the relationships of entities in text. Both of them built document-based entity graphs by matching entity candidate strings in the text and using them as nodes in the graph. The models represent nodes without fine-tuning by relying on token-level (GLoVe) and context-level (ELMo) features. Inspired by CFC [25], the HDEGraph model [24] applied both self-attention and co-attention to learn entity node representations. Besides, some studies regard the question context for the graph because it also provides crucial information that shouldn't be lost. Tang et al. [15] developed a path-based reasoning graph and used a gating mechanism to add the question context to RGCN. While Wu et al., [24] joined the subgraph with the entire question as a node and dynamically expanded subgraphs to efficiently utilize the question information and KB. These studies relate to our work because we also use the extended RGCN to capture reasoning chains between entities in a document and add the question information to the graph. Unlike all mentioned models that use a static entity representation, we use transformer-based models to obtain the abstract feature representations for each node. **Combined Graph and Transformer Models:** As mentioned, transformer-based models have achieved striking success in a wide range of NLP tasks. Also, GCN is the most effective architecture and has led to state-of-the-art performance on MRC tasks. Hence, a considerable number of studies such as Graph-BERT [27] and Graph Transformer [28] have generalized the transformer architecture on arbitrary graphs to take advantage of both. They first created subgraphs or original graphs from a given document and then passed the nodes to the transformer layer instead of words in the document. Therefore, they do not consider the information interaction over words in long documents. 
While our work incorporates the enriched representations of interaction between words of the document using a transformer-based model in the graph reasoning process. Some studies such as SKG-BERT [11], KT-NET [12], and KELM [9] utilize an external KG such as WordNet, ConceptNet and NELL and BERT-large as a base model. While we do not use any additional KG and only transform the document entities into a graph to consider the relationships between them. ## 3 Methods In this section, we describe our method and show its framework in Fig. 1. The LUKE-Graph method consists of two main components, namely a transformer-based module to extract the entity-aware representation, and a graph-based module to use the relation-aware representation. In the transformer-based module, we apply a pre-trained language model called LUKE that treats the words and entities in a given document as independent input tokens and outputs contextualized representations using an entity-aware self-attention mechanism. In the graph-based module, we construct a heterogeneous graph that relates the entities within a sentence and the same entities in different sentences. We then import the contextualized entity representations from the LUKE into two-layer RGAT to resolve the relationships between entities in the document before answering the questions. Moreover, we optimize each RGAT layer with a question-based gating mechanism and call it Gated-RGAT. Finally, we compute a score for each candidate entity using a linear classifier and select the candidate with the highest score as the final answer. ### Transformer-based Module **Embedding Layer:** Since using pre-trained language models is one of the most exciting directions for any task in NLP, we use the transformer-based model of LUKE. The model takes the following input sequence: [CLS] token, the tokenized questions (q1, q2,..., qn) along with a [PLC] special token in the position of missing entity (placeholder), two [SEP] tokens as a separator, the tokenized document (d1, d2,..., dm), [SEP] token. Further, it takes [MASK] token for the question placeholder and each candidate entity appearing in the document. Briefly, the form of input sequence is _"[CLS]\(+\)question\(+\)[SEP]\(+\)[SEP]\(+\)document\(+\)[SEP]\(+\)entities"_. We compute the input embedding vector of each token (word or entity) as the summation of the following three embeddings: token embeddings, position embeddings, and segment embeddings [1]. A token embedding is a numerical representation of the corresponding token learned during pretraining the LUKE model. Following RoBERTa, we generate word position embeddings based on the positions of the tokens in a sequence that provides a sense of order to the input data. While we average the position embeddings of the corresponding word tokens to derive entity position embeddings, as illustrated in Fig. 2. The segment embeddings represent the type of token, whether it is a word or an entity. **Transformer Layers with entity-aware Self-attention:** The main part of the transformer [29] is the self-attention mechanism that compares all input tokens by an attention score between each pair of them. Word and entity tokens alike undergo self-attention computations after layer embedding. It comprises three matrices: query, key, and value by multiplying the token embedding by three weight matrices that are trained during the training process. 
The only difference between the LUKE attention mechanism and the attention used in the original Transformer [29] is that a different query matrix is used by the LUKE based on token types. Formally, given a sequence of token embeddings \(X_{{}_{1}},X_{{}_{2}},...,X_{{}_{p}}\), where \(X_{{}_{i}}\in\mathbb{R}^{{}^{L}}\), we compute the output contextualized embeddings \(Y_{{}_{1}},Y_{{}_{2}},...,Y_{{}_{p}}\), where \(Y_{{}_{i}}\in\mathbb{R}^{{}^{H}}\) using the weighted sum of the input vectors transformed by the value matrix. \[\begin{split} Y_{{}_{i}}&=\sum_{j=1}^{p}a_{ij}VX_{{} _{i}}\\ a_{ij}&=\text{softmax}(\frac{QX_{{}_{i}}KX_{{}_{j}} ^{T}}{\sqrt{H}})\quad\text{where }Q=\begin{bmatrix}Q_{w2w}&\text{if }X_{{}_{i}}\in w,X_{{}_{j}}\in w\\ Q_{w2e}&\text{if }X_{{}_{i}}\in w,X_{{}_{j}}\in e\\ Q_{e2w}&\text{if }X_{{}_{i}}\in e,X_{{}_{j}}\in w\\ Q_{e2e}&\text{if }X_{{}_{i}}\in e,X_{{}_{j}}\in e\\ \end{bmatrix}\end{split} \tag{1}\] Figure 1: The framework of the LUKE-Graph method. Where \(Q\), \(K\), \(V\)\(\in\)\(\mathbb{R}^{H\times L}\) represent the query, key, and value matrices, respectively. To calculate the attention score \((\alpha_{ij})\) among types of tokens, words \((w)\), or entities \((e)\), there are four query matrices. P and H denote the length of the input sequence, and the dimension of hidden states, respectively. For better understanding, we visualize the multiplication of Q in K in Fig. 3. ### Graph-based Module **Entity Graph Construction:** let an entity graph be denoted as \(\text{G}=\{\text{V},\text{ E}\}\), where V indicates node representations and E is the edges between nodes. All the entities specified in the document along with the missing entity of question are used as nodes. Undirected edges are determined between the pairs of nodes that are located in the same sentence (SENT-BASED edge) and those with the same entity string in different sentences (MATCH edge). Furthermore, the node corresponding to the missing entity of the question is connected to all other nodes (PLC edge). See Fig. 4 for an illustration of graph construction. Inspired by the BAG model [13], we build a graph with two differences as follows: 1. Defining the relationship between entities within and across sentences instead of paragraphs. Due to the paragraph texts being long, most of the entities are incorrectly connected in the BAG model. While generally, the entities that appear in a sentence are related to each other. 2. Assigning a node for the missing entity of the question (placeholder) and its relationship with all nodes in the graph, which is ignored in the BAG model. As proven in experiments, the placeholder node plays a significant role in the graph (as shown in Section 4.4), and important information for commonsense reasoning is lost without it. **Gated-RGAT Layers:** To aggregate the information between the entities on a graph structure, we used one of the most popular GCN architectures named RGAT [14]. It extends the attention mechanisms of the GAT layer in Velickovic et al. [30] to the relational graph domain from Schlichtkrull et al [31]. For this purpose, we import the entity representations of the LUKE model into the RGAT layers. For each layer, we define the \(i\)-th node by a representation of \(Y_{i}\in\mathbb{R}^{L}\) as an input and an updated representation of \(Z_{i}\in\mathbb{R}^{L^{i}}\) as output. Moreover, the edges as mentioned above and their relation types (MATCH, SENT-BASED, or PLC) are considered inputs. 
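As an illustration of this construction, the sketch below builds the edge lists consumed by the graph module from a list of entity mentions. It is a simplified stand-in for the actual preprocessing: the mention format, node ordering, and relation-type numbering (0 = SENT-BASED, 1 = MATCH, 2 = PLC) are assumptions, with node 0 reserved for the question placeholder.

```python
# Illustrative entity-graph construction: each mention is (entity_string, sentence_id).
import torch

def build_entity_graph(mentions):
    src, dst, rel = [], [], []

    def add_edge(i, j, r):                  # undirected graph: add both directions
        src.extend([i, j]); dst.extend([j, i]); rel.extend([r, r])

    n = len(mentions) + 1                   # +1 for the placeholder node (index 0)
    for a in range(len(mentions)):
        for b in range(a + 1, len(mentions)):
            ent_a, sent_a = mentions[a]
            ent_b, sent_b = mentions[b]
            if sent_a == sent_b:            # same sentence -> SENT-BASED edge
                add_edge(a + 1, b + 1, 0)
            elif ent_a == ent_b:            # same string, different sentence -> MATCH edge
                add_edge(a + 1, b + 1, 1)
    for a in range(len(mentions)):          # placeholder connects to every node -> PLC edge
        add_edge(0, a + 1, 2)

    edge_index = torch.tensor([src, dst], dtype=torch.long)
    edge_type = torch.tensor(rel, dtype=torch.long)
    return n, edge_index, edge_type

# Hypothetical example: "Puerto Rico" mentioned in sentences 0 and 3, "Congress" in sentence 0.
n_nodes, edge_index, edge_type = build_entity_graph(
    [("Puerto Rico", 0), ("Congress", 0), ("Puerto Rico", 3)])
print(n_nodes, edge_index.shape, edge_type.shape)
```

The resulting edge_index and edge_type tensors are the relational inputs referred to above.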
In an RGAT layer, an attention score is computed for every edge \((i,j)\) and relation type \(r\) named \(a_{ij}^{(r)}\), which indicates the importance of the features of the neighbor \(j\) to the node \(i\), i.e. Figure 3: Visualizing the multiplication of Q in K in entity-aware self-attention mechanism. Figure 2: Embedding layer. **Document: Puerto Rico on Sunday overwhelmingly voted for statehood. But Congress, the only body that can approve new states, will ultimately decide whether the status of the US commonwealth changes. Ninety-seven percent of the votes in..., official results from the State Electoral Commission show. Today, we the people of Puerto Rico are sending a strong and clear message to the US Congress... and to the world... claiming our equal rights as American citizens, Puerto Rico Gov. Ricardo Rossello said in a news release and Puerto Rico voted Sunday in favor of statehood** **Question: For one, they can truthfully say, "Don't blame me, I didn't vote for them," when discussing the PLC [] presidency.** \[h_{i}^{(r)}=W^{(r)}Y_{i}\] \[e_{\bar{q}}^{(r)}=\text{LeakyReLU}(h_{i}^{(r)}.Q^{(r)}+h_{j}^{(r) }.K^{(r)}) \tag{2}\] \[a_{\bar{q}}^{(r)}=\frac{\exp(e_{\bar{q}}^{(r)})}{\sum_{r^{\prime} \in\Re}\sum_{k\in N_{r}(i)}\exp(e_{\bar{q}}^{r^{\prime}})}\] Where \(W^{(r)}\in\mathbb{R}^{L^{\prime},r},Q^{(r)}\in\mathbb{R}^{L^{\prime}\!\!=\!\! \!D}\), and \(K^{(r)}\in\mathbb{R}^{L^{\prime}\!\!=\!\!\!D}\) are the learnable parameters for linear transformation, query, and key attention kernels, respectively. \(\Re\) represents the set of relations types and \(N(i)\) denotes the number of nodes connected to \(\hat{\mathbf{i}}\). Once acquired, the aggregation step is combined with the normalized attention scores to determine an updated representation for each node as follows: \[Z_{i}^{\prime}=\sigma(\sum_{r\in\Re}\sum_{j\in N(i)}\mathbf{a}_{\bar{q}}^{(r)}\bm {h}_{j}^{(r)}) \tag{3}\] Where \(\sigma\) is a nonlinearity (we use ELU). Similar to [29], RGAT uses multi-head attention to strengthen self-attention. In particular, Equation 3 is applied to the K independent attention mechanisms, which are then concatenated to produce the output representation as follows: \[Z_{i}^{\prime}=\bigoplus_{k=1}^{K}\sigma(\sum_{r\in\Re}\sum_{j\in N(i)}\mathbf{a }_{\bar{q}}^{(r,k)}\mathbf{h}_{j}^{(r,k)}) \tag{4}\] Where \(\oplus\) denotes concatenation, \(\mathbf{a}_{\bar{q}}^{(r,k)}\) are the normalized attention scores under relation r and computed by the \(k\) - th attention mechanism, and \(\mathbf{h}_{\bar{q}}^{(r,k)}=\mathbf{W}^{(r,k)}\mathbf{Y}_{i}^{\prime}\). Since the use of multi-head attention creates of \(\textit{KL}^{\prime}\) features instead of \(L^{\prime}\) for each node, we use averaging instead of concatenation in the last layer of RGAT. Inspired by [15], we add a question-aware gating mechanism to RGAT that adjust the update message based on the question. This gating mechanism, like human reasoning processing, considers the question when choosing the entity information. Fig. 5 illustrates the question-aware gate for each node. For this purpose, first we create a question Figure 4: the right side (a) indicates a constructed heterogeneous graph for the left side sample (b). Each ellipse in the document ellipse illustrates nodes belonging to a sentence. The same-colored nodes denote that they correspond to an identical entity. While the normal dashed lines connecting the node pairs correspond to MATCH edges. The solid lines connecting the node pairs correspond to SENT-BASED edges. 
The dash-dot lines connect the placeholder node (node corresponding to the missing entity of the question that is yellow in the graph) and all other nodes correspond to PLC edges. Note, we ignore the connection of the placeholder node with all other nodes for good visualization. representation \((q_{i})\) for every node \(i\) by a weighted sum of the question vectors encoded by LUKE, i.e. \[q_{i}=\sum_{j=1}^{n}w_{ij}Y_{qj}\] \[w_{ij}=\sigma(f_{g}([Z^{\prime}_{i};Y_{qj}])) \tag{5}\] Where \(Y_{qj}\) denotes the representation of \(j\)-th question token encoded by LUKE, \(\sigma\) is the sigmoid function, \([Z^{\prime}_{i};Y_{qj}]\) is a concatenation of \(Z^{\prime}_{i}\) and \(Y_{qj}\), and \(f_{g}(.)\) represents a single-layer multilayer perceptron (MLP). Finally, we update the message of \(Z^{\prime}_{i}\) based on the question representation as follows: \[\alpha_{i}=\sigma(f_{s}([Z^{\prime}_{i};q_{i}]))\] \[Z_{i}=\alpha_{i}\odot\tanh(q_{i})+(1-\alpha_{i})\odot Z^{\prime}_{i} \tag{6}\] Where \(f_{g}(.)\) is a single-layer MLP, \(\sigma\) denotes the sigmoid function, and \(\bigodot\) is element-wise multiplication. ### Score Accumulation We concatenate the final representation of the placeholder (PLC) with each candidate entity and then use a linear classifier to compute the candidate score. We use binary cross-entropy loss averaged across all candidates to train the model, and we select the candidate with the highest score as the final answer \(e^{*}\): \[e^{*}=\underset{e\in E}{\text{argmax}}\ f_{o}([Z_{PLC};Z_{e}]) \tag{7}\] Where \(E\) denotes the set of entity candidates, \(f_{o}(.)\) is a linear classifier, and \([Z_{PLC};Z_{e}]\) is a concatenation of the PLC representation \((Z_{PLC})\) and the candidate representation \((Z_{e})\). ## 4 Experiments In this section, we validate the effectiveness of the proposed method on a cloze-style reading comprehension dataset Figure 5: Question-aware gate for node \(i\). and compare it with other state-of-the-art methods. ### Implementation Details Our transformer-based module configuration follows the LUKE model. The hyper-parameters used in LUKE are shown in Table 1. The module structure follows the large LUKE model, which includes 24 hidden layers, 1024 hidden dimensions (\(L=1024\)), 64 attention head dimensions (\(H=64\)), and 16 self-attention heads. Therefore, a word token embedding has 1024 dimensions, while an entity token embedding has 256 dimensions. With a dense layer, the entity token embedding dimension is converted from 256 to 1024. We tokenize the input text using RoBERTa's tokenizer, which has a vocabulary of 50K words. Also, there are 500K common entities in the entity vocabulary along with two special entities, namely [MASK] and [UNK]. When an entity is missing from the vocabulary, the [UNK] entity is used in its place. Each entity's token embedding is initialized during finetuning using the [MASK] entity. Note the [MASK] word for the masked language model and the [MASK] entity for the masked entities in the pretraining task are the two [MASK] tokens that the LUKE employs. When lifting weights from pre-trained LUKE, the original query matrix \(Q\) serves as the initialization matrix for the query matrices (\(Q_{\omega_{z\times 2}},Q_{\omega_{z\times 2}},Q_{\omega_{z\times 2}}\)) in the self-attention mechanism. In our graph-based module, we randomly initialize a two-layer RGAT along with the ELU activation function. 
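A compact way to realize this two-layer gated module with PyTorch Geometric is sketched below. It is an illustrative reimplementation under stated assumptions (an elementwise gate, per-head widths chosen so the concatenated first layer keeps 1024 dimensions, and PyG's RGATConv), not the code used for the reported experiments.

```python
# Illustrative Gated-RGAT: two relational graph attention layers with ELU,
# each followed by the question-aware gate of Eqs. (5)-(6).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import RGATConv

class QuestionGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f_g = nn.Linear(2 * dim, 1)     # scores each question token per node (Eq. 5)
        self.f_s = nn.Linear(2 * dim, dim)   # mixing gate alpha_i (Eq. 6)

    def forward(self, z, y_q):
        # z: [num_nodes, dim]; y_q: [num_question_tokens, dim]
        n, m = z.size(0), y_q.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, m, -1),
                           y_q.unsqueeze(0).expand(n, m, -1)], dim=-1)
        w = torch.sigmoid(self.f_g(pairs)).squeeze(-1)          # w_ij
        q = w @ y_q                                             # q_i = sum_j w_ij Y_qj
        alpha = torch.sigmoid(self.f_s(torch.cat([z, q], -1)))
        return alpha * torch.tanh(q) + (1 - alpha) * z

class GatedRGAT(nn.Module):
    def __init__(self, dim=1024, num_relations=3):
        super().__init__()
        self.conv1 = RGATConv(dim, dim // 8, num_relations, heads=8)  # concatenated -> dim
        self.conv2 = RGATConv(dim, dim, num_relations, heads=1)
        self.gate1 = QuestionGate(dim)
        self.gate2 = QuestionGate(dim)

    def forward(self, x, edge_index, edge_type, y_q):
        z = F.elu(self.conv1(x, edge_index, edge_type))
        z = self.gate1(z, y_q)
        z = F.elu(self.conv2(z, edge_index, edge_type))
        return self.gate2(z, y_q)
```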
The first layer has 8 attention heads with 1024 hidden state size, while the second one has 1 head with 1024 hidden size (\(L^{\prime}=1024\)). Also, the number of relation types in our experiment is set to \(\Re=3\) and the dimensionality of the query and key are both \(\text{D}=1\). We implemented the graph-based module using the PyTorch Geometric library1[32]. We use AdamW to optimize the model based on performance on the development set. Footnote 1: [https://pytorch-geometric.readthedocs.io](https://pytorch-geometric.readthedocs.io) Since we use the LUKE model as a baseline, we apply our method to its official repository2. We run our method using an Amazon EC2 p3.8xlarge instance with four GPUs of Tesla V100 SXM2 16 GB. A single model trained with 2 batch sizes and 2 epochs takes about 2 hours on the ReCoRD dataset. While the LUKE paper conducted the experiments on a server with eight V100 GPUs and two Intel Xeon E5-2698 v4 CPUs. Table 2 shows the different configurations between ours and LUKE. For a fair comparison, we run the official LUKE repository without any change on our server and achieved a lower development score compared to the reported one in the paper (See Table 2, Dev score column). Hence, we compare the LUKE model using the results obtained by the official LUKE code in the following. Footnote 2: [https://github.com/studio-ousia/luke](https://github.com/studio-ousia/luke) ### Dataset Our experiments are conducted on a cloze-style reading comprehension dataset named ReCoRD [6]. It is a challenging dataset that requires commonsense reasoning to answer the cloze-style questions. ReCoRD contains 120k automatically generated questions and documents from 70,000 CNN and Daily Mail news articles which have been validated by experts. Each cloze-style question is formulated as a sentence with the missing entity named placeholder. Therefore, an MRC system is expected to find a correct entity as an answer. In other words, the appropriate answer is chosen from among all entities in the related document to fill the placeholder in the question. Formally, given a document \(D\) describing an event, a cloze-style question \(Q([PLC])\) with a missing entity indicated by \([PLC]\), and a set of entity candidates \(E\) marked in \(D\), an MRC system is expected to choose an entity \(e\in E\) that best fits \([PLC]\), i.e., \[e^{*}=\underset{e\in E}{\text{argmax}}\ p(Q(e)\mid D) \tag{8}\] Where \(\text{argmax}\ p(.)\) determines the most probable entity as \(e^{*}\). Fig. 4(a) shows a ReCoRD example which \(E\) are indicated by bold color text in the document. There are about 100k samples in the training set, 10k samples in the development set, and 10k samples in the test set on ReCoRD. Since in this dataset, the test set is not publicly released, we compare models on the development set. Models are evaluated on the basis of the Exact Match (EM) and F1 criteria. The EM criterion is a binary number that indicates whether the answer produced by the model exactly matches the correct answer. The F1 measure calculates the amount of word overlap between the answer generated by the model and the actual correct answer. Human performance in ReCoRD has reached 91.28 EM and 91.64 F1 in the development set. ### Results We compare our method (LUKE-Graph) to current state-of-the-art (SOTA) transformer-based and graph-based models on the ReCoRD dataset: BERT [1], Graph-BERT [27], SKG-BERT [11], KT-NET [12], XLNet-Verifier [33], KELM [9], RoBERTa [2], and LUKE [7]. 
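For completeness, the two evaluation criteria used throughout can be computed as in the following sketch (a standard whitespace-token implementation, not necessarily identical to the official ReCoRD evaluation script, which additionally normalizes punctuation and articles).

```python
# Illustrative EM and token-overlap F1 for scoring a predicted answer string.
from collections import Counter

def exact_match(pred: str, gold: str) -> int:
    return int(pred.strip().lower() == gold.strip().lower())

def f1(pred: str, gold: str) -> float:
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Puerto Rico", "puerto rico"), round(f1("Puerto Rico Gov.", "Puerto Rico"), 3))
```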
We have selected only works related to our method and others are available on the leaderboard3. Their scores are taken directly from the leaderboard and literature [7, 9]. As shown in Table 3, all the models are based on a single model. Footnote 3: [https://sheng-z.github.io/ReCoRD-explorer/](https://sheng-z.github.io/ReCoRD-explorer/) Among transformer-based models, results on the dev set of ReCoRD show that the LUKE-Graph (ours) outperforms the former SOTA by +0.3 F1/+0.6 EM. This improvement demonstrates the effectiveness of adding a relational graph with attention heads and a question-aware gate on the LUKE model. As you can see from the results of transformer-based models, i.e., BERT, XLNET, RoBERTa, and LUKE, in Table 3, LUKE remarkably outperforms BERT-large by +19.2 F1/+19.3 EM and XLNET by +8.5 F1/+9.1 EM. While LUKE has lifted from RoBERTa and the only difference is adding entities to the inputs and using an entity-aware self-attention mechanism, it significantly achieves a gain of +0.6 F1/+0.6 EM over RoBERTa. Hence, due to the impressive results of the LUKE model, we choose it as our transformer-based model. However, it lacks a fair comparison of the transformer-based models, due to the computing and pretraining resources being different [34]. Among graph-based models, i.e., SKG-BERT, KT-NET, and KELM that use an external knowledge graph and BERT-large as a base model, the LUKE-Graph (ours) does not use any additional knowledge graph. However, experimental results demonstrate the LUKE-Graph offers a 18.7/19, 16.7/18.2, and 14.8/15 improvement in F1/EM over SKG-BERT, KT-NET, and KELM, respectively. Nevertheless, we note that the LUKE-Graph (ours) achieves the best performance compared to the other methods. To better visualization of the improvement process of the models based on the reported date in the leaderboard, we have illustrated the dev F1/EM results of the models in the chart forms in Fig. 6. ### Ablation study We perform several ablation studies on the ReCoRD development set to investigate the contribution of each module to the best model. **Effect of graph module:** we investigate the impact of the stacking graph module on the LUKE model, as shown in Table 4. As we argued in the results section, the performance drops 0.4 on F1 and 0.55 on EM without the graph module. This proves the efficacy of taking into account the intuitive relations between entities on the performance of our method. Moreover, unlike other similar methods [13, 23, 35], we apply RGAT with a multi-head attention and a question-aware gating mechanism. However, the performance on F1/EM falls off 0.15/0.14, and 0.1/0.12 without the attention mechanism and without the question-aware gate in the graph network, respectively. Furthermore, we investigate the effect of removing the relational type in the graph and processing it by the RGAT module. If we treat all edges equally without distinguishing them by type (w/o relation type in Table 4), the F1/EM results degrade by 0.2/0.22, indicating that different information encoded by different types of edges is important to maintain good performance. We then inspect the impact of each type of relationship by removing each of them independently. Indeed, the nodes still exist in the graph and we only ablate corresponding edges between them. 
we either remove edges \begin{table} \begin{tabular}{l|c} \hline **Name** & **Value** \\ \hline Max Seq length & 512 \\ Max question length & 90 \\ Warmup ratio & 0.06 \\ Learning rate decay & linear \\ Weight decay & 0.01 \\ Adam \(\beta\)1 & 0.9 \\ Adam \(\beta\)2 & 0.98 \\ Adam \(\epsilon\) & 1e-6 \\ \hline \end{tabular} \end{table} Table 1: Hyper-parameters of our transformer-based module. \begin{table} \begin{tabular}{l c c c c c c c} \hline **Name** & **Number** & **Training** & **Training** & **Learning** & **Eval** & **Dev Score** \\ **of GPUs** & **epochs** & **time** & **rate** & **batch size** & **F1** & **EM** \\ \hline LUKE [7] & 8 & 2 & 92min & 1e-5 & 32 & 91.4 & 90.8 \\ LUKE with Ours settings & 4 & 2 & 120min & 1e-5 & 32 & 90.96 & 90.4 \\ \hline \end{tabular} \end{table} Table 2: Comparing the details of our configuration with the LUKE paper due to our limitations of computational resources. between those pairs of nodes that are in the same sentences (SENT-BASED edge), connections between those matching exactly (MATCH edge), or edges between the question placeholder and all other nodes (PLC edge). As you can see in Table 4, it seems that the placeholder connections (PLC edges) play a more major role compared to other connections. Because the missing entity (placeholder node) in question is supposed to be filled with candidate nodes and such an important connection is lost without the PLC edge. **Impact of Different Graph Neural Networks:** we investigate the proposed model using the most popular GNN architectures in our graph module: RGAT [14], GATv2 [36], GAT [30], RGCN [31], and GCN [37]. Table 5 shows the effect of these architectures. All the experiments were performed using PyTorch Geometric. GCN is the most commonly used architecture in real-life applications that introduced an efficient layer-wise propagation rule for neural network models. However, it did not perform well in our experiments (-2.71 F1/-3.16 EM) due to not considering the importance between nodes and the type of connection between them. GAT extends GCN by considering a learnable attention mechanism to select the most relevant neighbors of each node. Therefore, GAT offers a 1.94 improvement in F1 over GCN in our model. GATv2 fixes the static attention problem of GAT by only modifying the order of attention operations in GAT. Hence, every node can attend to any other node in GATv2. In our experiment, GATv2 is more accurate than GAT and achieves a gain of +0.6 F1 over GAT. Additionally, Relational GCN (RGCN) and Relational GAT (RGAT) have been proposed as an extension of the previously mentioned models to the relational graph domain. With the difference that, in addition to considering the local relational structure, RGAT benefits from giving importance to different nodes dynamically and outperforms RGCN by +0.08 F1/+0.11 EM. Overall, the comparison of results in Table 5 demonstrates the power of RGAT in our method. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**Dev**} & \multicolumn{2}{c}{**Test**} \\ \cline{2-5} **Name** & **F1** & **EM** & **F1** & **EM** \\ \hline Human & 91.64 & 91.28 & 91.69 & 91.31 \\ BERT-Base [1] & - & - & 56.1 & 54.0 \\ BERT-Large\({}^{\text{+}}\)[1] & 72.2 & 70.2 & 72.0 & 71.3 \\ Graph-BERT [27] & - & - & 63.0 & 60.8 \\ SKG-BERT\({}^{\text{+}}\)[11] & 71.6 & 70.9 & 72.8 & 72.2 \\ KT-NET\({}^{\text{+}}\)[12] & 73.6 & 71.6 & 74.8 & 73.0 \\ XLNet-Verifier\({}^{\text{+}}\)[33] & 82.1 & 80.6 & 82.7 & 81.5 \\ KELM\({}^{\text{+}}\)[9] & 75.6 & 75.1 & 76.7 & 76.2 \\ RoBERTa\({}^{\text{+}}\)[2] & 89.5 & 89.0 & 90.6 & 90.0 \\ \hline LUKE\({}^{\text{+}}\)[7] & 90.96\({}^{\text{+}}\) & 90.4\({}^{\text{+}}\) & 91.2 & 90.6 \\ LUKE-Graph (ours) & **91.36** & **90.95** & **91.5** & **91.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the ReCoRD dataset. All models are based on a single model. Note, missing results are indicated by “-”. [\(\pm\)] Results reported in KELM paper, [+] Results reported in LUKE paper, [*] Results obtained by official LUKE code for a fair comparison, and others are taken from ReCoRD leaderboard. Figure 6: Visualization of the improvement process of the models based on the reported date in the ReCoRD leaderboard for Dev F1 (a) and EM (b) results. The LUKE-Graph denotes our method. ## 5 Conclusion and Future Work This paper presents the LUKE-Graph for MRC with commonsense reasoning, which leverages LUKE by considering the intuitive relationships between entities in long texts. Recent studies have integrated external knowledge and textual context into a unified data structure for processing commonsense reasoning. However, there is a challenge when it comes to selecting the best subgraph and the most relevant entity in KGs, especially for ambiguous words. Here, the LUKE-Graph differs from these studies; it converts the entities in the document and their relations into a heterogeneous graph. We also use RGAT to improve the entity representation encoded by LUKE using graph information. Furthermore, we optimize RGAT to Gated-RGAT by incorporating the question information during reasoning through a gating mechanism. The enrichment of the contextual representation of the pre-trained LUKE with Gated-RGAT proved valuable for the ReCoRD dataset and resulted in a 0.6% improvement of EM over the baseline LUKE. As Future works, we will verify our model on more challenging datasets with multi-hop reasoning problems, and we would like to incorporate the importance of the relationship between entities in some way rather than using a separate graph module.
2303.09546
Quasi-similarity, entropy and disjointness of ergodic systems
Answering Vershik's question we show that quasi-similarity does not conserve the entropy, proving quasi-similarity of all Bernoulli actions of a countable infinite group. We prove also the following generalization of Pinsker's theorem: the actions with zero Kirillov-Kushnirenko $P$-entropy and the actions with completely positive $P$-entropy are disjoint. Poisson suspensions are used as examples.
Valery V. Ryzhikov, Jean-Paul Thouvenot
2023-03-16T17:56:17Z
http://arxiv.org/abs/2303.09546v3
# Quasi-similarity and joining-stable invariants of ergodic actions ###### Abstract Answering Vershik's question we show that quasi-similarity does not conserve the entropy, proving quasi-similarity of all Bernoulli actions of the groups with an element of infinite order. We produce joining-stable invariants considering zero \(P\)-entropy systems and prove an analog of Pinsker's theorem establishing the disjointness of such systems with the actions of completely positive \(P\)-entropy. Then we apply Poisson suspensions to get a class of the corresponding examples. ## 1 Introduction In ergodic theory, there are several types of equivalences for measure-preserving actions, let us mention some of them in descending order of their strength: _isomorphism, weak isomorphism, quasi-similarity, spectral isomorphism._ The isomorphism means that actions are conjugate by an invertible measure-preserving transformation. The weak isomorphism of the two systems implies that each of them is isomorphic in the above sense to a factor of the other. Recall that the factors are the restrictions of the action to some invariant sigma-algebras. Quasi-similarity of two actions means the existence of an injective Markov operator with a dense image that intertwines these actions. Spectral isomorphism implies the presence of a unitary intertwining operator. For example, all Bernoulli automorphisms are spectrally isomorphic (have countably multiple Lebesgue spectrum), but, as Kolmogorov showed, they can be distinguished by entropy. Sinai proved that Bernoulli automorphisms with the same entropy are weakly isomorphic, and Ornstein proved their isomorphism (for these classical results, see, for example, [1]). Vershik proposed the concept of quasi-similarity [2], in connection with this several problems aroed. One of them is solved by Fraczek and Lemanczyk [3]: they provided two automorphisms that are quasi-similar but not weakly isomorphic. The question of the invariance of entropy under the quasi-similarity remained open. In this note we answer this question in negative showing that all Bernoulli systems are quasi-similar via Bernoulli joinings from [4]. Let an action \(T\) possess an inariant sigma-algebra, the restriction of \(T\) to this algebra is called factor. We say \(S\)-factor, if this factor is isomorphic to an action \(S\). The theory of joinings can be considered as the theory of the representations in a measure-preserving action a system of \(S_{i}\)-factors for a given collection of actions \(S_{i}\). An invariant of a dynamical system is called _joining-stable_ if any action generated by an arbitrary collection of factors with this invariant possesses it too. The property _to have zero entropy_ is joining-stable. Developing ideas from [7] and [8] we consider actions with zero \(P\)-entropy and completely positive \(P\)-entropy, give examples of the latter is the class of deterministic actions. Using joining stability for zero \(P\)-entropy, we prove an analog of Pinsker's theorem on disjointness of an action with zero \(P\)-entropy from any action with completely positive \(P\)-entropy. ## 2 Quasi-similarity of Bernoulli actions **Theorem 2.1**.: _All Bernoulli actions of a given countable group with an element of infinite order are quasi-similar._ Proof. 
Let \(S\) be the Bernoulli scheme of type \[\left(a,\frac{1}{2}-a,\frac{1}{2}\right),\ \ 0<a<1/2.\] It has the following evident Bernoulli factors \(S_{P}\), \(S_{Q}\), where \(S_{P}\) is of types \((a,1-a)\) and \(Q\) is of types \(\left(\frac{1}{2},\frac{1}{2}\right)\). These factors generate our system \(S\). Let \(E^{Q}\) be the orthogonal projection onto the space of the factor \(S_{Q}\) (this space is denoted by \(L_{2}(Q)\)). We define an operator \(\mathbf{J}\) setting \(\mathbf{J}=E^{Q}\left|{}_{L_{2}(P)}\right.\) Then \(\mathbf{J}\) is a Markov injective operator that intertwines factors \(S_{P}\) and \(S_{Q}\), moreover \(\mathbf{J}L_{2}(P)\) is dense in \(L_{2}(Q)\). So the factors \(S_{P}\), \(S_{Q}\) are quasi-similar. The idea of constructing such \({\bf J}\) was taken from Lemma 4 [4]. Below we present the above for the reader who is not familiar with Bernoulli joinings. Let us consider the partitions \[\xi=\{A,X\setminus A\},\ \ 0<\mu(A)=a<1/2,\] and \[\beta=\{B,X\setminus B\},\ \ \mu(B)=1/2,\] where \[X=[0,1],\ \ A=[0,a],\ \ B=[0,1/2].\] We define a Markov operator \(J:L_{2}(X)\to L_{2}(X)\), which is an integral operator with the following \(\xi\times\beta\)-measurable kernel \(K(x,y)\): \(K(x,y)=0,\ \ (x,y)\in A\times B;\ \ \ \ \ K(x,y)=\frac{1}{1-a},\ \ \ (x,y)\in(X \setminus A)\times B;\) \(K(x,y)=2,\ \ (x,y)\in A\times(X\setminus B);\ \ \ \ K(x,y)=\frac{1-2a}{1-a},\ \ (x,y)\in(X \setminus A)\times(X\setminus B).\) Recall that, by defenition, Markov operators \(J\) preserve the positivity of functions, moreover, under the action of \(J\) and \(J^{*}\) the constants remain fixed. Let \(S\) be the standard shift in the space \(X^{\mathbf{Z}}\), let \(\mathcal{A}\) denote the sigma-algebra generated by the sets \[S^{n}(\cdots\times X\times X\times A\times X\times X\dots),\] and \(\mathcal{B}\) - the sigma-algebra generated by \[S^{n}(\cdots\times X\times X\times B\times X\times X\dots).\] The restriction of \(S\) to \(\mathcal{A}\) is the Bernoulli automorphism \(T_{a}\) with entropy equal to \(H(\xi)=-a\log_{2}a-(1-a)\log_{2}(1-a)\), the restriction of \(S\) to \(\mathcal{B}\) is the Bernoulli automorphism \(T\) with entropy equal to \(H(\beta)=1\). Now we consider a Markov operator \(\mathbf{J}_{a}:L_{2}(X^{\mathbf{Z}},\mu^{Z})\to L_{2}(X^{\mathbf{Z}},\mu^{Z})\) setting \[\mathbf{J}_{a}=\bigotimes_{z\in\mathbf{Z}}J=\cdots\otimes J\otimes J\otimes J \otimes\dots.\] The operator \(\mathbf{J}_{a}\) intertwines \(T_{a}\) with \(T\): \[T\mathbf{J}_{a}=\mathbf{J}_{a}T_{a}.\] The image of \(\mathbf{J}_{a}\) is dense: \[\overline{\mathbf{J}_{a}L_{2}(\mathcal{A})}=L_{2}(\mathcal{B}),\] and \[Ker\mathbf{J}_{a}=0,\] since tensor powers of injective \(J\) are injective as well. Thus, for all \(a\), \(0<a<1/2\), we get that \(T_{a}\) is quasi-similar to \(T\). Since \(T_{a}^{n}\) is quasi-similar to \(T_{a^{\prime}}^{n}\), \(0<a^{\prime}<1/2\), for any \(c,c^{\prime}>0\) we find Bernoulli automorphisms \(T\),\(T^{\prime}\) such that \(h(T)=c\), \(h(T^{\prime})=c^{\prime}\) and \(T\),\(T^{\prime}\) are quasi-similar. Note also that the operators \[{\bf J}_{a}\otimes{\bf J}_{a^{2}}\otimes{\bf J}_{a^{3}}\otimes\ldots\] intertwine finite entropy products of Bernoulli automorphisms with the infinite entropy automorphism \(T\otimes T\otimes T\otimes\ldots\). Due to Ornstein's isomorphism theorem we complete the proof for \({\bf Z}\)-action. 
For the group actions we literally repeat the above reasoning (considering \((X^{G},\mu^{G})\) instead of \((X^{\bf Z},\mu^{\bf Z})\)) and use Stepin's generalization [5] (Theorem 1) of the mentioned result of Ornstein. The following problem becomes relevant: _are there \(K\)-automorphisms that are not quasi-similar?_ ## 3. Joining-stable invariants and \(P\)-entropy Recall that an invariant of a dynamical system is joining-stable if any dynamical system generated by a collection of factors with this invariant inherits it. Examples of such invariants are zero entropy and zero entropy along a sequence (called the Kushnirenko entropy, see [7]). Note that the \(K\)-property and even the continuity of the spectrum are not hereditarily stable invariants. Two Bernoulli factors can generate a system with a factor isomorphic to a given ergodic system with zero entropy [6]. **Theorem 3.1.**_Let the system \(T\) have a joining-stable property, and the system \(S\) is such that it and any of its non-trivial factors do not have this property. Then \(S\) and \(T\) are disjoint._ Proof. If the factors \(S\) and \(T\) are not disjoint, there is a system generated by a countable family \(T\)-factors that has a non-trivial factor isomorphic to a factor of \(S\). This assertion is term of joining can be found in [9], lemme 3.2. From the conditions of theorem we see that such non-trivial factor cannot exist. Thus, \(S\) and \(T\) must be disjoint. \(P\)-entropy.We consider the following modification [8] of the Kushnirenko entropy. Let \(P=\{P_{j}\}\) be a sequence of finite subsets in a countably infinite group \(G\). For a measure-preserving action \(T=\{T_{g}\}\) of the group \(G\), we define \[h_{j}(T,\xi)=\frac{1}{|P_{j}|}H\left(\bigvee_{p\in P_{j}}T_{p}\xi \right),\] \[h_{P}(T,\xi)=\limsup_{j}\,h_{j}(T,\xi),\] \[h_{P}(T)=\sup_{\xi}h_{P}(T,\xi),\] where \(\xi\) denotes a finite measurable partition of the space \(X\), and \(H(\xi)\) is the entropy of the partition \(\xi\): \[H(\{C_{1},C_{2},\ldots,C_{n}\})=-\sum_{i=1}^{n}\mu(C_{i})\ln\mu(C_{i}).\] We consider below the following particular case \(G=Z\), when \(P_{j}\) are growing progressions \[P_{j}=\{j,2j,\ldots,L(j)j\},\quad L(j)\to\infty.\] **Examples of zero \(P\)-entropy automorphisms.** Let \(T\) be of zero entropy. The powers \(T^{n}\) also have zero entropy. So for any \(\xi\) and \(j\) there is \(L(j)\) such that \[h_{j}(T^{j},\xi)<\frac{1}{j}.\] We can choose such \(j_{k}\to\infty\) that for \(P=\{P_{j_{k}}:k\in\mathbf{N}\}\) and all finite partitions \(\xi\) we get \(\limsup_{k}\,h_{j_{k}}(T,\xi)=0.\), \(h_{P}(T)=0.\) **Lemma 3.2.**_Zero \(P\)-entropy is joinig-stable._ Lemma follows easily from the fact that \(\xi_{1}\vee\cdots\vee\xi_{n}\) is generating partition for \(F_{1}\vee\cdots\lor F_{n}\), if \(\xi_{m}\), \(1\leq m\leq n\), is a generating partition for the factor \(F_{m}\). Poisson suspensions with completely positive \(P\)-entropy We say that an action has completely positive \(P\)-entropy if any non-trivial factor of the action has positive \(P\)-entropy. From Theorem 3.1 and Lemma 3.2 we get the following assertion. **Theorem 4.1.**_Let \(T\) have completely positive \(P\)-entropy and let \(S\) have zero \(P\)-entropy. Then \(T\) and \(S\) are disjoint (\(S\)-factor and \(T\)-factor are always independent)._ To have examples of deterministic completely positive \(P\)-entropy systems we apply infinite transformations \(T\) of rank one and their Poisson suspension \(T_{\circ}\). 
**Rank one transformation.** Rank one constructions are determined by the parameters \(h_{1}=1\), \(r_{j}\geq 2\) and sets of integers \[\bar{s}_{j}=(s_{j}(1),s_{j}(2),\ldots,s_{j}(r_{j})),\ s_{j}(i)\geq 0,\ j\in{ \bf N}.\] The phase space \(X\) for such constructions is the union of all towers \(X_{j}\), where \[X_{j}=\bigsqcup_{i=0}^{h_{j}-1}T^{i}B_{j},\] \(T^{i}B_{j}\) are intervals called floors. At stage \(j\), the tower \(X_{j}\) is cut into \(r_{j}\) identical narrow subtowers \(X_{j,i}\) (they are called columns), and over each column \(X_{j,i}\) (of height \(h_{j}\)) we add \(s_{j}(i)\) new narrow floors. Then the column number \(i+1\) stack over the column number \(i\) to get one column of height \(h_{j+1}\), \[h_{j+1}=r_{j}h_{j}+\sum_{i=1}r_{j}s_{j}(i).\] This column is considered as the tower \(X_{j+1}\). This process of cutting, adding and stacking continues to infinity and we obtain an invertible transformation of \(X\) that preserves the Lebesgue measure of intervals. See, for example,[10],[11], where numerous applications including ones for Gaussian and Poisson suspensions have been indicated. **The Poisson measure.** Consider the configuration space \(X_{\circ}\), which consists of all infinite countable sets \(x_{\circ}\subset X\) such that each above interval from the spaces \(X\) contains only a finite number of elements of the set \(x_{\circ}\). The space \(X_{\circ}\) is equipped with the Poisson measure. We call its definition. For a subset \(A\subset X\) of a finite \(\mu\)-measure, we define configuration subsets \(C(A,k)\), \(k=0,1,2,\dots\), to \(X_{\circ}\) by the formula \[C(A,k)=\{x_{\circ}\in X_{\circ}\ :\ |x_{\circ}\cap A|=k\}.\] All possible finite intersections of the form \(\cap_{i=1}^{N}C(A_{i},k_{i})\) form a semiring. A Poisson measure \(\mu_{\circ}\) is given on this semiring. Provided that the measurable sets \(A_{1},A_{2},\dots,A_{N}\) do not intersect and have a finite measure, we set \[\mu_{\circ}(\bigcap_{i=1}^{N}C(A_{i},k_{i}))=\prod_{i=1}^{N}\frac{\mu(A_{i})^ {k_{i}}}{k_{i}!}e^{-\mu(A_{i})}.\] The meaning of this formula is as follows: if the sets \(A\), \(B\) do not intersect, then probability \(\mu_{\circ}(C(A,k))\cap C(B,m))\) of \(k\) points of configuration \(x_{\circ}\) in \(A\) simultaneously and \(m\) points of the configuration \(x_{\circ}\) in \(B\) is equal to the product of the probabilities \(\mu_{\circ}(C(A,k))\) and \(\mu_{\circ}(C(B,m))\). In other words, the events \(C(A,k)\) and \(C(B,m)\) are independent. Since the sets \(A_{1},A_{2},\dots,A_{N}\) do not intersect, so the product appears in the formula \((\circ)\). Any element of the semiring is a finite union semiring elements for which the Poisson measure is defined by \((\circ)\). The measure extends from the semiring to the Poisson configuration space \((X_{\circ},\mu_{\circ})\), isomorphic to the standard Lebesgue probability space. An automorphism \(T\) of the space \((X,\mu)\) naturally induces an automorphism \(T_{\circ}\) of the space \((X_{\circ},\mu_{\circ})\), this \(T_{\circ}\) is called Poisson suspension. **Examples of \(T_{\circ}\) with completely positive \(P\)-entropy.** Back to rank one transformations, let \(s_{j}(i)>L(j)h_{j}\), \(L(i)\to\infty\). Then \(\mu(X_{j})\to\infty\) and moreover for the correponding rank one construction \(T\) the sets \[X_{j},\ T^{h_{j}}X_{j},\ T^{2h_{j}}X_{j},\ \dots,\ T^{L(j)h_{j}}X_{j}\] do not intersect. The same is automatically true for all \(A\subset X_{j}\). 
Let \(C=C(A,k)\), where \(A\subset X_{j_{0}}\), for the Poisson suspension \(T_{\circ}\) we see that the sets \[C,\ T_{\circ}^{h_{j}}C,\ T_{\circ}^{2h_{j}}C\ \dots,\ T_{\circ}^{L(j)h_{j}}C\] are independent with respect to the Poisson measure. Standard reasoning shows that \(T_{\circ}\) has a completely positive \(P\)-entropy, where \[P=\{P_{j}\},\ \ P_{j}=\{h_{j},2h_{j},\dots,L(j)h_{j}\}.\] Note also that the Poisson suspesions over the rank one transformations have zero entropy [12]. Our theorems and examples have analogues for group actions, but more on that later.
2301.07312
Auctions without commitment in the auto-bidding world
Advertisers in online ad auctions are increasingly using auto-bidding mechanisms to bid into auctions instead of directly bidding their value manually. One prominent auto-bidding format is the target cost-per-acquisition (tCPA) which maximizes the volume of conversions subject to a return-of-investment constraint. From an auction theoretic perspective however, this trend seems to go against foundational results that postulate that for profit-maximizing bidders, it is optimal to use a classic bidding system like marginal CPA (mCPA) bidding rather than using strategies like tCPA. In this paper we rationalize the adoption of such seemingly sub-optimal bidding within the canonical quasi-linear framework. The crux of the argument lies in the notion of commitment. We consider a multi-stage game where first the auctioneer declares the auction rules; then bidders select either the tCPA or mCPA bidding format and then, if the auctioneer lacks commitment, it can revisit the rules of the auction (e.g., may readjust reserve prices depending on the observed bids). Our main result is that so long as a bidder believes that the auctioneer lacks commitment to follow the rule of the declared auction then the bidder will make a higher profit by choosing the tCPA format over the mCPA format. We then explore the commitment consequences for the auctioneer. In a simplified version of the model where there is only one bidder, we show that the tCPA subgame admits a credible equilibrium while the mCPA format does not. That is, when the bidder chooses the tCPA format the auctioneer can credibly implement the auction rules announced at the beginning of the game. We also show that, under some mild conditions, the auctioneer's revenue is larger when the bidder uses the tCPA format rather than mCPA. We further quantify the value for the auctioneer to be able to commit to the declared auction rules.
Aranyak Mehta, Andres Perlroth
2023-01-18T05:23:17Z
http://arxiv.org/abs/2301.07312v2
# Auctions without commitment in the auto-bidding world ###### Abstract Advertisers in online ad auctions are increasingly using auto-bidding mechanisms to bid into auctions instead of directly bidding their value manually. One of the prominent auto-bidding formats is that of target cost-per-acquisition (tCPA) which maximizes the volume of conversions subject to a return-of-investment constraint. From an auction theoretic perspective however, this trend seems to go against foundational results that postulate that for profit-maximizing (_aka_ quasi-linear) bidders, it is optimal to use a classic bidding system like marginal CPA (mCPA) bidding rather than using strategies like tCPA. In this paper we rationalize the adoption of such seemingly sub-optimal bidding within the canonical quasi-linear framework. The crux of the argument lies in the notion of _commitment_. We consider a multi-stage game where first the auctioneer declares the auction rules; then bidders select either the tCPA or mCPA bidding format (and submit bids accordingly); and then, if the auctioneer lacks commitment, it can revisit the rules of the auction (e.g., may readjust reserve prices depending on the bids submitted by the bidders). Our main result is that so long as a bidder believes that the auctioneer lacks commitment to follow the rule of the declared auction then the bidder will make a higher profit by choosing the tCPA format compared to the classic mCPA format. We then explore the commitment consequences for the auctioneer. In a simplified version of the model where there is only one bidder, we show that the tCPA subgame admits a _credible_ equilibrium while the mCPA format does not. That is, when the bidder chooses the tCPA format the auctioneer can credibly implement the auction rules announced at the beginning of the game. We also show that, under some mild conditions, the auctioneer's revenue is larger when the bidder to uses the tCPA format rather than mCPA. We further quantify the value for the auctioneer to be able to commit to the declared auction rules. **Keywords:** Auto-bidding, Auction Design, Mechanism Design, Credible mechanisms, Equilibrium, Economics ## 1 Introduction Over the past several years, advertisers have increasingly started to use auto-bidding mechanisms to bid in ad auctions instead of directly bidding their value manually (e.g., bidding per keyword in sponsored search). Among the prominent bidding strategies that have been adopted is the target cost per acquisition (tCPA) strategy (see, e.g., Facebook [2022], Google [2022]) where the goal is to maximize the volume of conversions _aka_ acquisitions, subject to an upper bound on cost per conversion1. Footnote 1: A related strategy, target return-on-ad-spend (tROAS), maximized conversion value subject to a bound on the cost per value. Our results for tCPA extend naturally to tROAS as well. From an auction theoretic perspective however, this trend seems to go against foundational results that postulate that for profit-maximizers (_aka_ quasi-linear) bidders, it is optimal to use a classic bidding system (e.g., marginal CPA bidding, henceforth mCPA) rather than using a tCPA. In other words, for an advertiser with a quasi-linear utility functional, it is optimal to bid the marginal value in a truthful auction, while with tCPA bidding, an advertiser does not allow for a direct control over the marginal cost. So a natural question to ask is whether there is an intrinsic value for profit-maximizing bidders in using tCPA format? 
Put simply, why do advertisers adopt tCPA bidding? One explanation given for the use of tCPA-like bidding formats is that in many cases bidders may not be intrinsically profit-maximizers which is the classic assumption in economics, but instead care about high-level goal metrics, such as value maximization under return-of-investment constraints (Agggarwal et al. (2019); Balseiro et al. (2021)). Another related explanation is that the cognitive cost to bid and verify the outcome using tCPA-like formats is low compared to other methods like fine-grained bidding or mCPA. In this paper, instead of reconsidering the profit-maximization framework, we show that even within this canonical framework we can explain the adoption of formats like tCPA bidding under a certain model and setting. _In other words, the paper rationalizes profit-maximizers bidders adopting tCPA mechanisms._ We show that so long as a bidder believes that the auctioneer lacks commitment to follow the rule of the declared auction (e.g., readjusts reserve prices after bidders submit their bids), then the bidder prefers to use the tCPA format over the classic mCPA format. **Model:** In our model (Sec. 2 for details), there is a set of queries (ad slots) to be sold, and there is a _single-shot_ game between the auctioneer (auctioneer) and the bidders (advertisers). The auctioneer declares that the auction is a per-query second-price auction with a declared reserve-price, and then the bidders choose a bidding format (whether tCPA or mCPA) and corresponding bids. After this, the auctioneer could then potentially deviate from the declared auction by readjusting the reserve prices, at potentially a per-query, personalized-per-bidder level. Finally, the outcomes are revealed.2 Footnote 2: We note that our model is oblivious to the particular auto-bidding algorithms that auto-bidder uses so long as the optimal outcomes are reached. Prominent in our formulation is the notion of _commitment_. In a model _with commitment_ we assume that there is some mechanism by which the bidders can trust that the auction declared by the auctioneer will indeed be the one that is implemented. This is the standard model and assumes that there is some form of auditing available. In contrast, in a model _without commitment_, there is no such mechanism or guarantee, and both the auctioneer and the bidders strategize their actions. **Results:** At a high level, our main result is that if a bidder believes the auctioneer does not have commitment, then it prefers to use the tCPA bidding format over the mCPA format. 3 Footnote 3: While our model is in a single-shot game, we note that even in the context of repeated interactions where a bidder can monitor the outcome of the auctions over time, our theory can still directionally explain why some bidders may still prefer a tCPA-format. For example, when a bidder is risk-averse about the auctioneer breaking commitment (e.g., bidder expects the auctioneer to slightly change the auction rules which cannot be observable for the bidder); or for a less-sophisticated bidder to whom is too costly to monitor their outcomes. We begin (in Sec. 3) with a basic setting which captures much of the intuition and results in an instructive manner. We consider a model in which there is only one buyer in the auction facing an exogenous price landscape for the queries, e.g., a per-query floor set by the publishers who own the corresponding ad slots. 
In this model, we are able to derive not only the main result that the bidder prefers tCPA bidding (Theorem 1), but also sharp results about the revenue implications for the auctioneer. We prove that there is a _credible_ equilibrium (as defined in Sec. 3) in which the auctioneer declares a reserve price of 0, and sticks to it, while the bidder chooses the tCPA format. Under a mild assumption, this equilibrium is also shown to be beneficial to the auctioneer in terms of revenue as well as efficiency (Theorem 2). We further study the case where the auctioneer has commitment to a reserve price, but the bidder mistakenly believes it doesn't. We show that, in this case, there is an instance where the revenue loss can be arbitrarily large (Prop. 9). We continue to the general model with multiple bidders in Sec. 4. While we are able to solve for the equilibrium in the model of Sec. 3, solving the tCPA equilibrium in the general model with multiple bidders is hard,4 and studying the auctioneer's revenue implications seems intractable. Nonetheless, we show that a profit-maximizing bidder who believes that the auctioneer does not have commitment prefers the tCPA format to mCPA format (Theorem 3). Our result heavily relies on the auctioneer's flexibility to readjust reserve prices at a per-query and bidder level reserve. For instance, when the set of queries is too large so readjusting at the query level is too expensive, the auctioneer is constrained to set reserve prices uniformly across queries. For this situation, we provide a companion result showing that under some technical assumption on the symmetry of the game (defined in Sec. 4), that again a bidder prefers the tCPA format over the mCPA format (Theorem 4). Footnote 4: Aggarwal et al. (2023) show that it is PPAD-hard to find an equilibrium when there are multiple tCPA bidders. **Intuition:** To understand the intuition of our main result, that a bidder prefers the tCPA format over the mCPA format, let us decompose the auctioneer's objective - to maximize revenue - into two elements: the volume of queries the auctioneer sells and the price at which the queries get sold. These two components translate into two economic forces that the auctioneer considers at the moment of readjusting the reserve prices: (i) to induce a high marginal bid from the bidder (so to increase the volume of queries sold) and (ii) to set a reserve-price on the queries as closed as possible to the bidder's marginal bid. Observe that force (i) is aligned with bidder's incentive while force (ii) goes in the opposite direction in regard to the bidder's utility. In the mCPA format, force (i) disappears as the marginal bid is chosen by the bidder. Thus, the auctioneer's incentive in the no-commitment game is simply to price at the marginal bid. In the tCPA format, as the marginal bid is not fixed but rather is chosen by the auctioneer (given the cost per acquisition constraint), force (i) does not disappear, and hence, it mitigates the second force effect on the bidder's utility. This makes the bidder's utility higher in the tCPA compared to the mCPA format5. Footnote 5: In the specific model of Sec. 3, we show that the former force completely dominates the latter, and the incentives of the auctioneer and the bidder are completely aligned. **Remark 1** (On interpreting the results).: _Our model is that of a stylistic single-shot interaction between the two parties. 
There are some marketplaces, including ad auctions, where there is a repeated interaction between the bidders and the auctioneer that alleviates the commitment problem that we study here. For example, with transparent reporting and avenues for experimentation, a bidder can monitor and verify or audit the auctioneer's actions and commitments over time even with the mCPA format. Furthermore, in practice, there exist certification processes and third-party audits such as Sarbanes-Oxley (SOX) compliance. Such audits are another method for bidders to have confidence that real-word auction systems are behaving as described. We do not consider the repeated setting in this paper, nor the considerations of audits as a commitment device._ ### Related Work With the importance of auto-bidding in the industry, the topic has become of increasing interest in the literature. In a series of papers, the problem of formulation of tCPA-like auto-bidding formats has been introduced and studied in various aspects. Aggarwal et al. (2019) introduced an optimization framework for auto-bidding, provided optimal bidding algorithms, and also studied the price of anarchy, i.e., the loss in efficiency due to tCPA-like bidding, which was further explored in subsequent work (Balseiro et al., 2021; Deng et al., 2022; Deng et al., 2021; Liaw et al., 2022; Mehta, 2022). There has also been recent work on understanding the optimal mechanism design in Bayesian settings Balseiro et al. (2021); Golrezaei et al. (2021) for ROI constrained bidders. The above papers consider the advertisers as having non-standard ROI utility functions in the main; Balseiro et al. (2021) explicitly distinguish between utility functions which are volume maximizing (ROI) versus profit maximizing. These works do not consider the question of how to rationalize the use of the tCPA format, but rather study additional consequences and auction design given an exogenous choice of the format. We can interpret our work as endogenizing the choice of format within the larger context between auction and bidding. We mention that in a sense tCPA auto-bidding generalizes the more well-known budget constrained or financially-constrained model. This has been a very well studied topic with several lines of work studying this model, both from a fundamental perspective, and in the context of ad auctions, e.g., Balseiro and Gur (2019); Che and Gale (2000); Feldman et al. (2007); Fikioris and Tardos (2022); Gaitonde et al. (2023); Laffont and Robert (1996) among many others. From an auction design perspective, our paper relates to three streams of the literature. The closest papers to our work consider a mechanism design problem with imperfect commitment where the auctioneer (designer) can readjust the rules after observing the agent's report. Bester and Strausz (2000, 2001). Our paper differs from those previous results in that we study a multi-item problem (multiple queries) and focus on two main auction rules rather than a general mechanism design approach. Related to the first stream, Akbarpour and Li (2020); Daskalakis et al. (2020); Ferreira and Weinberg (2020) study the auction rules that are credible. That is, from the auctioneer's perspective, it is optimal to follow the rules of the declared auction. We contribute to this field by showing that, under some conditions, the tCPA-mechanism is a credible mechanism for the multi-item cases (see Prop. 6).6 Footnote 6: Daskalakis et al. (2020) also construct a credible auction mechanism with good revenue guarantees. 
The third stream of papers study dynamic auctions where the auctioneer cannot commit to the auction that she will choose in future periods Fudenberg and Tirole (1983); Gul et al. (1986); Liu et al. (2019); Skreta (2006, 2015). These papers show that without commitment the auctioneer cannot obtain a revenue larger than the revenue from the efficient auction. While our work does not consider repeated auctions, our results also show that, under some circumstances, the lack of commitment limits the auctioneer revenue to the efficient outcome (see Prop. 6). Model Our model considers a platform (henceforth, the auctioneer) using a second price auction to sell \(x\in X\) queries, where \(X\) belongs to a measurable space \((X,\mathcal{A},\mu)\). Because our goal is to study the buyers' behavior under different auto-bidding mechanisms, we fix one of the buyers, henceforth the bidder, and assume that she has a private valuation \(v\sim F\) for each query. From the bidder's perspective, the cost per conversion on query \(x\), \(p(x)\), has two components: the _intrinsic_ value of the query, which is given by \(p_{0}(x)\) and the reserve price \(r\) chosen by the auctioneer.7 This intrinsic price function is quite general to include the case where the pricing comes from other buyers participating in the auction as well as pricing constraints set by the publisher (see Sec. 4 for more details).8 Therefore, the bidder's payoff when she buys the queries with prices lower than \(p\) is Footnote 7: We assume that the intrinsic value \(p_{0}:X\to\mathbb{R}_{+}\) is measurable function. Footnote 8: Our main result also includes the case where \(p_{0}(x)\) may be unknown to the bidder. \[u(p|v)=\begin{cases}0&\text{if }p<r\\ (v-r)H(r)+\int_{r}^{p}(v-z)dH(z)&\text{if }p\geq r\end{cases},\] where \(H(p)=\mu(\{p_{0}(x)\leq p\})\) is the volume of queries with intrinsic price less than \(p\). We impose the following regularity condition on \(H\). **Assumption 1**.: _The price distribution \(H\) is an integrable function (i.e., \(\int_{0}^{\infty}H(z)dz<\infty\)) and is continuously differentiable with \(h(p)=H^{\prime}(p)>0\) (positive density)._ ### Auto-bidding mechanisms In order to bid for the queries \(X\), the bidder has to choose among the different auto-bidding mechanisms the auctioneer offers to her. We consider two representative and commonly-used mechanisms. **Marginal CPA**: The bidder submits a bid \(b\in\mathbb{R}_{+}\) and the auto-bidding system bids \(b\), on her behalf, on each of the queries \(x\in X\). Thus, when the bid \(b>\max\{r,p_{0}(x)\}\) the bidder receives query \(x\) for a price \(\max\{r,p_{0}(x)\}\). We conclude that the bidder's payoff is \[u(b|v;\text{mCPA})=\begin{cases}0&\text{if }b<r\\ (v-r)H(r)+\int_{r}^{b}(v-z)h(z)dz&\text{if }b\geq r\end{cases}.\] **TCPA:** The bidder submits a target cost per acquisition \(T\geq 0\), and the auto-bidding system bids \(b(T)\) in each of the queries \(x\in X\) so that it maximizes the volume of queries subject to the average cost being no more than \(T\). Thus, for \(T>r\) we have that \[b(T;r)\in\arg\max\Big{\{}H(b(T;r))|\text{ s.t. }rH(r)+ \tag{1}\] \[\int_{r}^{b(T;r)}zh(z)dz\leq T\cdot H(b(T;r))\Big{\}}.\] The following lemma, whose proof is deferred to the appendix, shows that the above problem has a unique solution and the tCPA constraint is always binding. **Lemma 1**.: _There is a unique solution \(b(T;r)\) to Problem (1) which is the unique solution to equation_ \[H(b)(b-T)=\int_{r}^{b}H(z)dz. 
\tag{2}\] _Furthermore, for \(T\geq r\), \(b(T;r)\) is greater or equal than \(T\), increasing in \(T\) and decreasing in \(r\)._ From Lemma 1, we observe that given a target \(T\geq r\), the bidder pays \(TH(b(T;r))\) for the \(H(b(T;r))\) queries she receives. We conclude that the bidder's payoff is \[u(T|v;\text{tCPA})=\begin{cases}0&\text{if }T<r\\ (v-T)H(b(T;r))&\text{if }T\geq r\end{cases}.\] ### Credibility and commitment To model the auctioneer's credibility we consider the following four-stage game. 1. _Announcement:_ the auctioneer announces a reserve price which applies for all queries \(x\in X\). 2. _Bidding:_ the bidder chooses an auto-bidding mechanism and submits the bid to the auto-bidder, either marginal or target, accordingly. 3. _Credibility:_ the auctioneer potentially readjusts the reserve price, at a per-query and advertiser level.9,10 Footnote 9: We restrict the auctioneer to use _reasonable_ mechanisms where it can only modify reserve prices or bids (from competitors), similar to the model studied in Akbarpour and Li (2020). 4. _Auction is realized:_ The auto-bidding system makes the per-query bids, and the final allocations and respective payments accrue. The third stage of the game is the key element for our analysis. We say that the auctioneer is credible when reserve prices do not change at S3. This could either be because the auctioneer has _commitment_, which means that the auctioneer commits not to change reserve prices (i.e. the game does not have S3); or because it has _endogenous credibility_, which means that along the equilibrium path it is optimal for the auctioneer not to change reserve prices. To distinguish between these two reasons, we denominate as the **commitment game** the game without S3 and the **no commitment game** as the game including S3. The solution concept used in this paper is perfect Bayesian equilibrium.11 Footnote 10: For the one bidder model of Sec. 3, it suffices to restrict to uniform readjustment of the reserve across all queries. Footnote 11: Observe that for the auctioneer, once the bidder submits the bid, any belief about the bidder’s type is irrelevant. ## 3 One bidder model The purpose of this section is to distill, in an instructive manner, the main insight of our work by studying the simplest case: the bidder is the only buyer interested in the queries.12 The tractability of this model, which contrasts with the general model, also allows us to study the revenue consequences of lack of commitment on the auctioneer. Footnote 12: Thus, the intrinsic price \(p_{0}(x)\) does not come from other bids, but instead it corresponds to particular constraints that publishers (owners of the queries) set on their queries. In this model, the auctioneer's revenue only comes from the queries sold to the bidder.13 Therefore, given a bid of the bidder in either bidding format, the auctioneer's revenue is Footnote 13: More generally, the auctioneer’s revenue is a share of the transaction (the other fraction goes to the respective publisher). Our results easily extend to this general setting. \[\pi(r|b;\text{mCPA}) =\begin{cases}0&\text{if }b<r\\ rH(r)+\int_{r}^{b}zh(z)dz&\text{if }b\geq r\end{cases},\] \[\pi(r|T;\text{tCPA}) =\begin{cases}0&\text{if }T<r\\ TH(b(T;r))&\text{if }T\geq r\end{cases}.\] ### Commitment Game We first study the model where the auctioneer commits not to change the reserve prices after observing the bid from the bidder. 
In this situation, the auctioneer's problem resembles the classic optimal auction studied in Myerson (1981). Following the expository spirit of this section, we further simplify the analysis by imposing a standard regularity condition on the distribution \(F\). **Assumption 2**.: _The virtual valuation \(\phi_{F}(v)=v-\frac{1-F(v)}{f(v)}\) is increasing._ **Proposition 1**.: _For the commitment game, the revenue-maximizing mechanism is \((\chi^{*},\tau^{*})\)_ \[\chi^{*}(v)=\begin{cases}0&\text{if }v<r_{\text{\tiny MYE}}\\ v&\text{if }v\geq r_{\text{\tiny MYE}}\end{cases}\qquad\tau^{*}(v)=r_{\text{ \tiny MYE}}H(r_{\text{\tiny MYE}})+\int_{r_{\text{\tiny MYE}}}^{v}zh(z)dz\] _where \(r_{\text{\tiny MYE}}=\phi_{F}^{-1}(0)\). Here, the allocation function \(\chi(v)\) and transfer function \(\tau(v)\) mean that the bidder receives queries \(\{x:p_{0}(x)\leq\chi(v)\}\) for a payment of \(\tau(v)\)._ Proposition 1 characterizes the revenue-maximizing policy among all feasible auction and bidding formats (in particular, including the mCPA and tCPA formats). We defer the proof to Appendix B. The next proposition shows that the auctioneer can implement this optimal mechanism in both formats, and hence, making them equivalent. **Proposition 2**.: _In the commitment model, the mCPA and tCPA mechanisms are equivalent along the equilibrium path. More precisely, in any equilibrium, the auctioneer sets a reserve price \(r_{\textsc{mYE}}\); the bidder either bids \(b=v\) on the mCPA mechanism or \(T\) in tCPA mechanism such that \(b(T;r_{\textsc{mYE}})=v\). In particular, we obtain that in the commitment game:_ 1. _the auctioneer's revenue:_ \[\pi_{C}^{*}=\mathbb{E}_{v}[\mathbf{1}_{\{v\geq r_{\textsc{mYE}}\}}\left(vH(v) -\int_{r_{\textsc{mYE}}}^{v}H(z)\right)dz],\] 2. _the bidder's utility:_ \(u_{C}^{*}=\mathbb{E}_{v}[\mathbf{1}_{\{v\geq r_{\textsc{mYE}}\}}\int_{r_{ \textsc{mYE}}}^{v}H(z)dz]\)_,_ 3. _the welfare:_ \(W_{C}=\mathbb{E}_{v}[\mathbf{1}_{\{v\geq r_{\textsc{mYE}}\}}vH(v)]\)_._ Proof.: First, notice that since the auctioneer does not change the reserve price in the commitment game, the optimal bidding strategy for the bidder is to submit a marginal bid equal to her value. She can do this either directly using the mCPA-format or indirectly, in the tCPA format by submitting a target \(T\) that induces the same marginal bid. Thus, from the bidder's perspective, the mCPA format and the tCPA format are equivalent. Hence, from the auctioneer's perspective he can implement the optimal mechanism of Proposition 1 by announcing a reserve price of \(r_{\textsc{mYE}}\) at S1. From Proposition 1 we have that the bidder pays \(T^{*}(v)=\mathbf{1}_{\{v\geq r_{\textsc{mYE}}\}}\left(vH(v)-\int_{r_{\textsc{ mYE}}}^{v}H(z)\right)\). The remaining claims of the proposition are direct consequence of this characterization. ### No-commitment game This section shows that when the auctioneer lacks commitment, the two auto-bidding mechanisms are no longer equivalent. We show that if the bidder chooses the mCPA format, the final auction turns to be equivalent to a first-price auction (FPA). By contrast, when the bidder opts for the tCPA format, the final auction turns to be equivalent to a second-price auction without reserve (SPA). **The mCPA Subgame** We first characterize the set of equilibria for the subgame where the bidder chooses the mCPA format. We present the following straightforward lemma and a direct consequence of the lemma in Proposition 3. 
**Lemma 2**.: _Consider the subgame where the bidder chooses the mCPA format and bids \(b\). Then, if the reserve price \(r\) announced at S1 is such that \(r\neq b\), then the optimal decision for the auctioneer is to readjust the reserve price to \(r=b\)._ **Proposition 3** (mCPA equivalent to FPA).: _The auction where the bidder chooses the mCPA format is equivalent to a FPA. Thus, for each valuation type \(v\) the bidder submits a bid \(b^{*}(v)\) solving_ \[\max_{b}(v-b)H(b). \tag{3}\] The previous result describes the negative effect of the lack of commitment from the auctioneer. Under the mCPA format, the auctioneer learns how much the bidder is willing to pay for each query; therefore, the auctioneer's sequential rationality pushes him to charge such value to the bidder. Thus, the auctioneer cannot credibly commit to keeping any reserve price announced at S0. Anticipating this effect, the bidder shades the bid and by consequence, she gets fewer queries allocated to her compared to the commitment case. We formalize this discussion in the following proposition. **Definition 1**.: _We say that an equilibrium is credible if the auctioneer does not change the reserve price after observing the bid.14_ Footnote 14: Our definition is similar to the self-enforcement agreement theory studied in Aumann [1990]. **Proposition 4**.: _There does not exist a credible equilibrium such that the bidder chooses the mCPA format. Furthermore, all equilibria are payoff equivalent inducing expected payoffs \(\pi^{*}(\text{mCPA})=\mathbb{E}_{v}[b^{*}(v)H(b^{*}(v))]\), \(u^{*}(\text{mCPA})=\mathbb{E}_{v}[(v-b^{*}(v))H(b^{*}(v))]\) and expected welfare \(W_{\text{mCPA}}=\mathbb{E}_{v}[vH(b^{*}(v))]\), where \(b^{*}\) is the solution to Problem (3)._ Proof.: From Proposition 2 we have that in any equilibrium, the bidder with valuation-type \(v\) bids \(b^{*}(v)\) and the auctioneer sets a final reserve price so that \(r=b^{*}(v)\). From the envelope theorem we see from Problem (3) that \({b^{*}}^{\prime}(v)=H(b^{*}(v))\) which implies that \(b^{*}(v)\) is increasing on \(v\). This implies that the auctioneer's initial reserve price \(r\) has to be different from at least one of \(b^{*}(v)\) and \(b^{*}(v^{\prime})\) when \(v\neq v^{\prime}\). This implies that in, any equilibrium, the auctioneer readjusts the reserve price for at least one such bid. We conclude that the there is not a credible equilibrium in the mCPA subgame. The payoffs described in the proposition are a consequence of that, in any equilibrium, the bidder bids \(b^{*}(v)\) gets all queries that have intrinsic prices less than \(b^{*}(v)\) for a price \(b^{*}(v)\). #### The tCPA Subgame Similar to the previous analysis, we characterize the set of equilibria for the subgame where the bidder chooses the tCPA format. **Lemma 3**.: _Consider the subgame where the bidder chooses the tCPA format and bids \(T\). Then, if the reserve price \(r\) announced at \(S1\) is such that \(r>0\), then the optimal decision for the auctioneer is to readjust the reserve price to \(r=0\)._ Proof.: Clearly, if the auctioneer would a set a reserve price \(r\leq T\) otherwise he gets zero profits. For reserve price \(r\leq T\), observe that the auctioneer's revenue is \(TH(b(T;r))\). From Lemma 1 we have that \(b(T;r)\) is decreasing in \(r\), and hence, since \(H\) is increasing the optimal reserve price is \(r=0\). The intuition behind this lemma is that the maximum price to pay is not predetermined as in mCPA case. 
Instead it is chosen by the auctioneer to meet the tCPA constraint. Because the auctioneer's revenue is proportional to the volume of queries allocated to the bidder, to maximize such volume, it is optimal to set reserve price \(r=0\). Therefore, the bidder by letting the auctioneer bid on her behalf, makes the auctioneer internalize the negative effect of rent extraction via a reserve, since the latter leads to the auto-bidder decreasing the final marginal bid \(b(T;r)\). **Proposition 5** (tCPA equivalent to SPA).: _The auction where the bidder chooses the tCPA format is equivalent to a SPA. For each valuation type \(v\) the bidder submits a target \(T^{*}(v)\) such that \(b(T^{*}(v);0)=v\). That is, \(T^{*}(v)\) solves_ \[H(v)(v-T^{*}(v))=\int_{0}^{v}H(z)dz. \tag{4}\] Proof.: Consider any equilibrium of the game. Because in stage S3, the best response of the auctioneer is to set a reserve price \(r=0\) (Lemma 3), the bidder's optimal response at S2 consists in submitting a target so that the marginal bid equals to her valuation \(v\) for a price landscape without reserve price. Hence, she submits a target \(T^{*}\) as described in Equation (4). This proposition starkly contrasts with Proposition 4. In the tCPA format, the bidder believes that the auctioneer will reduce the reserve price while, in the mCPA case, the bidder believes that the auctioneer will increase the reserve price to its bid. This is because in the tCPA format, once the bidder bids the target \(T\), the auctioneer's incentives are fully aligned with the bidder's incentive: the auctioneer only cares about maximizing the volume of sold queries. A second difference with mCPA format is that, in this case, the auctioneer sets a reserve price independently of the bidder's target. In particular, if the auctioneer announces at S0 a reserve price \(r=0\), he can credibly commit to that price. The following proposition summarizes these findings. **Proposition 6**.: _A credible equilibrium exists when the bidder chooses the tCPA format. The seller sets a initial reserve price \(r=0\), the bidder bids \(T^{*}\) (the solution to Equation (4)). Moreover, every equilibrium is payoff equivalent inducing expected payoffs \(\pi^{*}(tCPA)=\mathbb{E}_{v}[T^{*}(v)H(v)]\), \(u^{*}(tCPA)=\mathbb{E}_{v}[(v-T^{*}(v))H(v)]\) and expected welfare \(W_{tCPA}=\mathbb{E}_{v}[vH(v)]\)._ Proof.: The following is a credible equilibrium of the game: the auctioneer sets a reserve price \(r=0\) at S1, the bidder chooses the tCPA format and submits the target \(T^{*}\) which solves Equation 4 at stage S2, and the auctioneer keeps the reserve price \(r=0\) at stage S3. Furthermore, the subgame after S1 is the same for any initial reserve price that auctioneer announces at S1 (see Proposition 5). Therefore, all equilibria are payoff equivalent. And the expected payoff are an immediate consequence from the fact for every type \(v\), the target is \(T^{*}(v)\) is binding in Problem (1) and hence she gets queries \(\{x:p_{0}(x)\leq v\}\) for price \(T^{*}(v)H(v)\) An important consequence of the above results is that without commitment, the auctioneer allocates the queries efficiently (i.e., maximizes welfare).15 Footnote 15: This result is reminiscent of the Coasean literature, which, in a different context (durable good monopolist), studies conditions in which a monopolist does not exercise his monopolistic power and allocates efficiently (see Bulow (1982)). 
**Corollary 1**.: _In any equilibrium of tCPA subgame, the auctioneer efficiently allocates the queries._ ### Utility, Welfare, and Revenue implications in the No-commitment game We start this section by showing our main result for the specific setting of Section 3: when the auctioneer lacks commitment, it is optimal for the bidder to bid according to the tCPA format. Thus, our result provides a rational explanation as to why quasilinear bidders opt for tCPA auto-bidding mechanisms. **Theorem 1**.: _In any equilibrium, for every type-valuation \(v\), the bidder strictly prefer to choose the tCPA format over the mCPA format. Thus, \(\pi^{\star}_{NC}=\pi^{\star}(tCPA)\), \(u^{\star}_{NC}=u^{\star}(tCPA)\) and \(W_{NC}=\mathbb{E}_{v}[vH(v)]\)._ Proof.: Consider \(b^{\star}(v)\) defined in Equation (3). Then, \[u^{\star}(\text{mCPA}|v) =(v-b^{\star}(v))H(b^{\star}(v))\] \[=(v-b^{\star}(v))H(b^{\star}(b^{\star}(v);b^{\star}(v)))\] \[<(v-b^{\star}(v))H(b(b^{\star}(v);0))\] \[=u(b^{\star}(v)|v;\text{tCPA})\] \[\leq u^{\star}(\text{tCPA}|v).\] The first equality is by definition of \(b^{\star}\) (Proposition 4). The second equality holds because \(H\) is increasing and, hence, with tCPA constraint of \(b^{\star}(v)\) and reserve price \(b^{\star}(v)\) the optimal bid is to buy all queries with price less or equal than \(b^{\star}(v)\) (i.e. \(H(b^{\star}(v))\)). The first inequality holds due to Lemma 1. The next equality is by definition of bidding a target \(T=b^{\star}(v)\). The last inequality is by definition of \(u^{\star}(\text{tCPA})\), the bidder's payoff using the tCPA format with the optimal target. Regarding the welfare implications, Corollary 1 shows that in the tCPA format, the final allocation is is welfare-optimal. On the other hand, because in the mCPA case the bidder shades her bid (i.e., \(b^{\star}(v)<v\)), we have that the allocation is inefficient. We conclude that, from a welfare perspective, the tCPA format is preferable compared to the mCPA format. **Revenue implications** Our previous results show that from the bidder's (and also welfare) perspective, when the auctioneer does not commit, the bidder prefers to use a tCPA mechanism over the mCPA mechanism. We now tackle the question from the auctioneer's angle. Does having the tCPA format cause a loss in revenue for the auctioneer? The following result shows that, under some reasonable assumption on \(H\), the auctioneer itself benefits from offering a tCPA format to the bidder. **Assumption 3**.: \(H\) _satisfies that \(vh(v)\) is non-decreasing._ Assumption 3 implies that the marginal revenue to sell the queries at price \(p\), \(ph(p)\), is non-decreasing on \(p\). This assumption is quite natural and holds in common settings, for instance, when \(H\) is convex, in which case \(h\) is non-decreasing. **Theorem 2** (Revenue Comparison).: _Suppose that Assumption 3 holds. Then for every valuation-type \(v\) we have that \(\pi^{\star}(tCPA|v)>\pi^{\star}(mCPA|v)\). Furthermore, for every \(\gamma>0\), we can find an instance \(\langle F,H\rangle\) such that \(\pi^{\star}(tCPA)>\gamma\cdot\pi^{\star}(mCPA)\)._ Proof.: Consider the auxiliary function \(w(v):=\int_{0}^{v}zh(z)dz\). Using integration by parts we have that \(w(v)=vH(v)-\int_{0}^{v}H(z)dz\), and using the defintion of \(T^{\star}(v)\) in Equation (4), we get that \(w(v)=T^{\star}(v)H(v)\). Assumption 3 implies that \(w\) is a convex function. 
**Revenue implications** Our previous results show that from the bidder's (and also welfare) perspective, when the auctioneer does not commit, the bidder prefers to use a tCPA mechanism over the mCPA mechanism. We now tackle the question from the auctioneer's angle. Does having the tCPA format cause a loss in revenue for the auctioneer? The following result shows that, under a reasonable assumption on \(H\), the auctioneer itself benefits from offering a tCPA format to the bidder. **Assumption 3**.: \(H\) _satisfies that \(vh(v)\) is non-decreasing._ Assumption 3 implies that the marginal revenue of selling the queries at price \(p\), \(ph(p)\), is non-decreasing in \(p\). This assumption is quite natural and holds in common settings, for instance, when \(H\) is convex, in which case \(h\) is non-decreasing. **Theorem 2** (Revenue Comparison).: _Suppose that Assumption 3 holds. Then for every valuation type \(v\) we have that \(\pi^{\star}(tCPA|v)>\pi^{\star}(mCPA|v)\). Furthermore, for every \(\gamma>0\), we can find an instance \(\langle F,H\rangle\) such that \(\pi^{\star}(tCPA)>\gamma\cdot\pi^{\star}(mCPA)\)._ Proof.: Consider the auxiliary function \(w(v):=\int_{0}^{v}zh(z)dz\). Using integration by parts we have that \(w(v)=vH(v)-\int_{0}^{v}H(z)dz\), and using the definition of \(T^{*}(v)\) in Equation (4), we get that \(w(v)=T^{*}(v)H(v)\). Assumption 3 implies that \(w\) is a convex function. Thus, for every \(v,v^{\prime}\), we have that \[w(v)\geq w(v^{\prime})+\underbrace{w^{\prime}(v^{\prime})}_{v^{\prime}h(v^{\prime})}(v-v^{\prime}). \tag{5}\] Take \(v^{\prime}=b^{*}(v)\), where \(b^{*}(v)\) is the optimal bid in the mCPA format. Taking the first-order conditions on Problem (3) (the solution has to be interior in this problem), we have \(h(b^{*}(v))(v-b^{*}(v))=H(b^{*}(v))\). Therefore, replacing \(v^{\prime}\) in Equation (5) we obtain that \[T^{*}(v)H(v)\geq w(b^{*}(v))+b^{*}(v)H(b^{*}(v)).\] Because \(h>0\), we have that \(w>0\). Therefore, we obtain that \(T^{*}(v)H(v)>b^{*}(v)H(b^{*}(v))\), or equivalently, \(\pi^{*}(\text{tCPA}|v)>\pi^{*}(\text{mCPA}|v)\). To finish the proof, consider \(F(v)=v\) with support in \([0,1]\) (the uniform distribution) and \(H_{n}(p)=p^{n}\) for \(p\in[0,1]\). Simple computations show that for every \(v\in[0,1]\), \(b^{*}(v)=\frac{v}{n+1}\), and hence, \[\pi^{*}(\text{mCPA})=\int_{0}^{1}\left(\frac{v}{n+1}\right)^{n}dv\leq\frac{1}{(n+1)^{n}}.\] On the other hand, the solution to Equation (4) is \(T^{*}(v)=\frac{n}{n+1}v\). This implies that \[\pi^{*}(\text{tCPA})=\int_{0}^{1}\frac{n}{n+1}v\cdot v^{n}dv=\frac{n}{(n+1)(n+2)}.\] Thus, \[\frac{\pi^{*}(\text{tCPA})}{\pi^{*}(\text{mCPA})}\geq\frac{\frac{n}{(n+1)(n+2)}}{\frac{1}{(n+1)^{n}}}=\frac{n(n+1)^{n-1}}{n+2}.\] We conclude by taking \(n\) large enough such that \(\frac{n(n+1)^{n-1}}{n+2}>\gamma\). Theorem 2 shows that the revenue of the SPA (the tCPA format) is not equivalent to the revenue obtained in the FPA (the mCPA format). At first glance, this result seems to contradict the well-known Revenue Equivalence Theorem (RET) between these auctions (see Chapter 1.3 of Krishna (2010) for a textbook treatment). While the result is true when the auctioneer only owns one query (in both auctions the revenue is simply \(p_{0}(x)\)), having uniform bidding among heterogeneous queries constrains the bidder to shade her bid so that she receives a suboptimal fraction of queries. Thus, the fraction of queries allocated in the SPA is larger than in the FPA, violating the main condition for the RET.16 Footnote 16: If the bidder had knowledge of the pricing \(p_{0}(x)\) and could bid independently in each of the queries, the FPA would allocate exactly the same as the SPA, and therefore, RET would apply in such a case. Even though the condition imposed in Assumption 3 covers a wide range of cases, the following instance, which does not satisfy Assumption 3, provides an example where the auctioneer prefers not to offer the tCPA mechanism to the bidder. **Proposition 7**.: _Suppose that the valuation types have support on \([1,\infty)\) and consider the instance_ \[\hat{H}(p)=\begin{cases}0&\text{if }p<1\\ 1-\frac{1}{p}&\text{if }p\geq 1\end{cases}.\] _Then, \(\pi^{*}(\text{tCPA}|v)=\log(v)\) and \(\pi^{*}(\text{mCPA}|v)=\sqrt{v}-1\), which implies that \(\pi^{*}(\text{mCPA}|v)>\pi^{*}(\text{tCPA}|v)\) for \(v\) large enough. In particular, for every \(\gamma>0\) we can find a distribution \(F\) so that the instance \(\langle F,\hat{H}\rangle\) is such that \(\pi^{*}(\text{mCPA})>\gamma\cdot\pi^{*}(\text{tCPA})\)._ 
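The per-type comparison \(\pi^{*}(\text{tCPA}|v)>\pi^{*}(\text{mCPA}|v)\) of Theorem 2 is easy to check numerically. The sketch below (illustrative only) evaluates \(\pi^{*}(\text{tCPA}|v)=T^{*}(v)H(v)=vH(v)-\int_{0}^{v}H(z)dz\) and \(\pi^{*}(\text{mCPA}|v)=b^{*}(v)H(b^{*}(v))\), with \(b^{*}(v)\) obtained by direct grid maximization of \((v-b)H(b)\), and also evaluates the two closed forms of Proposition 7 at a large valuation.

```python
import numpy as np

def per_type_revenues(v, H, grid=200_001):
    """Return (pi_tCPA(v), pi_mCPA(v)) for an increasing price landscape H."""
    zs = np.linspace(0.0, v, grid)
    dz = zs[1] - zs[0]
    int_H = np.sum(0.5 * (H(zs[:-1]) + H(zs[1:])) * dz)   # int_0^v H(z) dz
    pi_tcpa = v * H(v) - int_H                             # equals T*(v) H(v) by Equation (4)
    b = zs[1:]                                             # candidate bids in (0, v]
    b_star = b[np.argmax((v - b) * H(b))]                  # optimal shaded bid of the mCPA format
    pi_mcpa = b_star * H(b_star)
    return pi_tcpa, pi_mcpa

H_powerlaw = lambda p: p ** 3                              # satisfies Assumption 3
for v in (0.25, 0.5, 0.9):
    t, m = per_type_revenues(v, H_powerlaw)
    print(f"v={v:.2f}  pi_tCPA={t:.5f}  pi_mCPA={m:.5f}")  # tCPA revenue is larger, as Theorem 2 states

# Proposition 7's instance, for which Assumption 3 fails: pi_tCPA(v) = log v, pi_mCPA(v) = sqrt(v) - 1.
v_large = 100.0
print(np.log(v_large), np.sqrt(v_large) - 1.0)             # here the mCPA revenue dominates
```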
### The value of commitment We finish this section by measuring the value to the auctioneer of having commitment in the game. First, we measure the auctioneer's revenue loss when he does not have a commitment mechanism to keep the announced reserve prices, compared to the commitment benchmark. The second measure studies the auctioneer's revenue loss when, even though he can commit not to change the reserve prices, the bidder has a mistaken belief that the auctioneer would actually readjust the reserve prices.17 All proofs of this section are delegated to Appendix B. **Proposition 8** (The value of commitment).: _Assume that the distribution \(F\) has support in \([\underline{v},\overline{v}]\). Denote by \(\psi=\pi^{*}_{\underline{v}}/\pi^{*}_{\text{Mye}}\) the relative revenue of selling one item with a SPA with reserve price \(\underline{v}\) compared to selling it with the revenue-optimal auction. Then for every instance \(\langle F,H\rangle\) we have that \(\pi^{*}_{\text{NC}}\geq\psi\cdot\pi^{*}_{C}\). Moreover, the bound is tight: an instance \(\langle F,H\rangle\) exists such that \(\pi^{*}_{\text{NC}}=\psi\cdot\pi^{*}_{C}\)._ Proposition 8 shows that the more homogeneous the impressions are, the higher the auctioneer's loss of revenue when he does not have commitment. **Proposition 9** (The value of showing commitment).: _We denote by \(\pi^{*}_{WB}\) the auctioneer's revenue when the bidder bids the non-commitment optimal target \(T^{*}\) (see Equation (4)) but the auctioneer keeps the reserve price at \(r=r_{\text{Mye}}\). That is, \(\pi^{*}_{WB}=\pi^{*}(r_{\text{Mye}}|T^{*};tCPA)\). Then, for every \(\gamma>0\) an instance \(\langle F,H\rangle\) exists such that \(\pi^{*}_{C}>\gamma\cdot\pi^{*}_{WB}\)._ The last proposition shows the cost of having a misguided bidder in the auction. It further shows that if the auctioneer believes that the bidder does not trust the auctioneer's commitment, then the rational choice for the auctioneer is to declare a reserve price of \(0\) instead of \(r_{\text{Mye}}\). The following remark re-emphasizes this point. **Remark 2**.: _Our result (including the general version of Section 4) that a bidder prefers the tCPA format only depends on the bidder's belief about whether the auctioneer can commit to the auction rules he declares. Therefore, even a committed auctioneer may benefit by offering tCPA-like mechanisms, as it increases the value for bidders who are skeptical about the auctioneer's commitment._ ## 4 The general model This section generalizes Theorem 1 to a setting where _the bidder_ faces competition from other buyers (we call them extra-buyers) that are interested in the queries \(X\). We start the section by unfolding the intrinsic price \(p_{0}\) when we have multiple buyers. In particular, we allow the intrinsic price to be a random variable whose uncertainty comes from the extra-buyers' bidding strategies (which may be private), the different conversion probabilities, and the publishers' pricing constraints, which may be unknown to the bidder. ### The intrinsic price with multiple buyers We consider \(n\) extra-buyers (aside from our original bidder) participating in the auction. Each extra-buyer \(i\) has a private valuation \(v_{i}\) per conversion on the query, strategically chooses an auto-bidding format, and submits a bid according to that format. We denote by \(\boldsymbol{\sigma}(\boldsymbol{v})=(\sigma_{1}(v_{1}),\ldots,\sigma_{n}(v_{n}))\) the extra-buyers' strategies. 
Let \(\boldsymbol{b}=(b_{1},\ldots,b_{n})\) be the final marginal bids of the extra-buyers; \(\boldsymbol{q}(x)=(q_{0}(x),q_{j}(x))_{j=1}^{n}\) the probability of conversion on query \(x\in X\) for the bidder and the buyers (respectively); and \(p_{B}(x)\) the pricing constraint set by the publisher owning query \(x\).18 Then, in a second-price auction the realized intrinsic price the bidder faces for a conversion on query \(x\in X\) is Footnote 18: The functions \(\boldsymbol{q},p_{B}\) are assumed to be integrable functions. \[p_{0}(x|\boldsymbol{b},\boldsymbol{q},p_{B})=\max\left\{\frac{\max_{j=1, \ldots,n}b_{j}\cdot q_{j}(x)}{q_{0}(x)},p_{B}(x)\right\}.\] Hence, the realized price distribution is \[H(p|\boldsymbol{b},\boldsymbol{q},p_{B})=\int_{\{x:\,p_{0}(x|\boldsymbol{b}, \boldsymbol{q},p_{B})\leq p\}}q_{0}(x)d\mu(x).\] From the bidder's perspective, we consider the following information structure. The bidder believes that extra-buyers valuations \(\boldsymbol{v}=(v_{i})_{i=1}^{n}\) are drawn independently, with distribution \(F_{i}\) with support \([0,\overline{v}_{i}]\) for \(i=1,\ldots,n\).19 The bidder also assesses that the conversion probabilities \(\boldsymbol{q}\) and the pricing constraint \(p_{B}\) are drawn according to \((\boldsymbol{q},p_{B})\sim G|_{v}\). We do not necessarily impose that extra-buyers are playing equilibrium strategies but instead impose that they are individually rational: an extra-buyer never bids above her valuation. **Assumption 4**.: _For every Extra-Buyer \(i\), we have that \(\mathbb{P}[b_{i}\leq v_{i}\text{ for }i=1,\ldots,n]=1\), where \(b_{i}\) is Extra-Buyer \(i\)'s final marginal bid._ Regarding the intrinsic price the bidder faces, observe that in the presence of extra-buyers using tCPA format, the final marginal of those tCPA-extra buyers depends on the bidder's marginal bid \(b_{0}\). Therefore, the intrinsic price the bidder faces is a random variable \(p_{0}(\cdot|\omega_{v},b_{0})\), where \(\omega_{v}=(\boldsymbol{\sigma}(\boldsymbol{v}),\boldsymbol{q},p_{B})\) is the random variable containing the bidder's unknown terms. Consequently, the price distribution is a random variable \(H(p|\omega_{v},b_{0})\). This is precisely why the full model is harder to analyze than the model of Sec. 3 - the price distribution for the bidder is now a function of its own bid. We assume that \(H\) satisfies the general version of Assumption 1 for random variables.20 Footnote 20: All assumptions holding for \(\omega_{v}\), are stated up to a zero measure set. **Assumption 5**.: \(H(p|\omega_{v},b_{0})\) _satisfies Assumption 1 for every \(b_{0}\), \(\omega_{v}\)._ ## The value of the tCPA format After the previous prelude, we are now in a position to state our main result: in the non-commitment model, the bidder prefers to use the tCPA format. More precisely, we show that if the auctioneer can readjust the auction rule to set a per-query and personalized-per-bidder reserve prices, then the bidder's expected payoff using the tCPA format is larger than the expected payoff using the mCPA format. We also provide a companion result in the supplementary material showing that when the extra-buyers game is _symmetric_ in a certain sense and the auctioneer is constrained to set a uniform reserve price across queries, then again, the bidder prefers to use the tCPA format. Recall the notation that \(u^{*}(\text{tCPA}|v)\) denotes the expected payoffs when the bidder with valuation \(v\) chooses the tCPA format and submits an optimal target. 
Similarly, \(u^{*}(\text{mCPA}|v)\) is the expected payoff when the bidder chooses the mCPA format and submits an optimal marginal bid. As our main technical result, we show: **Theorem 3**.: _Suppose that the auctioneer can readjust reserve prices to per-query and personalized-per-bidder level. Then, \(u^{*}(tCPA|v)>u^{*}(mCPA|v)\)._ Proof.: Fix the valuation of the bidder to \(v\) and let \(b_{\text{mCPA}}\in(0,v]\) be an arbitrary bid. Suppose that the bidder submits \(b_{\text{mCPA}}\in[0,v]\) using the mCPA format and assume that the auctioneer has chosen the optimal reserve prices (call this scenario "world \(M\)"). We claim that the bidder weakly improves her payoff by bidding a target \(T=b_{\text{mCPA}}\) with the tCPA format (call this scenario "world \(T\)") for every realization \(\omega_{v}\). Furthermore, the inequality is strict for a positive measure of \(\omega_{v}\). This claim proves the theorem, and we show the claim in the following four steps. **Step 1.** Let \(X_{\text{mCPA}}(\omega_{v})\) be the subset of queries that the bidder obtains in world M. Then \[u(b_{\text{mCPA}}|\text{mCPA};v;\omega_{v})=(v-b_{\text{mCPA}})\int_{X_{ \text{mCPA}}(\omega_{v})}q_{0}(x)d\mu(x),\] where \(\mu\) is the measure on the space of queries \(X\). Indeed, by optimality of the auctioneer, the reserve price for the bidder on those queries must be \(r=b_{\text{mCPA}}\). **Step 2.1.** We prove that the revenue of the auctioneer in world \(T\) is at least the revenue in world \(M\). This is because one strategy for the auctioneer in world \(T\) is to simply set a reserve price of \(b_{\text{mCPA}}\) for the bidder (equal to its target). Under this, the situation is identical to world \(M\) for the bidder (due to Step 1) and for every buyer (because the auto-bidder for our bidder bids \(b_{\text{mCPA}}\) on all queries as the reserve is set to the target), yielding the same revenue as in world \(M\)21. Footnote 21: In case of multiplicity of equilibria, we assume that the same bidding equilibrium arises on worlds \(M\) and \(T\) as they are indistinguishable to the agents. **Step 2.2.** We next prove that the revenue that the auctioneer obtains _from the bidder_ in world \(T\) is greater or equal than the revenue from the bidder in world \(M\). Suppose for the sake of a contradiction that this is not true. Due to Step 2.1, this means that the revenue obtained from the extra bidders is higher in world \(T\) than in world \(M\). We now leverage the fact that the auctioneer can set a reserve price at query/bidder level, in order to recreate the situation from world \(T\) in world \(M\). For each extra-buyer, the auctioneer can add a per-query personalized reserve equal to the bid the bidder submits for the query in world \(T\). Moreover, for the queries \(X_{\text{tCPA}}(\omega_{v})\) that the bidder wins in world \(T\), the auctioneer can set a high reserve price on the extra-buyers so that the only feasible candidate is the bidder. With this simulation, the auctioneer can recreate the extra-buyers' bidding behavior from world \(T\) in world \(M\). Thus the auctioneer changed reserves to obtain the same revenue from the extra-buyers in world \(M\) as in world \(T\). In this way (only by changing reserves) one could increase the revenue in world \(M\). This contradicts the optimality of the reserve prices the auctioneer chooses in world \(M\). **Step 3.** It follows from Step 2.2 that the volume of conversions obtained by the bidder is higher in world \(T\) than in world \(M\). 
This is simply because the revenue from the bidder equals the average cost per conversion times the volume of conversions. The average cost per conversion is the same in both worlds, equal to \(b_{\text{\tiny mCPA}}\): in world \(T\) because we assume the target is binding (Assumption 5 and Lemma 1), and in world \(M\) because the reserve is set to the same value. Since the revenue from the bidder is higher in world \(T\), we conclude that the volume of conversions is higher in world \(T\). **Step 4.** We assert that \(u(b_{\text{\tiny mCPA}}|\text{tCPA};v;\omega_{v})\geq u(b_{\text{\tiny mCPA}}|\text{mCPA};v;\omega_{v})\). Indeed, observe that \[u(b_{\text{\tiny mCPA}}|\text{tCPA};v;\omega_{v}) =(v-b_{\text{\tiny mCPA}})\int_{X_{\text{tCPA}}(\omega_{v})}q_{0}(x)d\mu(x)\] \[\geq(v-b_{\text{\tiny mCPA}})\int_{X_{\text{mCPA}}(\omega_{v})}q_{0}(x)d\mu(x)\] \[=u(b_{\text{\tiny mCPA}}|\text{mCPA};v;\omega_{v})\] where the first equality holds because the target of the bidder in world \(T\) is set to \(b_{\text{\tiny mCPA}}\), and the tCPA constraint is binding (from Assumption 5 and Lemma 1). The inequality is from Step 3. The final equality is from Step 1. **Step 5.** In Step 4 we already proved the weak inequality from the theorem; now we prove the strict inequality. We prove that there exists a positive measure of events \(\omega_{v}\) such that \(u(b_{\text{\tiny mCPA}}|\text{tCPA};v;\omega_{v})>u(b_{\text{\tiny mCPA}}|\text{mCPA};v;\omega_{v})\). Indeed, because \(b_{\text{\tiny mCPA}}>0\), a positive measure of events \(\omega_{v}\) exists such that the extra-buyers' valuations are small enough so that for every query \(x\in X\), \(\max_{i=1,\ldots,n}v_{i}q_{i}(x)<b_{\text{\tiny mCPA}}q_{0}(x)\). Hence, by Assumption 4 the auctioneer never allocates queries to extra-buyers. Thus, in world \(M\), the auctioneer's revenue is \(b_{\text{\tiny mCPA}}\cdot H(b_{\text{\tiny mCPA}}|\omega_{v})\). In world \(T\), if the bidder bids a target \(T=b_{\text{\tiny mCPA}}\), then by reducing the bidder's reserve price to \(r<b_{\text{\tiny mCPA}}\) the final marginal bid becomes \(b(b_{\text{\tiny mCPA}};r)>b_{\text{\tiny mCPA}}\). Thus, the auctioneer obtains a revenue of \(T\cdot H(b(b_{\text{\tiny mCPA}};r)|\omega_{v})=b_{\text{\tiny mCPA}}\cdot H(b(b_{\text{\tiny mCPA}};r)|\omega_{v})\). This is strictly larger than the revenue in world \(M\) due to Assumption 5. We conclude that the auctioneer sets a reserve price \(r<b_{\text{\tiny mCPA}}\) in world \(T\). Thus, we obtain \[u(b_{\text{\tiny mCPA}}|\text{tCPA};v;\omega_{v}) =(v-b_{\text{\tiny mCPA}})H(b(b_{\text{\tiny mCPA}};r)|\omega_{v})\] \[>(v-b_{\text{\tiny mCPA}})H(b_{\text{\tiny mCPA}}|\omega_{v})=u(b_{\text{\tiny mCPA}}|\text{mCPA};v;\omega_{v}).\] To conclude the proof of the theorem, from Steps 4 and 5 we derive that \[u(b_{\text{\tiny mCPA}}|\text{mCPA};v) =\mathbb{E}_{\omega_{v}}[u(b_{\text{\tiny mCPA}}|\text{mCPA};v;\omega_{v})]\] \[<\mathbb{E}_{\omega_{v}}[u(b_{\text{\tiny mCPA}}|\text{tCPA};v;\omega_{v})]\leq u^{*}(\text{tCPA}|v).\] Since \(b_{\text{\tiny mCPA}}\) is arbitrary, we get that \(u^{*}(\text{mCPA}|v)<u^{*}(\text{tCPA}|v)\). Theorem 3 strongly relies on the auctioneer's ability to readjust the reserve price at the query and bidder level. However, when the set of queries is large, readjusting the reserve prices for each query may turn out to be too expensive. 
In this kind of situation, when the auctioneer is constrained to set a uniform reserve price, Theorem 4 shows that the bidder still prefers the tCPA format so long as the extra-buyers game is symmetric. We remark that this result does not rely on how the auctioneer readjusts the extra buyers' reserve prices.22 Footnote 22: This weaker condition on how the auctioneer behaves with the extra-buyers allows to include cases like when some extra-buyers are budgeted constrained, and hence, the auctioneer cannot readjust their reserve prices. When dealing with uniform reserve prices, the key technical challenge compared to Theorem 3 is that the auctioneer cannot replicate the effect of the bidder's bidding on the remaining extra-buyers by setting personalized uniform reserve prices. Thus, when the auctioneer readjusts the bidder's reserve price not only the bidder's marginal bid changes but also the marginal bids of extra-buyers using a tCPA format. To tackle this problem, we assume that game for extra-buyers using the tCPA format is symmetric. **Definition 2**.: _The extra-buyers' game is tCPA-symmetric if for every \(\omega_{v}\) and Extra-Buyers \(i,j\) using the tCPA format, we have that their final marginal bids \(b_{i}\), \(b_{j}\) are the same.23_ Footnote 23: A sufficient condition for those marginal bids to be the same is that (i) both bidders have the same target (\(T_{i}=T_{j}\)) and (ii) that for every query \(x\) there exists a query \(x^{\prime}\) such that \(q_{i}(x)=q_{j}(x^{\prime})\). **Remark 3**.: _When there is only one extra-buyer is in the auction, the game is tCPA-symmetric._ We are now in position to present Theorem 4. **Theorem 4**.: _Suppose that the auctioneer is constrained to set a uniform reserve price to the bidder and that the extra-buyers game is tCPA-symmetric. Then, \(u^{\star}(tCPA|v)>u^{\star}(mCPA|v)\)._ The proof of Theorem 4 is similar in spirit to that of Theorem 3, but now we can not use the power of the personalized per-query reserve prices to perform the step where we"simulate world \(T\) in world \(M\)". Instead of such a simulation, we show that in a tCPA-symmetric game, there is a structural property of the bidding behavior in equilibrium, which allows us to prove the result. We defer the proof to Appendix C. ## 5 Conclusion This paper attempts to explain why rational bidders (with quasi-linear utilities) choose bidding formats which at first glance are not optimal. The crux of the argument lies in the notion of lack of commitment - the auctioneer can change (ex post) the rules of the declared auction - once we treat the auction and bidding setting as a multi-stage game. It turns out that in a game without commitment, it is rational for the quasi-linear bidder to choose the seemingly suboptimal tCPA format over the classical mCPA format. We prove this in two different settings: a simpler setting with one bidder and exogenous prices, and then in the general model with endogenous prices in the auction based on bids of other buyers. In the simpler model, we also prove that the auctioneer's revenue is higher with the tCPA format in the no-commitment game compared to the mCPA format, under certain mild conditions. The general model requires more technically involved proofs but the insight is the same: in a world without commitment or with a lack of belief in the other player's commitment, the tCPA format aligns incentives better. We emphasize that the core issue is in fact not the auctioneer's commitment, but the bidder's belief in the same. 
In the simpler model, we also provide bounds on the value of commitment and on the loss due to a bidder's lack of belief in a committed auctioneer. Finally, we note that the problem of commitment may also be overcome in practice via other mechanisms, such as verification in a repeated auction setting or via audits.
2303.16466
Time-, spin-, and angle-resolved photoemission spectroscopy with a 1-MHz 10.7-eV pulse laser
We describe a setup of time-, spin-, and angle-resolved photoemission spectroscopy (tr-SARPES) employing a 10.7-eV ($\lambda$=115.6 nm) pulse laser at 1-MHz repetition rate as a probe photon source. This equipment effectively combines technologies of a high-power Yb:fiber laser, ultraviolet-driven harmonic generation in Xe gas, and a SARPES apparatus equipped with very-low-energy-electron-diffraction (VLEED) spin detectors. A high repetition rate (1 MHz) of the probe laser allows experiments with the photoemission space-charge effects significantly reduced, despite a high flux of 10$^{13}$ photons/s on the sample. The relatively high photon energy (10.7 eV) also brings the capability of observing a wide momentum range that covers the entire Brillouin zone of many materials while ensuring high momentum resolution. The experimental setup overcomes a low efficiency of spin-resolved measurements, which gets even more severe for the pump-probed unoccupied states, and affords for investigating ultrafast electron and spin dynamics of modern quantum materials with energy and time resolutions of 25 meV and 360 fs, respectively.
Kaishu Kawaguchi, Kenta Kuroda, Z. Zhao, S. Tani, A. Harasawa, Y. Fukushima, H. Tanaka, R. Noguchi, T. Iimori, K. Yaji, M. Fujisawa, S. Shin, F. Komori, Y. Kobayashi, Takeshi Kondo
2023-03-29T05:43:47Z
http://arxiv.org/abs/2303.16466v2
# Time-, spin-, and angle-resolved photoemission spectroscopy ###### Abstract We describe a setup of time-, spin-, and angle-resolved photoemission spectroscopy (tr-SARPES) employing a 10.7-eV (\(\lambda\)=115.6 nm) pulse laser at 1-MHz repetition rate as a probe photon source. This equipment effectively combines technologies of a high-power Yb:fiber laser, ultraviolet-driven harmonic generation in Xe gas, and a SARPES apparatus equipped with very-low-energy-electron-diffraction (VLEED) spin detectors. A high repetition rate (1 MHz) of the probe laser allows experiments with the photoemission space-charge effects significantly reduced, despite a high flux of 10\({}^{13}\) photons/s on the sample. The relatively high photon energy (10.7 eV) also brings the capability of observing a wide momentum range that covers the entire Brillouin zone of many materials while ensuring high momentum resolution. The experimental setup overcomes a low efficiency of spin-resolved measurements, which gets even more severe for the pump-probed unoccupied states, and affords for investigating ultrafast electron and spin dynamics of modern quantum materials with energy and time resolutions of 25 meV and 360 fs, respectively. + Footnote †: These authors contributed equally. + Footnote †: These authors contributed equally. ## I Introduction Understanding the temporal evolution not only of electrons in solids optically excited by ultrashort laser pulse but also its spin properties has grown desired as a technological trend toward optical control of electronic spin information, so-called opto-spintronics. The ultrafast spin dynamics have been so far studied mainly by macroscopic optical methods combined with a pump-probe approach using an ultrashort pulse laser at a femtosecond timescale, such as time-resolved Kerr-rotation spectroscopy [1; 2] and circular absorption spectroscopy [3; 4]. On the other hand, time-, spin-, and angle-resolved photoemission emission spectroscopy (tr-SARPES) is a unique and powerful experimental technique, as it allows one to study these physics by directly observing the band structure of solids. It is based on standard angle-resolved photoemission spectroscopy (ARPES) [5; 6] equipped with electron spin detectors [7; 8], directly probing the spin degree of freedom in addition to the electronic band structures [9]. In tr-SARPES, a pump-probe method is further combined, allowing one to trace the temporal evolution of both the photoexcited electronic populations and spin polarizations in the energy-momentum resolved band structure. It has, indeed, been applied for several studies of ultrafast dynamics in ferromagnets [10; 11; 12; 13; 14; 15; 16; 17; 18; 19] as well as spin-orbit coupled materials [20; 21; 22; 23; 24; 25]. Nevertheless, it is still hard to insist that tr-SARPES has been established as a general-purpose experimental tool mainly because of its low efficiency in accumulating data, and thus the further development of this technique is strongly desired. In the past few decades, acute development was seen in tr-ARPES (without spin detection), and it remarkably boosted the research field for ultrafast dynamics in condensed matter physics; for example, this technique was employed to investigate the dynamics of photoexcited electronic states above the Fermi level (\(E_{\rm F}\)) [26; 27; 28] and to control material properties [29; 30; 31; 32; 33; 34]. This situation arose due to continuous technological improvement not only of the electron analyzers but also of new laser sources. 
The tr-SARPES measurements have been conducted so far with a probe pulse laser either of \(\sim\)6 eV generated by non-linear optical crystals or of extreme ultraviolet (EUV) by high-harmonic generation (HHG) in rare gases [35; 36]. In particular, HHG has recently been getting popular as a photon source of tr-ARPES. While deserving energy and momentum resolution, this brings the capability of observing a wide momentum space [37; 38; 39; 40; 41; 42; 43], in contrast to a 6-eV pulse laser generally used, which cannot observe the entire Brillouin zone of solids. Despite the great successes of tr-ARPES, there are two strong limitations for further developing it to the spin-resolved version, tr-SARPES. The first limitation is that the spin detection efficiency is so low (typically only below \(10^{-2}\) times the standard ARPES) and it gets even lower in the photoexcited occupied states that the measurement is very time-consuming [8]. This demands a large number of photons to acquire a sufficient count of photoelectrons. However, the number of photons per pulse cannot be increased since the dense population of photoelectrons excited at once by a pulse leads to space-charge effects which worsen energy and momentum resolutions [45; 46; 47; 48; 49]. This is the second limitation. Ironically, these two limitations are tied with each other, thus hardly solved at the same time. In particular, a strong pulse is, in principle, necessary for HHG at high photon energies. Hence, a reduced repetition rate is required, but it results in low efficiency, serious for tr-SARPES. While a 6-eV pulse laser has an advantage in gaining a high repetition rate, a problem is raised again, which is that the measurements are restricted to a narrow momentum region; here is another dilemma. In this paper, we describe an approach to overcome the above limitations, realizing tr-SARPES measurements reconciling high efficiency in spin detection, a wide momentum space view, and good energy/momentum resolutions. The system combines the SARPES apparatus with a Yb:fiber laser system at the high-repetition rate 1 MHz with a high-power achieved by a chirped pulse amplification (CPA). A 10.7-eV probe pulses are generated as the third harmonic of the UV-THG driver (3.57 eV) in a Xe gas cell, providing a photon flux exceeding \(10^{13}\) photon/s on the sample at a repetition rate of 1 MHz. The VUV source combined with a SARPES end-station brings the capability of performing tr-SARPES experiments with a time resolution of 360 fs and an energy resolution of 25 meV corresponding to the laser bandwidth. The structure of the paper is as follows: Sec. II overviews our apparatus. In Sec. III, the tr-SARPES beamline will be introduced and characterized. We also present the overall performance of our tr-SARPES system by showing the experimental data for several materials in Sec. IV. ## II Instrument layout Figure. 1 shows an overview of our tr-SARPES apparatus with an ultrashort 10.7-eV pulse laser, comprising three major sections: fundamental Yb:fiber laser source, Xe gas cell chamber, and SARPES apparatus. Combining these sections is crucial for the development of the state-of-the-art tr-SARPES apparatus. In the following, the details of each section will be described. ### 1-MHz Yb:fiber laser source An ideal light source for tr-SARPES needs to fulfill certain conditions regarding photon flux, repetition rate, and bandwidth. 
In contrast to standard photoemission experiments, tr-SARPES collects multi-dimensional data: the photoelectron intensity and the spin polarization vector as a function not only of energy and momentum but also of pump-probe delay time. Therefore, one has to accumulate the photoelectron signals from a large number of laser pulses to obtain data of sufficient statistics. However, the intensity of each probe pulse Figure 1: Schematic layout of the tr-SARPES system. A Yb:fiber laser (1.19 eV) is used as a high-power pulsed laser source at 1 MHz. Two \(\beta\)-BaB\({}_{2}\)O\({}_{4}\) (BBO) crystals are used for the second- and third-harmonic generations (SHG and THG, respectively). The THG pulses (3.57 eV) are focused onto a Xe gas cell to generate vacuum ultraviolet (VUV) pulses with a photon energy of 10.7-eV. The 10.7-eV pulses are reflected by a dichroic mirror (DM) in the Xe gas cell and focused by a lithium fluoride (LiF) lens-window onto a sample in an ultra-high vacuum (UHV) chamber equipped with a SARPES analyzer. cannot be increased in tr-SARPES because the photoelectron clouds per pulse lead to strong space-charge effects [45; 46; 47; 48; 49]. The best way to mitigate it is to reduce the number of photons per pulse and instead increase the repetition rate of the pulse laser. For this purpose, Yb:fiber laser could be a better light source than Ti:sapphire laser more commonly used. One of the advantages of Ti:sapphire laser is that a strong pulse with many photons can be generated. Therefore, this type of laser is widely used for the EUV generation (higher photon energy than 10 eV) at a kHz level. However, the EUV generation at higher repetition rates of an MHz level is still technical challenges [37; 38; 39; 40; 41; 42; 43]. In contrast, Yb:fiber system has advantages in that one can make the average power very high [50]. Recently, high pulse energy sufficient for the EUV generation even at a high repetition has been achieved by the Yb:fiber laser. In the present work, we design the instrument of tr-SARPES using a 10.7-eV probe laser generated by the home-built Yb:fiber CPA laser system operating at 1-MHz repetition rate [51; 52]. ### Design of Xe gas chamber The vacuum ultraviolet (VUV) probe pulse laser of 10.7 eV (\(\lambda\)=115.6 nm) is generated via the frequency up-conversion in Xe gas [53; 54; 52]. As a driver for it, we use the ultraviolet laser (UV) at 3.57 eV produced by two BBO crystals as the third-harmonic generation (THG) of the Yb:fiber laser. Notably, the UV gives a good phase matching condition to achieve a high efficiency of \(10^{-4}\) even at a high-repetition-rate of the MHz-level [44]. In our design for the beamline, the Xe gas-cell chamber plays key roles in two ways. One is for the 10.7-eV gen Figure 2: (a) Scaled layout of the 10.7-eV pulse laser beamline for tr-SARPES. M1, M2 : dielectric mirrors that are used for 7-eV laser, M3 : silver mirror for pump laser, M4 : silver mirror used for viewing a sample by a camera, and M5 : MgF\({}_{2}\) coated silver mirror used for adjusting the overlap between pump and probe pulses with a photodiode. The details of the setup are given in Sec. III. (b) Magnified view of the Xe gas cell chamber including DM and the LiF lens-window. (c) Incident-angle dependence of DM reflectance of the UV-THG driver (3.57 eV). (d) Intensity of the 10.7-eV pulse laser transmitted through the LiF lens-window with DM (red) and without DM (blue) [44]. 
(e) The \(p\)-component intensity of 10.7-eV pulse laser that is polarization-converted by the MgF\({}_{2}\) half-wave plate (HWP) plotted as a function of the HWP rotation angle. For proper detection, the \(p\)-component is precisely selected by two Rochon prism beam splitters (PBS) placed in front of and behind HWP as shown in the inset. eration. The second is as a beam separation between the UV-THG driver and VUV pulse. Differently from the set-up for the 11-eV laser previously reported [54, 55, 56], we place an antireflection-coated mirror inside of our gas-cell chamber. It works as a dichroic mirror (DM) separating the UV and VUV lasers. The gas-cell chamber is isolated from the vacuum chamber by a LiF window through which the generated 10.7-eV beam penetrates. LiF can be damaged by intense pulses in the UV range [57, 58], so the transmittance of the 10.7-eV beam would be degraded by the UV driver with the high-power shed on the LiF window. We avoid this issue by placing a DM in front of the LiF window. The idea of placing a DM in the gas cell chamber is the most important in our design of the 10.7-eV beamline for tr-SARPES. ### SARPES apparatus The above optical system is connected to the tr-SARPES end-station [59, 60]. This SARPES system is equipped with the VLEED spin detectors [8], which achieves the efficiency of \(10^{-2}\), 100 times larger than that of the Mott spin detectors [8]. The data quality, therefore, can be much better and the acquisition time gets much shorter. The combination of the high-efficient SARPES end station equipped with the VLEED spin detectors and the photon source of the 10.7-eV pulse laser at the high repetition rate (1 MHz) is a crucial element for realizing our state-of-the-art tr-SARPES apparatus. ## III Beamline setup and specification ### Overview A scaled layout of the pump-probe beamline with the 10.7-eV pulse laser constructed for tr-SARPES is sketched in Fig. 2. As a fundamental light source, we use a home-built Yb:fiber CPA laser set at 1 MHz [52]. It produces 270 fs pulses at 1.19 eV and high-power up to 100 \(\mu\)J/pulse. The whole fiber laser system is placed in a laser booth where temperature and humidity are stabilized. The UV pulse laser at 3.57 eV produced by THG via BBOs is focused onto our Xe gas cell chamber by a UV lens with \(f\)=150 mm (Throlabs, LA4874-UV), generating the 10.7 eV pulses. We use a beam stabilizer (TEM Mestechnik GmbH, Aligna) to compensate for drifts of the UV path before reaching the laser table outside of the laser booth. In the Xe gas-cell chamber (inset of Fig. 2), the generated 10.7-eV pulses are reflected by DM (NTT-AT corp.) at the gracing angle of 10 \({}^{\circ}\) to satisfy the antireflection condition for the \(p\)-polarized UV-THG driver as shown in Fig. 2(c). DM is made of the multilayer coating of SiO\({}_{2}\)/TiO\({}_{2}\) on a fused silica substrate and terminated by a TiO\({}_{2}\) layer on the surface to obtain a high reflection of 10.7 eV. The reflection of 30 % is expected by calculations with the optical constant of TiO\({}_{2}\). The pressure of the inner Xe gas, sealed by the LiF lens-window (Korth Kristalle GmbH) and the fused silica window (Thorlabs, WG42012-UV), is kept at 3.5\(\times\)10\({}^{3}\) Pa to obtain a high efficiency of the 10.7-eV generation. The LiF lens-window is integrated as a piece with a convex lens (\(f\)=150 mm), mildly focusing the 10.7-eV laser onto the sample in the SARPES chamber. 
Importantly, this LiF lens-window also acts as an "anti-achromatic" lens for the UV-THG driver and 10.7-eV pulse lasers: Because the refractive index of LiF for their wavelength is different, only the 10.7-eV laser is focused onto the samples. Thanks to this property, the density of UV-THG driver slightly reflected by DM (\(\sim\)0.1 % reflection) becomes very small at the sample position to be less than 1 pJ/cm\({}^{2}\), which is negligible for pump-probe experiments. Degradation of the LiF transmission by the irradiation of the UV pulses [57, 58] is also avoided by using DM. Figure 2(c) presents the intensities of the transmitted 10.7-eV laser from the LiF lens-window under two different settings monitored by the phototube (Hamamatsu Corp., R1187) [Fig. 2(d)]: one is with DM as mentioned above, and the other is without DM but instead uses a LiF prism to work as a separator of 10.7-eV pulses from the UV-THG driver laser [44]. Without DM, the LiF lens-window is irradiated by the bright UV-THG pulses, showing rapid degradation in the transmittance of the 10.7-eV laser [blue marks in Fig. 2(d)]. In contrast, the transmittance becomes stable (red marks) when using DM, indicating the LiF lens-window is protected from irradiation damage. One of the advantages of using the 10.7-eV pulse laser as the probe is that one can easily control the light polarization with a birefringence crystal, MgF\({}_{2}\). The evaluation of the half-waveplate (HWP) of MgF\({}_{2}\) (Kogakugiken Corp.) is presented in Fig. 2(e), where the intensity of the transmitted 10.7 eV pulse laser detected by the phototube is plotted as a function of HWP angle. For this test measurement, only the \(p\)-polarization component is detected (see the inset). By changing the HWP angle from 0\({}^{\circ}\) to 45\({}^{\circ}\), the transmitted intensity draws a sinusoidal curve, confirming that light polarization changes from \(p\) to \(s\). The polarization extinction ratio is estimated to be 100:5, which is sufficient for studying the light polarization dependence of photoelectron spin-polarization [61, 62]. The beamline is connected to the SARPES chamber [59, 60] with differential pumping systems keeping the UHV condition in the order of 10\({}^{-9}\) Pa. The hemispherical electron analyzer is ScientaOmicron DA30L equipped with double VLEED spin detectors, which permit a vector analysis of photoelectron spin-polarizations in three dimensions [63]. A molecular beam epitaxy (MBE) chamber is connected to this apparatus, and grown thin films can be transferred _in situ_ to the measurement chamber. In front of the SARPES chamber, a metallic mirror (M5) is placed. This is mounted on a linear translation stage and can be inserted in the beam path to reflect the pump pulse and the residual UV-THG driver passing through the Xe chamber. This setup allows one to adjust for the pump and probe pulses to have a temporal overlap while checked by the photodiode (EOT Corp., ET-2030). For this aim, the light polarization of the UV-THG driver needs to be changed to \(s\)-polarization as being reflected at DM and detected with the photodiode. In or der to achieve a spatial overlap of the pump and probe beams, we use a pinhole of polycrystalline gold (Au) films deposited on an oxygen-free copper plate [64]. The 10.7 eV probe beam is aligned to the pinhole position by manually controlling a mirror mount in front of the gas cell chamber as monitoring with a long-focus camera via M4. 
Once the 10.7-eV beam is fixed at the pinhole, the pump beam position is optimized, while observed by a camera, by controlling a motorized mirror mount of M3. In addition to the set-up for the pump-probe tr-SARPES, the beamline also connects a chamber installing KBe\({}_{2}\)BO\({}_{3}\)F\({}_{2}\) (KBBF), which is isolated by a CaF\({}_{2}\) window. This enables measurements of the high-resolution SARPES [59, 60] with the 7-eV laser, a second-harmonic of the frequency-tripled Nd:YVO\({}_{4}\) laser [65]. The 7-eV laser is carried to the SARPES chamber by two dielectric mirrors (M1 and M2) and focused onto the samples by the CaF\({}_{2}\) lens (Kogaku Corp.) mounted on the UHV gate valve. The linear translation stage carrying M2 is motorized, which enables switching the light source between 10.7 eV and 7 eV for the measurements. ### Specifications We present the performance of the beamline of the 10.7-eV pulse laser; photon flux, spot size, and energy/time resolutions. To obtain a good performance of these, the LiF lens-window used at the Xe gas cell chamber needs to be carefully designed. The transmittance of LiF crystal for the 10.7 eV is only 40 % mainly due to Fresnel reflection. Thus, one should reduce the number of LiF optics in the overall beamline to secure the photon flux of the transmitted beam. The photon flux gets higher with the all-in-one type LiF lens-window [Fig. 3(b)] than in the case when the LiF window and lens are separated [Fig. 3(a)]. In our beamline [Fig. 2], by using this LiF lens-window [Fig. 3(b)], we achieve to obtain a high photon flux \(\sim\)2.6\(\times\)10\({}^{13}\) photon/s (corresponding to \(\sim\)50 \(\mu\)W) of the 10.7-eV pulses at the sample position in the SARPES chamber. The photon flux was determined by the phototube (Hamamatsu corp., R1187). To achieve this high photon flux, we use 10 W of the fundamental 1040 nm and the accordingly generated 1 W of UV-THG driver. LiF has ionic crystallinity and is, therefore, so weak against physical stress that the LiF window could be easily bent by the Xe gas pressure. In fact, we observe such a distortion occurring for the LiF window thinner than 2 mm. This distortion in the window induces light scattering and hence worsens the beam focus. It is noticeable in ARPES signals demonstrated in Fig. 3: poor quality of a probe laser affected by a LiF thin window [Fig. 3(a)] causes the broadening of the ARPES spectra for the Shockley surface state of Au(111) [66] [Fig. 3(c)]. The beam quality is improved by using the LiF lens-window of 2 mm thickness, and consequently, ARPES spectra become high quality [Fig. 3(d)]. In Fig. 3(e), we examine the beam profile for the setup using the LiF lens-window [Fig. 3(b)] by plotting photoemission intensities for a knife edge of a polycrystalline gold films deposited on an oxygen-free copper plate. The beam spot size is estimated to be 70 \(\mu\)m in the full width at half maximum. In order to evaluate the energy resolution of ARPES and SARPES with the 10.7-eV pulse laser, we measure a polycrystalline bismuth films deposited on an oxygen-free copper plate [Fig. 4(a) and (b)]. The angle-integrated energy distribution curves (EDCs) were fit with a Fermi-Dirac function convoluted with a Gaussian function (black lines). The instrumental energy resolution (\(\Delta E_{\text{inst}}\)) is determined to be 22 meV from the width of the obtained Gaussian function. 
This resolution is expressed as \(\Delta E_{\text{inst}}\)=\(\sqrt{\Delta E_{\text{ana}}^{2}+\Delta E_{\text{probe}}^{2}}\) with the energy resolution of the electron analyzer (\(\Delta E_{\text{ana}}\)) and the bandwidth of the 10.7-eV laser pulse (\(\Delta E_{\text{probe}}\)). \(\Delta E_{\text{ana}}\) is expected to be rather small (\(\sim\)3 meV) according to \(\Delta E_{\text{ana}}=E_{p}w/(2R)\): \(R\) is the radius of the analyzer (200 mm), \(E_{p}\) is the pass energy (5 eV), and \(w\) is the entrance slit (0.2 mm). Figure 3: Two different setups of LiF lens and window for the Xe gas cell. (a) LiF window and lens are separated. (b) These two are integrated as a piece (LiF lens-window). (c,d) ARPES results of Shockley surface state at (111) surface of gold single crystal and the momentum distribution curves at Fermi energy, measured at \(T\)=28 K for the setups of (a,b), respectively. (e) Angle-integrated photoemission signals obtained in the lens-window setup from the knife edge of polycrystalline gold films deposited on an oxygen-free copper plate. The spot size of the 10.7-eV pulse laser at the sample position is estimated to be 70 \(\mu\)m from the full width at half maximum. Therefore, \(\Delta E_{\text{probe}}\) and the corresponding bandwidth \(\Delta\lambda\) are determined to be 22 meV and 0.24 nm, respectively. To determine the energy resolution for SARPES, we measure the same bismuth films used for ARPES [Fig. 4(b)]. Compared to ARPES [Fig. 4(a)], the photoelectron count rate is much lower in SARPES, since this technique detects photoelectrons reflected (or spin-resolved) by a VLEED spin target. Note that the VLEED spin detector is a highly efficient spin detector, but still, the reflection probability of the target is only \(\sim\)10 % [8]. To compensate for the reduced count rates, we increase the analyzer slit up to 0.8 mm and select an aperture size of 3\(\times\)0.5 mm [59] for spin detection. These, however, do not worsen the energy resolution much. \(\Delta E_{\text{inst}}\) in SARPES is estimated to be 25 meV [Fig. 4(b)], which is comparable to that of ARPES. Even in SARPES, \(\Delta E_{\text{probe}}\) is the primary factor deciding the energy resolution. The Fourier-transform-limited pulse duration (\(\Delta t_{\text{probe}}\)) of the bandwidth (\(\Delta E_{\text{probe}}\)=22 meV) is estimated as 83 fs according to the following equation: \(\Delta E_{\text{probe}}\Delta t_{\text{probe}}=4\hbar\ln 2\) (\(\hbar\) is the reduced Planck constant). However, LiF crystals used for the lens-window could cause a chirp for the 10.7-eV pulse, decreasing the experimental time resolution. To determine the resolution including the chirp effect, we observe in Fig. 4(c) the pump-probed signal of HOPG (a cleaved surface), which provides a fast response limited by the experimental time resolution [64]. Excited electron signals are seen in a very short time around \(t\)=0 ps when the overlap of the pump and probe pulses is large. The temporal profile at \(\sim\)0.6 eV above \(E_{\text{F}}\) shows a Gaussian shape [Fig. 4(d)], verifying that the fast response is resolution-limited. By fitting the profile with a Gaussian function, the time resolution (\(\Delta t\)) is estimated to be 360 fs. Since the duration of the pump pulse (1.19 eV), determined by an autocorrelation method, is 270 fs, the duration of the 10.7-eV probe pulse is estimated to be 230 fs. 
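The headline specifications quoted in this section can be cross-checked with a few lines of arithmetic; the sketch below (Python, illustrative only; all constants are standard physical constants or numbers taken from the text) reproduces the average probe power, photons per pulse, analyzer resolution, combined energy resolution, transform-limited pulse duration, and the deconvolved probe-pulse duration.

```python
import numpy as np

HBAR_EV_S = 6.582119569e-16     # reduced Planck constant (eV s)
EV_TO_J = 1.602176634e-19       # 1 eV in joules

# photon flux at the sample -> average power and photons per pulse at a 1-MHz repetition rate
flux, rep_rate, hv = 2.6e13, 1.0e6, 10.7        # photons/s, Hz, eV
print("average power  :", flux * hv * EV_TO_J * 1e6, "uW")   # ~45 uW, consistent with the ~50 uW quoted
print("photons / pulse:", flux / rep_rate)                   # ~2.6e7

# analyzer contribution dE_ana = E_p * w / (2R) and the combined instrumental resolution
E_p, w, R = 5.0, 0.2e-3, 200e-3                 # pass energy (eV), entrance slit (m), analyzer radius (m)
dE_ana = E_p * w / (2 * R)
dE_probe = 22e-3                                # probe bandwidth (eV)
print("dE_ana  :", dE_ana * 1e3, "meV")                       # ~2.5 meV
print("dE_inst :", np.hypot(dE_ana, dE_probe) * 1e3, "meV")   # ~22 meV, dominated by the bandwidth

# transform-limited duration of a 22-meV Gaussian pulse: dE * dt = 4 * hbar * ln 2
print("dt_TL   :", 4 * HBAR_EV_S * np.log(2) / dE_probe * 1e15, "fs")   # ~83 fs

# probe duration deconvolved from the 360-fs cross-correlation and the 270-fs pump (Gaussian widths)
print("dt_probe:", np.sqrt(360.0 ** 2 - 270.0 ** 2), "fs")              # ~238 fs, i.e. the ~230 fs quoted
```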
## IV Performance ### Bismuth thin films and grey arsenic: wide \(k\) maps An advantage of using a 10.7-eV laser is that one can access electronic structures over a wide momentum-range while keeping a high momentum resolution. Here, we demonstrate it by the ARPES measurements on bismuth that is known to host a metallic surface state on the (111) surface dispersing over the entire surface Brillouin zone [67]. The bismuth thin films [68; 69] is grown \(\sim\)100 bilayers _in situ_ on a \(n\)-type Si(111) Figure 5: (a) The Fermi surfaces of bismuth thin films grown on a Si(111) substrate obtained at \(T\)=50 K. The data were taken with the sample rotation and using an electron deflector equipped in the photoelectron analyzer [59]. (b) Pump-probed ARPES image of the surface state along \(\bar{\Gamma}\)-\(\bar{\text{M}}\) on the (111) face of grey arsenic at 300 fs after excitation by pump pulse (1.19 eV). The data were measured at \(T\)=18 K. The color scale of the upper image is changed from that of the lower image, to increase the visibility of the unoccupied states above \(E_{\text{F}}\) more clearly. Figure 4: Characterization of the energy and time resolutions. (a) ARPES spectrum for an evaporated polycrystalline bismuth films measured at \(T\)=35 K. The fitting curve (dashed line) estimates the energy resolution to be 22 meV. (b) Spin-ARPES spectrum for the same polycrystalline bismuth films as used in (a), detected by the VLEED spin detector [59]. The fitting curve (dashed line) estimates the energy resolution to be 25 meV. (c) Pump-probed signal of highly oriented pyrolytic graphite (HOPG) acquired at \(T\)=83 K. Here, the intensities of angle-integrated photoelectrons around \(\Gamma\) are presented. The 1.19-eV pulse is used for the pump. (d) Time evolution of pump-probed signals at \(E\)\(-\)\(E_{\text{F}}\)=0.6\(\pm\)0.1 eV extracted from (d). The experimental time resolution is estimated to be 360 fs from the full width at half maximum of the Gauss function fitted to the data (dashed line). This resolution corresponds to the time duration of a temporal cross-correlation between the pump and probe. substrate in the MBE chamber, and it is transferred to the SARPES chamber. Figure 5(a) represents the Fermi surface map obtained by the 10.7-eV laser. Thanks to high photon energy, one can observe the in-plane dispersion up to the zone boundary at M (\(k_{\mathrm{x}}\)\(\simeq\)0.8 A\({}^{-1}\)), which cannot be accessed by a conventional laser such as 6-eV [64] and 7-eV [59] laser, and even far beyond it over a wider momentum space. The capability of observing a wide momentum space with a 10.7-eV laser [55; 56] is also very useful in pump-probe experiments. In Fig. 5(b), we demonstrate it by measuring a cleaved (111) surface of a grey arsenic single crystal [70]. A nearly free-electron parabolic band for the Shockley surface state is observed around \(\bar{\Gamma}\) on the occupied side below \(E_{\mathrm{F}}\) [Fig. 5(b)]. When excited by the intense femtosecond pump pulses (1.19 eV, 0.1 \(\mu\)J/pulse), electrons are redistributed into the states above \(E_{\mathrm{F}}\), imaging the unoccupied part of the surface band in a wide momentum view. It is clearly exhibited that the band dispersion deviates from the simple parabolic shape by going away from \(\bar{\Gamma}\) and eventually disappears by merging into the bulk continuum states. 
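A rough estimate of the accessible in-plane momentum shows why the 10.7-eV probe reaches the Bi(111) \(\bar{\text{M}}\) point at \(k_{x}\simeq 0.8\) Å\({}^{-1}\) while 6-eV and 7-eV lasers do not. The sketch below uses the standard free-electron relation \(k_{\parallel}=0.5123\sqrt{E_{\text{kin}}}\sin\theta\) (in Å\({}^{-1}\) with \(E_{\text{kin}}\) in eV) and assumes a typical work function of 4.3 eV, a value not quoted in the text.

```python
import numpy as np

PREFACTOR = 0.5123      # sqrt(2 m_e) / hbar in units of A^-1 per sqrt(eV)
WORK_FUNC = 4.3         # assumed typical work function (eV); not quoted in the text
K_M = 0.8               # Bi(111) surface Brillouin-zone boundary M (A^-1), from the text

for hv in (6.0, 7.0, 10.7):
    E_kin = hv - WORK_FUNC                      # kinetic energy of photoelectrons emitted from E_F
    k_ceiling = PREFACTOR * np.sqrt(E_kin)      # largest k_par, reached only at grazing (90 deg) emission
    s = K_M / k_ceiling                         # sin(theta) required to reach the M point
    angle = f"{np.degrees(np.arcsin(s)):.0f} deg" if s < 1 else "unreachable"
    print(f"hv = {hv:4.1f} eV   k_ceiling = {k_ceiling:.2f} A^-1   emission angle to reach M: {angle}")
# At 10.7 eV the M point is reached near 38 deg emission; at 7 eV it would require grazing emission
# beyond 70 deg, and at 6 eV it lies outside the photoemission horizon altogether.
```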
### Bi\({}_{2}\)Se\({}_{3}\): tr-SARPES Next, we perform tr-SARPES for a prototype topological insulator Bi\({}_{2}\)Se\({}_{3}\) with a spin-polarized metallic surface state forming a Dirac cone on the (111) surface [Fig. 6] [71; 72]. For this experiment, a clean (111) surface of a bulk Bi\({}_{2}\)Se\({}_{3}\) crystal is prepared by cleavage in the UHV chamber. Figure 6(a) plots typical tr-ARPES images taken at several delay times between the pump (1.19 eV, 0.12 \(\mu\)J/pulse) and probe pulses. One can trace ultrafast carrier dynamics of photoexcited surface bands evolving within sub-picoseconds. The Dirac dispersion above \(E_{\mathrm{F}}\) is populated just after the pump excitation, and the populated states gradually decay after a few picoseconds. Now, we turn the measurements to the spin-resolved mode to show the capability of our tr-SARPES setup. Figures 6(b) and (c) present the pump-probed ARPES image and the corresponding spin polarization at 0.5 ps after the arrival of the pump pulse. To clearly show the band dispersions above \(E_{\mathrm{F}}\), these spectral images are divided by the Fermi-Dirac distribution at electronic temperature (\(T_{\mathrm{el}}\)=700 K, \(k_{\mathrm{B}}\)T=60 meV) transiently increased by the pump excitation. The spin-polarization map [Fig. 6(c)] contains 66\(\times\)52 energy-angle points measured at the energy and angular steps of 10 meV and 0.3 \({}^{\circ}\), respectively. The energy and angular resolutions were each set to be 25 meV and 2.8 \({}^{\circ}\), and the acquisition times of the overall image were 8 hours. In Fig. 6(d), we plot the spin-resolved EDC (the top panel) and the corresponding spin-polarization (the bottom panel) measured at the emission angle \(\theta\)=4.2 \({}^{\circ}\). In these panels, the data taken with and without optical excitation by the pump Figure 6: (a) The tr-ARPES images of Bi\({}_{2}\)Se\({}_{3}\) at various delay times between the pump (1.19 eV) and probe (10.7 eV) pulses. The data were measured at \(T\)=23 K, and each map was obtained within 5 min. (b) Representative ARPES map at 0.5 ps after the pump. The image is divided by the Fermi-Dirac function at an electronic temperature (\(k_{\mathrm{B}}\)\(T\)=60 meV) to enhance the visibility above \(E_{\mathrm{F}}\). (c) The spin polarization intensity map corresponding to (b), expressed with two-dimensional color code [62]. This SARPES image was obtained within 8 hours. (d) SARPES spectra (upper panel) and the corresponding spin-polarization spectra (lower panel) taken at \(+4\)\({}^{\circ}\) without (squares) and with (circles) pump excitation. The accumulation times for each data set without/with the pump are 30 min. pulse are overlayed to be compared. In the equilibrium state without pumping (opened marks), the spectral intensity above \(E_{\mathrm{F}}\) is negligibly small because of the Fermi-energy cutoff. Although the spin-polarization is detected slightly above \(E_{\mathrm{F}}\), it is due to the thermally excited electrons and limited up to \(E\)\(-\)\(E_{\mathrm{F}}\)\(\sim\)0.1 eV, corresponding to the measurement temperature (\(T\)=20 K). On the other hand, the spectral weight above \(E_{\mathrm{F}}\) becomes dominant in the pump-probed data owing to transient populations excited up to higher binding energies. This leads to visualizing the spin-polarized band on the unoccupied side [Fig. 6(c)], which cannot be accessed by a conventional SARPES. 
### Sb\({}_{2}\)Te\({}_{3}\): spin map of the transient population We perform tr-SARPES for another topological insulator Sb\({}_{2}\)Te\({}_{3}\). This compound has a Dirac cone expected for a spin-polarized topological surface state. In contrast to Bi\({}_{2}\)Se\({}_{3}\), however, the crystal of Sb\({}_{2}\)Te\({}_{3}\) has naturally \(p\)-type characters [73, 74, 75]. Hence, most part of the Dirac dispersion is located above \(E_{\mathrm{F}}\). It has been indeed visualized by tr-ARPES [76, 77]. However, the spin-polarization expected for the topological surface state has not been experimentally revealed. Our tr-SARPES is capable of performing it for the first time. Figures 7(a) and (b) show the band maps without and with pumping (1.19 eV, 0.2 \(\mu\)J/pulse). The pump-probed data is recorded at 0.5 ps after the arrival of the pump pulse when the intensity of the excited electrons reaches maximum [76, 77]. It is demonstrated that the Dirac cone hidden on the unoccupied side at the equilibrium state entirely emerges by the optical excitation. From the data, one can identify the Dirac point to locate at \(\sim\)180 meV above \(E_{\mathrm{F}}\) and the upper Dirac cone to be absorbed into the continuum of the bulk states around 0.4 eV. By switching to the spin-detection mode, here, we present the first visualization of the spin-polarization for the surface Dirac cone of Sb\({}_{2}\)Te\({}_{3}\), unraveling its helical spin texture [71]. Figures 7(d) and (e) plot the spin-resolved EDCs (the upper panels) at fixed emission angles of \(\pm\)3 \({}^{\circ}\) and the corresponding spin-polarization (the lower panels). The data clearly shows a sign inversion of the spin polarization about the normal emission angle (\(\theta\)=0 \({}^{\circ}\)) at \(E\)\(-\)\(E_{\mathrm{F}}\)=0.3 eV and 0.1 eV each corresponding to the lower and upper Dirac cones. Furthermore, the spin direction is opposite between the lower and Figure 7: (a) Pump-probe ARPES image of Sb\({}_{2}\)Te\({}_{3}\) without pump (the left panel) and at 0.5 ps after the pump (the right panel; \(h\alpha_{\mathrm{pump}}\)=1.19 eV). (b) The tr-SARPES image of Sb\({}_{2}\)Te\({}_{3}\) at 500 fs after the arrival of the pump pulse. The data were measured at \(T\)=25 K. (c) Schematic of our tr-SARPES results illustrating bulk valence and conduction bands (BVB and BCB), topological surface state (TSS), Dirac point (DP), and unoccupied surface resonance (SR) [21]. TSS has a spin-helical Dirac cone, and SR is also spin-polarized. BCB and BVB show continuum states projected on the surface (shaded area). (d,e) Spin-resolved EDCs (upper panel) and the corresponding spin-polarization (lower panel) at \(-\)3 \({}^{\circ}\) and \(+\)3 \({}^{\circ}\), respectively. upper Dirac cones. These results represent helical spin textures characteristic of the topological surface states, which flip direction across the Dirac point [see Fig. 7(c)]. Our data of spin polarization also show another sign inversion around \(E-E_{\text{F}}\)=0.6 eV. This is most likely originating from a spin-polarized surface resonance state, as schematically depicted in Fig. 7(c), which is compatible with that recently observed in Bi\({}_{2}\)Se\({}_{3}\)[21]. Our new tr-SARPES equipment, therefore, entirely unveils the unoccupied spin-polarized states in Sb\({}_{2}\)Te\({}_{3}\) for the first time. 
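The division by a Fermi-Dirac distribution used to enhance the unoccupied states in Fig. 6(b,c) amounts to a one-line operation on each spectrum; the sketch below applies it to a toy energy distribution curve with the 700-K electronic temperature quoted in the text (the array names and the toy spectrum are illustrative assumptions).

```python
import numpy as np

K_B = 8.617333262e-5            # Boltzmann constant (eV / K)

def fermi_dirac(E, T_el, E_F=0.0):
    """Fermi-Dirac occupation at electronic temperature T_el; energies in eV."""
    return 1.0 / (np.exp((E - E_F) / (K_B * T_el)) + 1.0)

T_el = 700.0                    # transient electronic temperature after the pump (K), k_B*T ~ 60 meV
E = np.linspace(-0.3, 0.5, 801)                          # E - E_F grid (eV)

# toy pump-probed EDC: a feature at +0.2 eV suppressed by the thermal occupation function
edc = (0.2 + np.exp(-((E - 0.2) / 0.05) ** 2)) * fermi_dirac(E, T_el)

occupation = fermi_dirac(E, T_el)
enhanced = np.where(occupation > 0.05, edc / occupation, np.nan)   # cut off where the division blows up
# 'enhanced' recovers the spectral weight above E_F, as done for the maps of Fig. 6(b) and (c).
```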
## V Conclusion In conclusion, we introduced a setup for tr-SARPES apparatus with the 10.7-eV pulse laser at 1-MHz repetition rate capable of band mapping over a wide momentum space at a high momentum resolution. The light source is generated by the frequency up-conversion to 10.7 eV by the UV-THG driver and Xe-plasma, based on the Yb:fiber CPA laser system operating at 1 MHz. Combining this laser source with high-efficient VLEED spin detectors, tr-SARPES is feasible at the energy resolution of 25 meV and the time resolution of 360 fs. This setup can map the band structure of photoexcited states together with the spin polarization information and follow its temporal evolution on a femtosecond time scale. The newly developed tr-SARPES apparatus will be a powerful experimental tool for future studies of spin-polarized electronic states in modern quantum materials. ## VI Acknowledgments We acknowledge Federico Cilento, Yukiaki Ishida, Suguru Ito for fruitful comments and discussions, and D. Matsumaru, S. Sakuragi T. Suzuki, T. Kurihara for supports of experiments. This work was supported by the JSPS KAKENHI (Grants Numbers. JP21H04439), by MEXT Q-LEAP (Grant No. JPMXS0118068681), and by MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Basic Science for Emergence and Functionality in Quantum Matter Innovative Strongly-Correlated Electron Science by Integration of "Fugaku" and Frontier Experiments, JPMXP1020200104) (Project ID: hp200132/hp210163/hp220166).